{"name":"databricks","displayName":"Databricks","version":"1.87.0","description":"A Pulumi package for creating and managing databricks cloud resources.","keywords":["pulumi","databricks","category/infrastructure"],"homepage":"https://www.pulumi.com","license":"Apache-2.0","attribution":"This Pulumi package is based on the [`databricks` Terraform Provider](https://github.com/databricks/terraform-provider-databricks).","repository":"https://github.com/pulumi/pulumi-databricks","publisher":"Pulumi","meta":{"moduleFormat":"(.*)(?:/[^/]*)"},"language":{"csharp":{"packageReferences":{"Pulumi":"3.*"},"compatibility":"tfbridge20","respectSchemaVersion":true},"go":{"importBasePath":"github.com/pulumi/pulumi-databricks/sdk/go/databricks","generateResourceContainerTypes":true,"generateExtraInputTypes":true,"respectSchemaVersion":true},"nodejs":{"packageDescription":"A Pulumi package for creating and managing databricks cloud resources.","readme":"\u003e This provider is a derived work of the [Terraform Provider](https://github.com/databricks/terraform-provider-databricks)\n\u003e distributed under [MPL 2.0](https://www.mozilla.org/en-US/MPL/2.0/). If you encounter a bug or missing feature,\n\u003e first check the [`pulumi-databricks` repo](https://github.com/pulumi/pulumi-databricks/issues); however, if that doesn't turn up anything,\n\u003e please consult the source [`terraform-provider-databricks` repo](https://github.com/databricks/terraform-provider-databricks/issues).","devDependencies":{"@types/mime":"^2.0.0","@types/node":"^10.0.0"},"compatibility":"tfbridge20","disableUnionOutputTypes":true,"respectSchemaVersion":true},"python":{"readme":"\u003e This provider is a derived work of the [Terraform Provider](https://github.com/databricks/terraform-provider-databricks)\n\u003e distributed under [MPL 2.0](https://www.mozilla.org/en-US/MPL/2.0/). 
If you encounter a bug or missing feature,\n\u003e first check the [`pulumi-databricks` repo](https://github.com/pulumi/pulumi-databricks/issues); however, if that doesn't turn up anything,\n\u003e please consult the source [`terraform-provider-databricks` repo](https://github.com/databricks/terraform-provider-databricks/issues).","compatibility":"tfbridge20","respectSchemaVersion":true,"pyproject":{"enabled":true}}},"config":{"variables":{"accountId":{"type":"string"},"actionsIdTokenRequestToken":{"type":"string"},"actionsIdTokenRequestUrl":{"type":"string"},"audience":{"type":"string"},"authType":{"type":"string"},"azureClientId":{"type":"string"},"azureClientSecret":{"type":"string","secret":true},"azureEnvironment":{"type":"string"},"azureLoginAppId":{"type":"string"},"azureTenantId":{"type":"string"},"azureUseMsi":{"type":"boolean"},"azureWorkspaceResourceId":{"type":"string"},"clientId":{"type":"string"},"clientSecret":{"type":"string","secret":true},"clusterId":{"type":"string"},"configFile":{"type":"string"},"databricksCliPath":{"type":"string"},"databricksIdTokenFilepath":{"type":"string"},"debugHeaders":{"type":"boolean"},"debugTruncateBytes":{"type":"integer"},"disableOauthRefreshToken":{"type":"boolean"},"experimentalIsUnifiedHost":{"type":"boolean"},"googleCredentials":{"type":"string","secret":true},"googleServiceAccount":{"type":"string"},"host":{"type":"string"},"httpTimeoutSeconds":{"type":"integer"},"metadataServiceUrl":{"type":"string","secret":true},"oauthCallbackPort":{"type":"integer"},"oidcTokenEnv":{"type":"string"},"password":{"type":"string","secret":true},"profile":{"type":"string"},"rateLimit":{"type":"integer"},"retryTimeoutSeconds":{"type":"integer"},"scopes":{"type":"array","items":{"type":"string"}},"serverlessComputeId":{"type":"string"},"skipVerify":{"type":"boolean"},"token":{"type":"string","secret":true},"username":{"type":"string"},"warehouseId":{"type":"string"},"workspaceId":{"type":"string"}}},"types":{"databricks:index/AccessControlRuleSetGrantRule:AccessControlRuleSetGrantRule":{"properties":{"principals":{"type":"array","items":{"type":"string"},"description":"a list of principals who are granted a role. 
The following format is supported:\n* `users/{username}` (also exposed as \u003cspan pulumi-lang-nodejs=\"`aclPrincipalId`\" pulumi-lang-dotnet=\"`AclPrincipalId`\" pulumi-lang-go=\"`aclPrincipalId`\" pulumi-lang-python=\"`acl_principal_id`\" pulumi-lang-yaml=\"`aclPrincipalId`\" pulumi-lang-java=\"`aclPrincipalId`\"\u003e`acl_principal_id`\u003c/span\u003e attribute of \u003cspan pulumi-lang-nodejs=\"`databricks.User`\" pulumi-lang-dotnet=\"`databricks.User`\" pulumi-lang-go=\"`User`\" pulumi-lang-python=\"`User`\" pulumi-lang-yaml=\"`databricks.User`\" pulumi-lang-java=\"`databricks.User`\"\u003e`databricks.User`\u003c/span\u003e resource).\n* `groups/{groupname}` (also exposed as \u003cspan pulumi-lang-nodejs=\"`aclPrincipalId`\" pulumi-lang-dotnet=\"`AclPrincipalId`\" pulumi-lang-go=\"`aclPrincipalId`\" pulumi-lang-python=\"`acl_principal_id`\" pulumi-lang-yaml=\"`aclPrincipalId`\" pulumi-lang-java=\"`aclPrincipalId`\"\u003e`acl_principal_id`\u003c/span\u003e attribute of \u003cspan pulumi-lang-nodejs=\"`databricks.Group`\" pulumi-lang-dotnet=\"`databricks.Group`\" pulumi-lang-go=\"`Group`\" pulumi-lang-python=\"`Group`\" pulumi-lang-yaml=\"`databricks.Group`\" pulumi-lang-java=\"`databricks.Group`\"\u003e`databricks.Group`\u003c/span\u003e resource).\n* `servicePrincipals/{applicationId}` (also exposed as \u003cspan pulumi-lang-nodejs=\"`aclPrincipalId`\" pulumi-lang-dotnet=\"`AclPrincipalId`\" pulumi-lang-go=\"`aclPrincipalId`\" pulumi-lang-python=\"`acl_principal_id`\" pulumi-lang-yaml=\"`aclPrincipalId`\" pulumi-lang-java=\"`aclPrincipalId`\"\u003e`acl_principal_id`\u003c/span\u003e attribute of \u003cspan pulumi-lang-nodejs=\"`databricks.ServicePrincipal`\" pulumi-lang-dotnet=\"`databricks.ServicePrincipal`\" pulumi-lang-go=\"`ServicePrincipal`\" pulumi-lang-python=\"`ServicePrincipal`\" pulumi-lang-yaml=\"`databricks.ServicePrincipal`\" pulumi-lang-java=\"`databricks.ServicePrincipal`\"\u003e`databricks.ServicePrincipal`\u003c/span\u003e resource).\n"},"role":{"type":"string","description":"Role to be granted. The supported roles are listed below. 
For more information about these roles, refer to [service principal roles](https://docs.databricks.com/security/auth-authz/access-control/service-principal-acl.html#service-principal-roles), [group roles](https://docs.databricks.com/en/administration-guide/users-groups/groups.html#manage-roles-on-an-account-group-using-the-workspace-admin-settings-page), [marketplace roles](https://docs.databricks.com/en/marketplace/get-started-provider.html#assign-the-marketplace-admin-role) or [budget policy permissions](https://docs.databricks.com/aws/en/admin/usage/budget-policies#manage-budget-policy-permissions), depending on the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e defined:\n* `accounts/{account_id}/ruleSets/default`\n* `roles/marketplace.admin` - Databricks Marketplace administrator.\n* `roles/billing.admin` - Billing administrator.\n* `roles/tagPolicy.creator` - Creator of tag policies.\n* `roles/tagPolicy.manager` - Manager of tag policies.\n* `roles/tagPolicy.assigner` - Assigner of tag policies.\n* `accounts/{account_id}/servicePrincipals/{service_principal_application_id}/ruleSets/default`\n* `roles/servicePrincipal.manager` - Manager of a service principal.\n* `roles/servicePrincipal.user` - User of a service principal.\n* `accounts/{account_id}/groups/{group_id}/ruleSets/default`\n* `roles/group.manager` - Manager of a group.\n* `accounts/{account_id}/budgetPolicies/{budget_policy_id}/ruleSets/default`\n* `roles/budgetPolicy.manager` - Manager of a budget policy.\n* `roles/budgetPolicy.user` - User of a budget policy.\n* `accounts/{account_id}/tagPolicies/{tag_policy_id}/ruleSets/default`\n* `roles/tagPolicy.manager` - Manager of a specific tag policy.\n* `roles/tagPolicy.assigner` - Assigner of a specific tag policy.\n"}},"type":"object","required":["role"]},"databricks:index/AccountFederationPolicyOidcPolicy:AccountFederationPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. 
Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"The claim that contains the subject of the token. If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/AccountNetworkPolicyEgress:AccountNetworkPolicyEgress":{"properties":{"networkAccess":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgressNetworkAccess:AccountNetworkPolicyEgressNetworkAccess","description":"The access policy enforced for egress traffic to the internet\n"}},"type":"object"},"databricks:index/AccountNetworkPolicyEgressNetworkAccess:AccountNetworkPolicyEgressNetworkAccess":{"properties":{"allowedInternetDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination:AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination"},"description":"List of internet destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"allowedStorageDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination:AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination"},"description":"List of storage destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"policyEnforcement":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgressNetworkAccessPolicyEnforcement:AccountNetworkPolicyEgressNetworkAccessPolicyEnforcement","description":"Optional. When\u003cspan pulumi-lang-nodejs=\" policyEnforcement \" pulumi-lang-dotnet=\" PolicyEnforcement \" pulumi-lang-go=\" policyEnforcement \" pulumi-lang-python=\" policy_enforcement \" pulumi-lang-yaml=\" policyEnforcement \" pulumi-lang-java=\" policyEnforcement \"\u003e policy_enforcement \u003c/span\u003eis not provided, we default to ENFORCE_MODE_ALL_SERVICES\n"},"restrictionMode":{"type":"string","description":"The restriction mode that controls how serverless workloads can access the internet. 
Possible values are: `FULL_ACCESS`, `RESTRICTED_ACCESS`\n"}},"type":"object","required":["restrictionMode"]},"databricks:index/AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination:AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination":{"properties":{"destination":{"type":"string","description":"The internet destination to which access will be allowed. Format dependent on the destination type\n"},"internetDestinationType":{"type":"string","description":"The type of internet destination. Currently only DNS_NAME is supported. Possible values are: `DNS_NAME`\n"}},"type":"object"},"databricks:index/AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination:AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination":{"properties":{"azureStorageAccount":{"type":"string","description":"The Azure storage account name\n"},"azureStorageService":{"type":"string","description":"The Azure storage service type (blob, dfs, etc.)\n"},"bucketName":{"type":"string"},"region":{"type":"string"},"storageDestinationType":{"type":"string","description":"The type of storage destination. Possible values are: `AWS_S3`, `AZURE_STORAGE`, `GOOGLE_CLOUD_STORAGE`\n"}},"type":"object"},"databricks:index/AccountNetworkPolicyEgressNetworkAccessPolicyEnforcement:AccountNetworkPolicyEgressNetworkAccessPolicyEnforcement":{"properties":{"dryRunModeProductFilters":{"type":"array","items":{"type":"string"},"description":"When empty, it means dry run for all products.\nWhen non-empty, it means dry run for specific products and for the other products, they will run in enforced mode\n"},"enforcementMode":{"type":"string","description":"The mode of policy enforcement. ENFORCED blocks traffic that violates policy,\nwhile DRY_RUN only logs violations without blocking. When not specified,\ndefaults to ENFORCED. 
Possible values are: `DRY_RUN`, `ENFORCED`\n"}},"type":"object"},"databricks:index/AccountSettingUserPreferenceV2BooleanVal:AccountSettingUserPreferenceV2BooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingUserPreferenceV2EffectiveBooleanVal:AccountSettingUserPreferenceV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingUserPreferenceV2EffectiveStringVal:AccountSettingUserPreferenceV2EffectiveStringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AccountSettingUserPreferenceV2StringVal:AccountSettingUserPreferenceV2StringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AccountSettingV2AibiDashboardEmbeddingAccessPolicy:AccountSettingV2AibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"]},"databricks:index/AccountSettingV2AibiDashboardEmbeddingApprovedDomains:AccountSettingV2AibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspace:AccountSettingV2AutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean"},"enabled":{"type":"boolean"},"enablementDetails":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:AccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails"},"maintenanceWindow":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:AccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"The feature is unavailable if the customer doesn't have enterprise tier\n"}},"type":"object"},"databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule"}},"type":"object"},"databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, 
`THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime"}},"type":"object"},"databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer"},"minutes":{"type":"integer"}},"type":"object"},"databricks:index/AccountSettingV2BooleanVal:AccountSettingV2BooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"]},"databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean"},"enabled":{"type":"boolean"},"enablementDetails":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails"},"maintenanceWindow":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"The feature is unavailable if the customer doesn't have enterprise tier\n"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"Possible values are: `EVERY_WEEK`, 
`FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, `THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer"},"minutes":{"type":"integer"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveBooleanVal:AccountSettingV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveIntegerVal:AccountSettingV2EffectiveIntegerVal":{"properties":{"value":{"type":"integer"}},"type":"object"},"databricks:index/AccountSettingV2EffectivePersonalCompute:AccountSettingV2EffectivePersonalCompute":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AccountSettingV2EffectiveRestrictWorkspaceAdmins:AccountSettingV2EffectiveRestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"]},"databricks:index/AccountSettingV2EffectiveStringVal:AccountSettingV2EffectiveStringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AccountSettingV2IntegerVal:AccountSettingV2IntegerVal":{"properties":{"value":{"type":"integer"}},"type":"object"},"databricks:index/AccountSettingV2PersonalCompute:AccountSettingV2PersonalCompute":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AccountSettingV2RestrictWorkspaceAdmins:AccountSettingV2RestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"]},"databricks:index/AccountSettingV2StringVal:AccountSettingV2StringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy:AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"Configured embedding policy. Possible values are `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`.\n"}},"type":"object","required":["accessPolicyType"]},"databricks:index/AibiDashboardEmbeddingAccessPolicySettingProviderConfig:AibiDashboardEmbeddingAccessPolicySettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains:AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"},"description":"the list of approved domains. 
To allow all subdomains for a given domain, use a wildcard symbol (`*`) before the domain name, i.e., `*.databricks.com` will allow to embed into any site under the `databricks.com`.\n"}},"type":"object","required":["approvedDomains"]},"databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig:AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AlertCondition:AlertCondition":{"properties":{"emptyResultState":{"type":"string","description":"Alert state if the result is empty (`UNKNOWN`, `OK`, `TRIGGERED`)\n"},"op":{"type":"string","description":"Operator used for comparison in alert evaluation. (Enum: `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `EQUAL`, `NOT_EQUAL`, `IS_NULL`)\n"},"operand":{"$ref":"#/types/databricks:index/AlertConditionOperand:AlertConditionOperand","description":"Name of the column from the query result to use for comparison in alert evaluation:\n"},"threshold":{"$ref":"#/types/databricks:index/AlertConditionThreshold:AlertConditionThreshold","description":"Threshold value used for comparison in alert evaluation:\n"}},"type":"object","required":["op","operand"]},"databricks:index/AlertConditionOperand:AlertConditionOperand":{"properties":{"column":{"$ref":"#/types/databricks:index/AlertConditionOperandColumn:AlertConditionOperandColumn","description":"Block describing the column from the query result to use for comparison in alert evaluation:\n"}},"type":"object","required":["column"]},"databricks:index/AlertConditionOperandColumn:AlertConditionOperandColumn":{"properties":{"name":{"type":"string","description":"Name of the column.\n"}},"type":"object","required":["name"]},"databricks:index/AlertConditionThreshold:AlertConditionThreshold":{"properties":{"value":{"$ref":"#/types/databricks:index/AlertConditionThresholdValue:AlertConditionThresholdValue","description":"actual value used in comparison (one of the attributes is required):\n"}},"type":"object","required":["value"]},"databricks:index/AlertConditionThresholdValue:AlertConditionThresholdValue":{"properties":{"boolValue":{"type":"boolean","description":"boolean value (\u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e) to compare against boolean results.\n"},"doubleValue":{"type":"number","description":"double value to compare against integer and double results.\n"},"stringValue":{"type":"string","description":"string value to compare against string results.\n"}},"type":"object"},"databricks:index/AlertProviderConfig:AlertProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AlertV2EffectiveRunAs:AlertV2EffectiveRunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"The email of an active workspace user. Can only set this field to their own email\n"}},"type":"object"},"databricks:index/AlertV2Evaluation:AlertV2Evaluation":{"properties":{"comparisonOperator":{"type":"string","description":"Operator used for comparison in alert evaluation. Possible values are: `EQUAL`, `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `IS_NOT_NULL`, `IS_NULL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `NOT_EQUAL`\n"},"emptyResultState":{"type":"string","description":"Alert state if result is empty. Please avoid setting this field to be `UNKNOWN` because `UNKNOWN` state is planned to be deprecated. Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"lastEvaluatedAt":{"type":"string","description":"(string) - Timestamp of the last evaluation\n"},"notification":{"$ref":"#/types/databricks:index/AlertV2EvaluationNotification:AlertV2EvaluationNotification","description":"User or Notification Destination to notify when alert is triggered\n"},"source":{"$ref":"#/types/databricks:index/AlertV2EvaluationSource:AlertV2EvaluationSource","description":"Source column from result to use to evaluate alert\n"},"state":{"type":"string","description":"(string) - Latest state of alert evaluation. Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"threshold":{"$ref":"#/types/databricks:index/AlertV2EvaluationThreshold:AlertV2EvaluationThreshold","description":"Threshold to user for alert evaluation, can be a column or a value\n"}},"type":"object","required":["comparisonOperator","source"],"language":{"nodejs":{"requiredOutputs":["comparisonOperator","lastEvaluatedAt","source","state"]}}},"databricks:index/AlertV2EvaluationNotification:AlertV2EvaluationNotification":{"properties":{"effectiveNotifyOnOk":{"type":"boolean"},"effectiveRetriggerSeconds":{"type":"integer"},"notifyOnOk":{"type":"boolean","description":"Whether to notify alert subscribers when alert returns back to normal\n"},"retriggerSeconds":{"type":"integer","description":"Number of seconds an alert waits after being triggered before it is allowed to send another notification.\nIf set to 0 or omitted, the alert will not send any further notifications after the first trigger\nSetting this value to 1 allows the alert to send a notification on every evaluation where the condition is met, effectively making it always retrigger for notification purposes\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/AlertV2EvaluationNotificationSubscription:AlertV2EvaluationNotificationSubscription"}}},"type":"object","language":{"nodejs":{"requiredOutputs":["effectiveNotifyOnOk","effectiveRetriggerSeconds"]}}},"databricks:index/AlertV2EvaluationNotificationSubscription:AlertV2EvaluationNotificationSubscription":{"properties":{"destinationId":{"type":"string"},"userEmail":{"type":"string"}},"type":"object"},"databricks:index/AlertV2EvaluationSource:AlertV2EvaluationSource":{"properties":{"aggregation":{"type":"string","description":"If not set, the behavior is equivalent to using `First row` in the UI. 
Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string"},"name":{"type":"string"}},"type":"object","required":["name"]},"databricks:index/AlertV2EvaluationThreshold:AlertV2EvaluationThreshold":{"properties":{"column":{"$ref":"#/types/databricks:index/AlertV2EvaluationThresholdColumn:AlertV2EvaluationThresholdColumn"},"value":{"$ref":"#/types/databricks:index/AlertV2EvaluationThresholdValue:AlertV2EvaluationThresholdValue"}},"type":"object"},"databricks:index/AlertV2EvaluationThresholdColumn:AlertV2EvaluationThresholdColumn":{"properties":{"aggregation":{"type":"string","description":"If not set, the behavior is equivalent to using `First row` in the UI. Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string"},"name":{"type":"string"}},"type":"object","required":["name"]},"databricks:index/AlertV2EvaluationThresholdValue:AlertV2EvaluationThresholdValue":{"properties":{"boolValue":{"type":"boolean"},"doubleValue":{"type":"number"},"stringValue":{"type":"string"}},"type":"object"},"databricks:index/AlertV2ProviderConfig:AlertV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AlertV2RunAs:AlertV2RunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"The email of an active workspace user. Can only set this field to their own email\n"}},"type":"object"},"databricks:index/AlertV2Schedule:AlertV2Schedule":{"properties":{"pauseStatus":{"type":"string","description":"Indicate whether this schedule is paused or not. Possible values are: `PAUSED`, `UNPAUSED`\n"},"quartzCronSchedule":{"type":"string","description":"A cron expression using quartz syntax that specifies the schedule for this pipeline.\nShould use the quartz format described here: http://www.quartz-scheduler.org/documentation/quartz-2.1.7/tutorials/tutorial-lesson-06.html\n"},"timezoneId":{"type":"string","description":"A Java timezone id. 
The schedule will be resolved using this timezone.\nThis will be combined with the\u003cspan pulumi-lang-nodejs=\" quartzCronSchedule \" pulumi-lang-dotnet=\" QuartzCronSchedule \" pulumi-lang-go=\" quartzCronSchedule \" pulumi-lang-python=\" quartz_cron_schedule \" pulumi-lang-yaml=\" quartzCronSchedule \" pulumi-lang-java=\" quartzCronSchedule \"\u003e quartz_cron_schedule \u003c/span\u003eto determine the schedule.\nSee https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-conf-mgmt-set-timezone.html for details\n"}},"type":"object","required":["quartzCronSchedule","timezoneId"]},"databricks:index/AppActiveDeployment:AppActiveDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/AppActiveDeploymentDeploymentArtifacts:AppActiveDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/AppActiveDeploymentEnvVar:AppActiveDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/AppActiveDeploymentGitSource:AppActiveDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/AppActiveDeploymentStatus:AppActiveDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the app.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["createTime","creator","deploymentArtifacts","status","updateTime"]}}},"databricks:index/AppActiveDeploymentDeploymentArtifacts:AppActiveDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/AppActiveDeploymentEnvVar:AppActiveDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of the app. The name must contain only lowercase alphanumeric characters and hyphens. 
It must be unique within the workspace.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/AppActiveDeploymentGitSource:AppActiveDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/AppActiveDeploymentGitSourceGitRepository:AppActiveDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["gitRepository","resolvedCommit"]}}},"databricks:index/AppActiveDeploymentGitSourceGitRepository:AppActiveDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"]},"databricks:index/AppActiveDeploymentStatus:AppActiveDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["message","state"]}}},"databricks:index/AppAppStatus:AppAppStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["message","state"]}}},"databricks:index/AppComputeStatus:AppComputeStatus":{"properties":{"activeInstances":{"type":"integer"},"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["activeInstances","message","state"]}}},"databricks:index/AppGitRepository:AppGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"]},"databricks:index/AppPendingDeployment:AppPendingDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/AppPendingDeploymentDeploymentArtifacts:AppPendingDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/AppPendingDeploymentEnvVar:AppPendingDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/AppPendingDeploymentGitSource:AppPendingDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/AppPendingDeploymentStatus:AppPendingDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the app.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["createTime","creator","deploymentArtifacts","status","updateTime"]}}},"databricks:index/AppPendingDeploymentDeploymentArtifacts:AppPendingDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/AppPendingDeploymentEnvVar:AppPendingDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of the app. The name must contain only lowercase alphanumeric characters and hyphens. 
It must be unique within the workspace.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/AppPendingDeploymentGitSource:AppPendingDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/AppPendingDeploymentGitSourceGitRepository:AppPendingDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["gitRepository","resolvedCommit"]}}},"databricks:index/AppPendingDeploymentGitSourceGitRepository:AppPendingDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"]},"databricks:index/AppPendingDeploymentStatus:AppPendingDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["message","state"]}}},"databricks:index/AppProviderConfig:AppProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/AppResource:AppResource":{"properties":{"database":{"$ref":"#/types/databricks:index/AppResourceDatabase:AppResourceDatabase","description":"attribute\n"},"description":{"type":"string","description":"The description of the resource.\n\nExactly one of the following attributes must be provided:\n"},"experiment":{"$ref":"#/types/databricks:index/AppResourceExperiment:AppResourceExperiment"},"genieSpace":{"$ref":"#/types/databricks:index/AppResourceGenieSpace:AppResourceGenieSpace","description":"attribute\n"},"job":{"$ref":"#/types/databricks:index/AppResourceJob:AppResourceJob","description":"attribute\n"},"name":{"type":"string","description":"The name of the resource.\n"},"secret":{"$ref":"#/types/databricks:index/AppResourceSecret:AppResourceSecret","description":"attribute\n"},"servingEndpoint":{"$ref":"#/types/databricks:index/AppResourceServingEndpoint:AppResourceServingEndpoint","description":"attribute\n"},"sqlWarehouse":{"$ref":"#/types/databricks:index/AppResourceSqlWarehouse:AppResourceSqlWarehouse","description":"attribute\n"},"ucSecurable":{"$ref":"#/types/databricks:index/AppResourceUcSecurable:AppResourceUcSecurable","description":"attribute (see the [API docs](https://docs.databricks.com/api/workspace/apps/create#resources-uc_securable) for full list of supported UC objects)\n"}},"type":"object","required":["name"]},"databricks:index/AppResourceDatabase:AppResourceDatabase":{"properties":{"databaseName":{"type":"string","description":"The name of database.\n"},"instanceName":{"type":"string","description":"The name of database instance.\n"},"permission":{"type":"string","description":"Permission to grant on database. 
Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["databaseName","instanceName","permission"]},"databricks:index/AppResourceExperiment:AppResourceExperiment":{"properties":{"experimentId":{"type":"string"},"permission":{"type":"string"}},"type":"object","required":["experimentId","permission"]},"databricks:index/AppResourceGenieSpace:AppResourceGenieSpace":{"properties":{"name":{"type":"string","description":"The name of Genie Space.\n"},"permission":{"type":"string"},"spaceId":{"type":"string","description":"The unique ID of Genie Space.\n"}},"type":"object","required":["name","permission","spaceId"]},"databricks:index/AppResourceJob:AppResourceJob":{"properties":{"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"permission":{"type":"string","description":"Permissions to grant on the Job. Supported permissions are: `CAN_MANAGE`, `IS_OWNER`, `CAN_MANAGE_RUN`, `CAN_VIEW`.\n"}},"type":"object","required":["id","permission"]},"databricks:index/AppResourceSecret:AppResourceSecret":{"properties":{"key":{"type":"string","description":"Key of the secret to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on the secret scope. For secrets, only one permission is allowed. Permission must be one of: `READ`, `WRITE`, `MANAGE`.\n"},"scope":{"type":"string","description":"Scope of the secret to grant permission on.\n"}},"type":"object","required":["key","permission","scope"]},"databricks:index/AppResourceServingEndpoint:AppResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"Name of the serving endpoint to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on the serving endpoint. Supported permissions are: `CAN_MANAGE`, `CAN_QUERY`, `CAN_VIEW`.\n"}},"type":"object","required":["name","permission"]},"databricks:index/AppResourceSqlWarehouse:AppResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"Id of the SQL warehouse to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on the SQL warehouse. Supported permissions are: `CAN_MANAGE`, `CAN_USE`, `IS_OWNER`.\n"}},"type":"object","required":["id","permission"]},"databricks:index/AppResourceUcSecurable:AppResourceUcSecurable":{"properties":{"permission":{"type":"string","description":"Permissions to grant on UC securable, i.e. `READ_VOLUME`, `WRITE_VOLUME`.\n"},"securableFullName":{"type":"string","description":"the full name of UC securable, i.e. `my-catalog.my-schema.my-volume`.\n"},"securableType":{"type":"string","description":"the type of UC securable, i.e. `VOLUME`.\n"}},"type":"object","required":["permission","securableFullName","securableType"]},"databricks:index/AppsSettingsCustomTemplateManifest:AppsSettingsCustomTemplateManifest":{"properties":{"description":{"type":"string","description":"The description of the template\n"},"name":{"type":"string","description":"The name of the template. 
It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"resourceSpecs":{"type":"array","items":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpec:AppsSettingsCustomTemplateManifestResourceSpec"}},"version":{"type":"integer","description":"The manifest schema version, for now only 1 is allowed\n"}},"type":"object","required":["name","version"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpec:AppsSettingsCustomTemplateManifestResourceSpec":{"properties":{"description":{"type":"string","description":"The description of the template\n"},"experimentSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecExperimentSpec:AppsSettingsCustomTemplateManifestResourceSpecExperimentSpec"},"jobSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecJobSpec:AppsSettingsCustomTemplateManifestResourceSpecJobSpec"},"name":{"type":"string","description":"The name of the template. It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"secretSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecSecretSpec:AppsSettingsCustomTemplateManifestResourceSpecSecretSpec"},"servingEndpointSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec:AppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec"},"sqlWarehouseSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec:AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec"},"ucSecurableSpec":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec:AppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec"}},"type":"object","required":["name"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecExperimentSpec:AppsSettingsCustomTemplateManifestResourceSpecExperimentSpec":{"properties":{"permission":{"type":"string"}},"type":"object","required":["permission"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecJobSpec:AppsSettingsCustomTemplateManifestResourceSpecJobSpec":{"properties":{"permission":{"type":"string"}},"type":"object","required":["permission"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecSecretSpec:AppsSettingsCustomTemplateManifestResourceSpecSecretSpec":{"properties":{"permission":{"type":"string"}},"type":"object","required":["permission"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec:AppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec":{"properties":{"permission":{"type":"string"}},"type":"object","required":["permission"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec:AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec":{"properties":{"permission":{"type":"string"}},"type":"object","required":["permission"]},"databricks:index/AppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec:AppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec":{"properties":{"permission":{"type":"string"},"securableType":{"type":"string","description":"Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, 
`VOLUME`\n"}},"type":"object","required":["permission","securableType"]},"databricks:index/AppsSettingsCustomTemplateProviderConfig:AppsSettingsCustomTemplateProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AppsSpaceProviderConfig:AppsSpaceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AppsSpaceResource:AppsSpaceResource":{"properties":{"database":{"$ref":"#/types/databricks:index/AppsSpaceResourceDatabase:AppsSpaceResourceDatabase"},"description":{"type":"string","description":"The description of the app space\n"},"experiment":{"$ref":"#/types/databricks:index/AppsSpaceResourceExperiment:AppsSpaceResourceExperiment"},"genieSpace":{"$ref":"#/types/databricks:index/AppsSpaceResourceGenieSpace:AppsSpaceResourceGenieSpace"},"job":{"$ref":"#/types/databricks:index/AppsSpaceResourceJob:AppsSpaceResourceJob"},"name":{"type":"string","description":"(string) - The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"secret":{"$ref":"#/types/databricks:index/AppsSpaceResourceSecret:AppsSpaceResourceSecret"},"servingEndpoint":{"$ref":"#/types/databricks:index/AppsSpaceResourceServingEndpoint:AppsSpaceResourceServingEndpoint"},"sqlWarehouse":{"$ref":"#/types/databricks:index/AppsSpaceResourceSqlWarehouse:AppsSpaceResourceSqlWarehouse"},"ucSecurable":{"$ref":"#/types/databricks:index/AppsSpaceResourceUcSecurable:AppsSpaceResourceUcSecurable"}},"type":"object","required":["name"]},"databricks:index/AppsSpaceResourceDatabase:AppsSpaceResourceDatabase":{"properties":{"databaseName":{"type":"string"},"instanceName":{"type":"string"},"permission":{"type":"string"}},"type":"object","required":["databaseName","instanceName","permission"]},"databricks:index/AppsSpaceResourceExperiment:AppsSpaceResourceExperiment":{"properties":{"experimentId":{"type":"string"},"permission":{"type":"string"}},"type":"object","required":["experimentId","permission"]},"databricks:index/AppsSpaceResourceGenieSpace:AppsSpaceResourceGenieSpace":{"properties":{"name":{"type":"string","description":"(string) - The name of the app space. 
The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"permission":{"type":"string"},"spaceId":{"type":"string"}},"type":"object","required":["name","permission","spaceId"]},"databricks:index/AppsSpaceResourceJob:AppsSpaceResourceJob":{"properties":{"id":{"type":"string","description":"(string) - The unique identifier of the app space\n"},"permission":{"type":"string"}},"type":"object","required":["id","permission"]},"databricks:index/AppsSpaceResourceSecret:AppsSpaceResourceSecret":{"properties":{"key":{"type":"string","description":"Key of the secret to grant permission on\n"},"permission":{"type":"string"},"scope":{"type":"string","description":"Scope of the secret to grant permission on\n"}},"type":"object","required":["key","permission","scope"]},"databricks:index/AppsSpaceResourceServingEndpoint:AppsSpaceResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"(string) - The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"permission":{"type":"string"}},"type":"object","required":["name","permission"]},"databricks:index/AppsSpaceResourceSqlWarehouse:AppsSpaceResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"(string) - The unique identifier of the app space\n"},"permission":{"type":"string"}},"type":"object","required":["id","permission"]},"databricks:index/AppsSpaceResourceUcSecurable:AppsSpaceResourceUcSecurable":{"properties":{"permission":{"type":"string"},"securableFullName":{"type":"string"},"securableType":{"type":"string","description":"Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, `VOLUME`\n"}},"type":"object","required":["permission","securableFullName","securableType"]},"databricks:index/AppsSpaceStatus:AppsSpaceStatus":{"properties":{"message":{"type":"string","description":"(string) - Message providing context about the current state\n"},"state":{"type":"string","description":"(string) - The state of the app space. Possible values are: `SPACE_ACTIVE`, `SPACE_CREATING`, `SPACE_DELETED`, `SPACE_DELETING`, `SPACE_ERROR`, `SPACE_UPDATING`\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["message","state"]}}},"databricks:index/ArtifactAllowlistArtifactMatcher:ArtifactAllowlistArtifactMatcher":{"properties":{"artifact":{"type":"string","description":"The artifact path or maven coordinate.\n"},"matchType":{"type":"string","description":"The pattern matching type of the artifact. Only `PREFIX_MATCH` is supported.\n"}},"type":"object","required":["artifact","matchType"]},"databricks:index/ArtifactAllowlistProviderConfig:ArtifactAllowlistProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean"},"enabled":{"type":"boolean"},"enablementDetails":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails"},"maintenanceWindow":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindow:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindow"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean"}},"type":"object","required":["enabled"],"language":{"nodejs":{"requiredOutputs":["enabled","enablementDetails"]}}},"databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean"},"unavailableForDisabledEntitlement":{"type":"boolean"},"unavailableForNonEnterpriseTier":{"type":"boolean"}},"type":"object"},"databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindow:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule"}},"type":"object"},"databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string"},"frequency":{"type":"string"},"windowStartTime":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime"}},"type":"object","required":["dayOfWeek","frequency"]},"databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer"},"minutes":{"type":"integer"}},"type":"object","required":["hours","minutes"]},"databricks:index/AutomaticClusterUpdateWorkspaceSettingProviderConfig:AutomaticClusterUpdateWorkspaceSettingProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/BudgetAlertConfiguration:BudgetAlertConfiguration":{"properties":{"actionConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetAlertConfigurationActionConfiguration:BudgetAlertConfigurationActionConfiguration"},"description":"List of action configurations to take when the budget alert is triggered. 
Consists of the following fields:\n"},"alertConfigurationId":{"type":"string"},"quantityThreshold":{"type":"string","description":"The threshold for the budget alert to determine if it is in a triggered state. The number is evaluated based on \u003cspan pulumi-lang-nodejs=\"`quantityType`\" pulumi-lang-dotnet=\"`QuantityType`\" pulumi-lang-go=\"`quantityType`\" pulumi-lang-python=\"`quantity_type`\" pulumi-lang-yaml=\"`quantityType`\" pulumi-lang-java=\"`quantityType`\"\u003e`quantity_type`\u003c/span\u003e.\n"},"quantityType":{"type":"string","description":"The way to calculate cost for this budget alert. This is what\u003cspan pulumi-lang-nodejs=\" quantityThreshold \" pulumi-lang-dotnet=\" QuantityThreshold \" pulumi-lang-go=\" quantityThreshold \" pulumi-lang-python=\" quantity_threshold \" pulumi-lang-yaml=\" quantityThreshold \" pulumi-lang-java=\" quantityThreshold \"\u003e quantity_threshold \u003c/span\u003eis measured in. (Enum: `LIST_PRICE_DOLLARS_USD`)\n"},"timePeriod":{"type":"string","description":"The time window of usage data for the budget. (Enum: `MONTH`)\n"},"triggerType":{"type":"string","description":"The evaluation method to determine when this budget alert is in a triggered state. (Enum: `CUMULATIVE_SPENDING_EXCEEDED`)\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["alertConfigurationId"]}}},"databricks:index/BudgetAlertConfigurationActionConfiguration:BudgetAlertConfigurationActionConfiguration":{"properties":{"actionConfigurationId":{"type":"string"},"actionType":{"type":"string","description":"The type of action to take when the budget alert is triggered. (Enum: `EMAIL_NOTIFICATION`)\n"},"target":{"type":"string","description":"The target of the action. For `EMAIL_NOTIFICATION`, this is the email address to send the notification to.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["actionConfigurationId"]}}},"databricks:index/BudgetFilter:BudgetFilter":{"properties":{"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetFilterTag:BudgetFilterTag"},"description":"List of tags to filter by. Consists of the following fields:\n"},"workspaceId":{"$ref":"#/types/databricks:index/BudgetFilterWorkspaceId:BudgetFilterWorkspaceId","description":"Filter by workspace ID (if empty, include usage all usage for this account). Consists of the following fields:\n"}},"type":"object"},"databricks:index/BudgetFilterTag:BudgetFilterTag":{"properties":{"key":{"type":"string","description":"The key of the tag.\n"},"value":{"$ref":"#/types/databricks:index/BudgetFilterTagValue:BudgetFilterTagValue","description":"Consists of the following fields:\n"}},"type":"object"},"databricks:index/BudgetFilterTagValue:BudgetFilterTagValue":{"properties":{"operator":{"type":"string","description":"The operator to use for the filter. (Enum: `IN`)\n"},"values":{"type":"array","items":{"type":"string"},"description":"The values to filter by.\n"}},"type":"object"},"databricks:index/BudgetFilterWorkspaceId:BudgetFilterWorkspaceId":{"properties":{"operator":{"type":"string","description":"The operator to use for the filter. 
(Enum: `IN`)\n"},"values":{"type":"array","items":{"type":"integer"},"description":"The values to filter by.\n"}},"type":"object"},"databricks:index/BudgetPolicyCustomTag:BudgetPolicyCustomTag":{"properties":{"key":{"type":"string","description":"The key of the tag.\n- Must be unique among all custom tags of the same policy\n- Cannot be “budget-policy-name”, “budget-policy-id” or \"budget-policy-resolution-result\" -\nthese tags are preserved\n"},"value":{"type":"string","description":"The value of the tag\n"}},"type":"object","required":["key"]},"databricks:index/CatalogEffectivePredictiveOptimizationFlag:CatalogEffectivePredictiveOptimizationFlag":{"properties":{"inheritedFromName":{"type":"string"},"inheritedFromType":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["value"]},"databricks:index/CatalogProviderConfig:CatalogProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/CatalogProvisioningInfo:CatalogProvisioningInfo":{"properties":{"state":{"type":"string"}},"type":"object"},"databricks:index/CatalogWorkspaceBindingProviderConfig:CatalogWorkspaceBindingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/ClusterAutoscale:ClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer","description":"The maximum number of workers to which the cluster can scale up when overloaded.\u003cspan pulumi-lang-nodejs=\" maxWorkers \" pulumi-lang-dotnet=\" MaxWorkers \" pulumi-lang-go=\" maxWorkers \" pulumi-lang-python=\" max_workers \" pulumi-lang-yaml=\" maxWorkers \" pulumi-lang-java=\" maxWorkers \"\u003e max_workers \u003c/span\u003emust be strictly greater than min_workers.\n\nTo create a [single node cluster](https://docs.databricks.com/clusters/single-node.html), set \u003cspan pulumi-lang-nodejs=\"`isSingleNode \" pulumi-lang-dotnet=\"`IsSingleNode \" pulumi-lang-go=\"`isSingleNode \" pulumi-lang-python=\"`is_single_node \" pulumi-lang-yaml=\"`isSingleNode \" pulumi-lang-java=\"`isSingleNode \"\u003e`is_single_node \u003c/span\u003e= true` and `kind = \"CLASSIC_PREVIEW\"` for the cluster. 
Single-node clusters are suitable for small, non-distributed workloads like single-node machine learning use-cases.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst latestLts = databricks.getSparkVersion({\n    longTermSupport: true,\n});\nconst singleNode = new databricks.Cluster(\"single_node\", {\n    clusterName: \"Single Node\",\n    sparkVersion: latestLts.then(latestLts =\u003e latestLts.id),\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    autoterminationMinutes: 20,\n    isSingleNode: true,\n    kind: \"CLASSIC_PREVIEW\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsmallest = databricks.get_node_type(local_disk=True)\nlatest_lts = databricks.get_spark_version(long_term_support=True)\nsingle_node = databricks.Cluster(\"single_node\",\n    cluster_name=\"Single Node\",\n    spark_version=latest_lts.id,\n    node_type_id=smallest.id,\n    autotermination_minutes=20,\n    is_single_node=True,\n    kind=\"CLASSIC_PREVIEW\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var latestLts = Databricks.GetSparkVersion.Invoke(new()\n    {\n        LongTermSupport = true,\n    });\n\n    var singleNode = new Databricks.Cluster(\"single_node\", new()\n    {\n        ClusterName = \"Single Node\",\n        SparkVersion = latestLts.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 20,\n        IsSingleNode = true,\n        Kind = \"CLASSIC_PREVIEW\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatestLts, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{\n\t\t\tLongTermSupport: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"single_node\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Single Node\"),\n\t\t\tSparkVersion:           pulumi.String(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.String(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tIsSingleNode:           pulumi.Bool(true),\n\t\t\tKind:                   pulumi.String(\"CLASSIC_PREVIEW\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport 
java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        final var latestLts = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .longTermSupport(true)\n            .build());\n\n        var singleNode = new Cluster(\"singleNode\", ClusterArgs.builder()\n            .clusterName(\"Single Node\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .isSingleNode(true)\n            .kind(\"CLASSIC_PREVIEW\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  singleNode:\n    type: databricks:Cluster\n    name: single_node\n    properties:\n      clusterName: Single Node\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      isSingleNode: true\n      kind: CLASSIC_PREVIEW\nvariables:\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n  latestLts:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments:\n        longTermSupport: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"minWorkers":{"type":"integer","description":"The minimum number of workers to which the cluster can scale down when underutilized. It is also the initial number of workers the cluster will have after creation.\n"}},"type":"object"},"databricks:index/ClusterAwsAttributes:ClusterAwsAttributes":{"properties":{"availability":{"type":"string","description":"Availability type used for all subsequent nodes past the \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e ones. Valid values are `SPOT`, `SPOT_WITH_FALLBACK` and `ON_DEMAND`. Note: If \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e is zero, this availability type will be used for the entire cluster. Backend default value is `SPOT_WITH_FALLBACK` and could change in the future\n"},"ebsVolumeCount":{"type":"integer","description":"The number of volumes launched for each instance. You can choose up to 10 volumes. This feature is only enabled for supported node types. Legacy node types cannot specify custom EBS volumes. For node types with no instance store, at least one EBS volume needs to be specified; otherwise, cluster creation will fail. These EBS volumes will be mounted at /ebs0, /ebs1, and etc. Instance store volumes will be mounted at /local_disk0, /local_disk1, and etc. If EBS volumes are attached, Databricks will configure Spark to use only the EBS volumes for scratch storage because heterogeneously sized scratch devices can lead to inefficient disk utilization. If no EBS volumes are attached, Databricks will configure Spark to use instance store volumes. 
If EBS volumes are specified, then the Spark configuration spark.local.dir will be overridden.\n"},"ebsVolumeIops":{"type":"integer","description":"If using gp3 volumes, what IOPS to use for the disk. If this is not set, the maximum performance of a gp2 volume with the same volume size will be used.\n"},"ebsVolumeSize":{"type":"integer","description":"The size of each EBS volume (in GiB) launched for each instance. For general purpose SSD, this value must be within the range 100 - 4096. For throughput optimized HDD, this value must be within the range 500 - 4096. Custom EBS volumes cannot be specified for the legacy node types (memory-optimized and compute-optimized).\n"},"ebsVolumeThroughput":{"type":"integer","description":"If using gp3 volumes, what throughput to use for the disk. If this is not set, the maximum performance of a gp2 volume with the same volume size will be used.\n"},"ebsVolumeType":{"type":"string","description":"The type of EBS volumes that will be launched with this cluster. Valid values are `GENERAL_PURPOSE_SSD` or `THROUGHPUT_OPTIMIZED_HDD`. Use this option only if you're not picking *Delta Optimized `i3.*`* node types.\n"},"firstOnDemand":{"type":"integer","description":"The first \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes of the cluster will be placed on on-demand instances. If this value is greater than 0, the cluster driver node will be placed on an on-demand instance. If this value is greater than or equal to the current cluster size, all nodes will be placed on on-demand instances. If this value is less than the current cluster size, \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes will be placed on on-demand instances, and the remainder will be placed on availability instances. This value does not affect cluster size and cannot be mutated over the lifetime of a cluster. If unspecified, the default value is 0.\n"},"instanceProfileArn":{"type":"string","description":"Nodes for this cluster will only be placed on AWS instances with this instance profile. Please see\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eresource documentation for extended examples on adding a valid instance profile using Pulumi.\n"},"spotBidPricePercent":{"type":"integer","description":"The max price for AWS spot instances, as a percentage of the corresponding instance type’s on-demand price. For example, if this field is set to 50, and the cluster needs a new `i3.xlarge` spot instance, then the max price is half of the price of on-demand `i3.xlarge` instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand `i3.xlarge` instances. 
If not specified, the default value is \u003cspan pulumi-lang-nodejs=\"`100`\" pulumi-lang-dotnet=\"`100`\" pulumi-lang-go=\"`100`\" pulumi-lang-python=\"`100`\" pulumi-lang-yaml=\"`100`\" pulumi-lang-java=\"`100`\"\u003e`100`\u003c/span\u003e. When spot instances are requested for this cluster, only spot instances whose max price percentage matches this field will be considered. For safety, we enforce this field to be no more than \u003cspan pulumi-lang-nodejs=\"`10000`\" pulumi-lang-dotnet=\"`10000`\" pulumi-lang-go=\"`10000`\" pulumi-lang-python=\"`10000`\" pulumi-lang-yaml=\"`10000`\" pulumi-lang-java=\"`10000`\"\u003e`10000`\u003c/span\u003e.\n"},"zoneId":{"type":"string","description":"Identifier for the availability zone/datacenter in which the cluster resides. This string will be of a form like `us-west-2a`. The provided availability zone must be in the same region as the Databricks deployment. For example, `us-west-2a` is not a valid zone ID if the Databricks deployment resides in the `us-east-1` region. Enable automatic availability zone selection (\"Auto-AZ\"), by setting the value \u003cspan pulumi-lang-nodejs=\"`auto`\" pulumi-lang-dotnet=\"`Auto`\" pulumi-lang-go=\"`auto`\" pulumi-lang-python=\"`auto`\" pulumi-lang-yaml=\"`auto`\" pulumi-lang-java=\"`auto`\"\u003e`auto`\u003c/span\u003e. Databricks selects the AZ based on available IPs in the workspace subnets and retries in other availability zones if AWS returns insufficient capacity errors.\n"}},"type":"object"},"databricks:index/ClusterAzureAttributes:ClusterAzureAttributes":{"properties":{"availability":{"type":"string","description":"Availability type used for all subsequent nodes past the \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e ones. Valid values are `SPOT_AZURE`, `SPOT_WITH_FALLBACK_AZURE`, and `ON_DEMAND_AZURE`. Note: If \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e is zero, this availability type will be used for the entire cluster.\n"},"firstOnDemand":{"type":"integer","description":"The first \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes of the cluster will be placed on on-demand instances. If this value is greater than 0, the cluster driver node will be placed on an on-demand instance. If this value is greater than or equal to the current cluster size, all nodes will be placed on on-demand instances. If this value is less than the current cluster size, \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes will be placed on on-demand instances, and the remainder will be placed on availability instances. 
This value does not affect cluster size and cannot be mutated over the lifetime of a cluster.\n"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/ClusterAzureAttributesLogAnalyticsInfo:ClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number","description":"The max bid price used for Azure spot instances. You can set this to greater than or equal to the current spot price. You can also set this to `-1`, which specifies that the instance cannot be evicted on the basis of price. The price for the instance will be the current price for spot instances or the price for a standard instance.\n"}},"type":"object"},"databricks:index/ClusterAzureAttributesLogAnalyticsInfo:ClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/ClusterClusterLogConf:ClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/ClusterClusterLogConfDbfs:ClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/ClusterClusterLogConfS3:ClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/ClusterClusterLogConfVolumes:ClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/ClusterClusterLogConfDbfs:ClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterClusterLogConfS3:ClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string","description":"Set canned access control list, e.g. `bucket-owner-full-control`. If \u003cspan pulumi-lang-nodejs=\"`cannedCal`\" pulumi-lang-dotnet=\"`CannedCal`\" pulumi-lang-go=\"`cannedCal`\" pulumi-lang-python=\"`canned_cal`\" pulumi-lang-yaml=\"`cannedCal`\" pulumi-lang-java=\"`cannedCal`\"\u003e`canned_cal`\u003c/span\u003e is set, the cluster instance profile must have `s3:PutObjectAcl` permission on the destination bucket and prefix. The full list of possible canned ACLs can be found [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). By default, only the object owner gets full control. If you are using a cross-account role for writing data, you may want to set `bucket-owner-full-control` to make bucket owners able to read the logs.\n"},"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"},"enableEncryption":{"type":"boolean","description":"Enable server-side encryption, false by default.\n"},"encryptionType":{"type":"string","description":"The encryption type, it could be `sse-s3` or `sse-kms`. It is used only when encryption is enabled, and the default type is `sse-s3`.\n"},"endpoint":{"type":"string","description":"S3 endpoint, e.g. \u003chttps://s3-us-west-2.amazonaws.com\u003e. 
Either \u003cspan pulumi-lang-nodejs=\"`region`\" pulumi-lang-dotnet=\"`Region`\" pulumi-lang-go=\"`region`\" pulumi-lang-python=\"`region`\" pulumi-lang-yaml=\"`region`\" pulumi-lang-java=\"`region`\"\u003e`region`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e needs to be set. If both are set, the endpoint is used.\n"},"kmsKey":{"type":"string","description":"KMS key used if encryption is enabled and encryption type is set to `sse-kms`.\n"},"region":{"type":"string","description":"S3 region, e.g. `us-west-2`. Either \u003cspan pulumi-lang-nodejs=\"`region`\" pulumi-lang-dotnet=\"`Region`\" pulumi-lang-go=\"`region`\" pulumi-lang-python=\"`region`\" pulumi-lang-yaml=\"`region`\" pulumi-lang-java=\"`region`\"\u003e`region`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e must be set. If both are set, the endpoint is used.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterClusterLogConfVolumes:ClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterClusterMountInfo:ClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string","description":"path inside the Spark container.\n\nFor example, you can mount Azure Data Lake Storage container using the following code:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst storageAccount = \"ewfw3ggwegwg\";\nconst storageContainer = \"test\";\nconst withNfs = new databricks.Cluster(\"with_nfs\", {clusterMountInfos: [{\n    networkFilesystemInfo: {\n        serverAddress: `${storageAccount}.blob.core.windows.net`,\n        mountOptions: \"sec=sys,vers=3,nolock,proto=tcp\",\n    },\n    remoteMountDirPath: `${storageAccount}/${storageContainer}`,\n    localMountDirPath: \"/mnt/nfs-test\",\n}]});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nstorage_account = \"ewfw3ggwegwg\"\nstorage_container = \"test\"\nwith_nfs = databricks.Cluster(\"with_nfs\", cluster_mount_infos=[{\n    \"network_filesystem_info\": {\n        \"server_address\": f\"{storage_account}.blob.core.windows.net\",\n        \"mount_options\": \"sec=sys,vers=3,nolock,proto=tcp\",\n    },\n    \"remote_mount_dir_path\": f\"{storage_account}/{storage_container}\",\n    \"local_mount_dir_path\": \"/mnt/nfs-test\",\n}])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var storageAccount = \"ewfw3ggwegwg\";\n\n    var storageContainer = \"test\";\n\n    var withNfs = new Databricks.Cluster(\"with_nfs\", new()\n    {\n        ClusterMountInfos = new[]\n        {\n            new Databricks.Inputs.ClusterClusterMountInfoArgs\n            {\n                
NetworkFilesystemInfo = new Databricks.Inputs.ClusterClusterMountInfoNetworkFilesystemInfoArgs\n                {\n                    ServerAddress = $\"{storageAccount}.blob.core.windows.net\",\n                    MountOptions = \"sec=sys,vers=3,nolock,proto=tcp\",\n                },\n                RemoteMountDirPath = $\"{storageAccount}/{storageContainer}\",\n                LocalMountDirPath = \"/mnt/nfs-test\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tstorageAccount := \"ewfw3ggwegwg\"\n\t\tstorageContainer := \"test\"\n\t\t_, err := databricks.NewCluster(ctx, \"with_nfs\", \u0026databricks.ClusterArgs{\n\t\t\tClusterMountInfos: databricks.ClusterClusterMountInfoArray{\n\t\t\t\t\u0026databricks.ClusterClusterMountInfoArgs{\n\t\t\t\t\tNetworkFilesystemInfo: \u0026databricks.ClusterClusterMountInfoNetworkFilesystemInfoArgs{\n\t\t\t\t\t\tServerAddress: pulumi.Sprintf(\"%v.blob.core.windows.net\", storageAccount),\n\t\t\t\t\t\tMountOptions:  pulumi.String(\"sec=sys,vers=3,nolock,proto=tcp\"),\n\t\t\t\t\t},\n\t\t\t\t\tRemoteMountDirPath: pulumi.Sprintf(\"%v/%v\", storageAccount, storageContainer),\n\t\t\t\t\tLocalMountDirPath:  pulumi.String(\"/mnt/nfs-test\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterClusterMountInfoArgs;\nimport com.pulumi.databricks.inputs.ClusterClusterMountInfoNetworkFilesystemInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var storageAccount = \"ewfw3ggwegwg\";\n\n        final var storageContainer = \"test\";\n\n        var withNfs = new Cluster(\"withNfs\", ClusterArgs.builder()\n            .clusterMountInfos(ClusterClusterMountInfoArgs.builder()\n                .networkFilesystemInfo(ClusterClusterMountInfoNetworkFilesystemInfoArgs.builder()\n                    .serverAddress(String.format(\"%s.blob.core.windows.net\", storageAccount))\n                    .mountOptions(\"sec=sys,vers=3,nolock,proto=tcp\")\n                    .build())\n                .remoteMountDirPath(String.format(\"%s/%s\", storageAccount,storageContainer))\n                .localMountDirPath(\"/mnt/nfs-test\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  withNfs:\n    type: databricks:Cluster\n    name: with_nfs\n    properties:\n      clusterMountInfos:\n        - networkFilesystemInfo:\n            serverAddress: ${storageAccount}.blob.core.windows.net\n            mountOptions: sec=sys,vers=3,nolock,proto=tcp\n          remoteMountDirPath: ${storageAccount}/${storageContainer}\n          localMountDirPath: /mnt/nfs-test\nvariables:\n  storageAccount: ewfw3ggwegwg\n  storageContainer: test\n```\n\u003c!--End PulumiCodeChooser 
--\u003e\n"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/ClusterClusterMountInfoNetworkFilesystemInfo:ClusterClusterMountInfoNetworkFilesystemInfo","description":"block specifying connection. It consists of:\n"},"remoteMountDirPath":{"type":"string","description":"string specifying path to mount on the remote service.\n"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/ClusterClusterMountInfoNetworkFilesystemInfo:ClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string","description":"string that will be passed as options passed to the \u003cspan pulumi-lang-nodejs=\"`mount`\" pulumi-lang-dotnet=\"`Mount`\" pulumi-lang-go=\"`mount`\" pulumi-lang-python=\"`mount`\" pulumi-lang-yaml=\"`mount`\" pulumi-lang-java=\"`mount`\"\u003e`mount`\u003c/span\u003e command.\n"},"serverAddress":{"type":"string","description":"host name.\n"}},"type":"object","required":["serverAddress"]},"databricks:index/ClusterDockerImage:ClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/ClusterDockerImageBasicAuth:ClusterDockerImageBasicAuth","description":"`basic_auth.username` and `basic_auth.password` for Docker repository. Docker registry credentials are encrypted when they are stored in Databricks internal storage and when they are passed to a registry upon fetching Docker images at cluster launch.  For better security, these credentials should be stored in the secret scope and referred using secret path syntax: `{{secrets/scope/key}}`, otherwise other users of the workspace may access them via UI/API.\n\nExample usage with\u003cspan pulumi-lang-nodejs=\" azurermContainerRegistry \" pulumi-lang-dotnet=\" AzurermContainerRegistry \" pulumi-lang-go=\" azurermContainerRegistry \" pulumi-lang-python=\" azurerm_container_registry \" pulumi-lang-yaml=\" azurermContainerRegistry \" pulumi-lang-java=\" azurermContainerRegistry \"\u003e azurerm_container_registry \u003c/span\u003eand docker_registry_image, that you can adapt to your specific use-case:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as docker from \"@pulumi/docker\";\n\nconst _this = new docker.RegistryImage(\"this\", {\n    build: [{}],\n    name: `${thisAzurermContainerRegistry.loginServer}/sample:latest`,\n});\nconst thisCluster = new databricks.Cluster(\"this\", {dockerImage: {\n    url: _this.name,\n    basicAuth: {\n        username: thisAzurermContainerRegistry.adminUsername,\n        password: thisAzurermContainerRegistry.adminPassword,\n    },\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_docker as docker\n\nthis = docker.RegistryImage(\"this\",\n    build=[{}],\n    name=f\"{this_azurerm_container_registry['loginServer']}/sample:latest\")\nthis_cluster = databricks.Cluster(\"this\", docker_image={\n    \"url\": this.name,\n    \"basic_auth\": {\n        \"username\": this_azurerm_container_registry[\"adminUsername\"],\n        \"password\": this_azurerm_container_registry[\"adminPassword\"],\n    },\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Docker = Pulumi.Docker;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Docker.RegistryImage(\"this\", new()\n    {\n        Build = new[]\n        {\n            null,\n        },\n        Name = 
$\"{thisAzurermContainerRegistry.LoginServer}/sample:latest\",\n    });\n\n    var thisCluster = new Databricks.Cluster(\"this\", new()\n    {\n        DockerImage = new Databricks.Inputs.ClusterDockerImageArgs\n        {\n            Url = @this.Name,\n            BasicAuth = new Databricks.Inputs.ClusterDockerImageBasicAuthArgs\n            {\n                Username = thisAzurermContainerRegistry.AdminUsername,\n                Password = thisAzurermContainerRegistry.AdminPassword,\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-docker/sdk/v4/go/docker\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := docker.NewRegistryImage(ctx, \"this\", \u0026docker.RegistryImageArgs{\n\t\t\tBuild: docker.RegistryImageBuildArgs{\n\t\t\t\tmap[string]interface{}{},\n\t\t\t},\n\t\t\tName: pulumi.Sprintf(\"%v/sample:latest\", thisAzurermContainerRegistry.LoginServer),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"this\", \u0026databricks.ClusterArgs{\n\t\t\tDockerImage: \u0026databricks.ClusterDockerImageArgs{\n\t\t\t\tUrl: this.Name,\n\t\t\t\tBasicAuth: \u0026databricks.ClusterDockerImageBasicAuthArgs{\n\t\t\t\t\tUsername: pulumi.Any(thisAzurermContainerRegistry.AdminUsername),\n\t\t\t\t\tPassword: pulumi.Any(thisAzurermContainerRegistry.AdminPassword),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.docker.RegistryImage;\nimport com.pulumi.docker.RegistryImageArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterDockerImageArgs;\nimport com.pulumi.databricks.inputs.ClusterDockerImageBasicAuthArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new RegistryImage(\"this\", RegistryImageArgs.builder()\n            .build(RegistryImageBuildArgs.builder()\n                .build())\n            .name(String.format(\"%s/sample:latest\", thisAzurermContainerRegistry.loginServer()))\n            .build());\n\n        var thisCluster = new Cluster(\"thisCluster\", ClusterArgs.builder()\n            .dockerImage(ClusterDockerImageArgs.builder()\n                .url(this_.name())\n                .basicAuth(ClusterDockerImageBasicAuthArgs.builder()\n                    .username(thisAzurermContainerRegistry.adminUsername())\n                    .password(thisAzurermContainerRegistry.adminPassword())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: docker:RegistryImage\n    properties:\n      build:\n        - {}\n      name: ${thisAzurermContainerRegistry.loginServer}/sample:latest\n  thisCluster:\n    type: databricks:Cluster\n    name: this\n    properties:\n      dockerImage:\n        url: ${this.name}\n        basicAuth:\n          username: ${thisAzurermContainerRegistry.adminUsername}\n          
password: ${thisAzurermContainerRegistry.adminPassword}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"url":{"type":"string","description":"URL for the Docker image\n"}},"type":"object","required":["url"]},"databricks:index/ClusterDockerImageBasicAuth:ClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/ClusterDriverNodeTypeFlexibility:ClusterDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"},"description":"list of alternative node types that will be used if main node type isn't available.  Follow the [documentation](https://learn.microsoft.com/en-us/azure/databricks/compute/flexible-node-types#fallback-instance-type-requirements) for requirements on selection of alternative node types.\n"}},"type":"object"},"databricks:index/ClusterGcpAttributes:ClusterGcpAttributes":{"properties":{"availability":{"type":"string","description":"Availability type used for all nodes. Valid values are `PREEMPTIBLE_GCP`, `PREEMPTIBLE_WITH_FALLBACK_GCP` and `ON_DEMAND_GCP`, default: `ON_DEMAND_GCP`.\n"},"bootDiskSize":{"type":"integer","description":"Boot disk size in GB\n"},"firstOnDemand":{"type":"integer","description":"The first \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes of the cluster will be placed on on-demand instances. If this value is greater than 0, the cluster driver node will be placed on an on-demand instance. If this value is greater than or equal to the current cluster size, all nodes will be placed on on-demand instances. If this value is less than the current cluster size, \u003cspan pulumi-lang-nodejs=\"`firstOnDemand`\" pulumi-lang-dotnet=\"`FirstOnDemand`\" pulumi-lang-go=\"`firstOnDemand`\" pulumi-lang-python=\"`first_on_demand`\" pulumi-lang-yaml=\"`firstOnDemand`\" pulumi-lang-java=\"`firstOnDemand`\"\u003e`first_on_demand`\u003c/span\u003e nodes will be placed on on-demand instances, and the remainder will be placed on availability instances. This value does not affect cluster size and cannot be mutated over the lifetime of a cluster.\n"},"googleServiceAccount":{"type":"string","description":"Google Service Account email address that the cluster uses to authenticate with Google Identity. This field is used for authentication with the GCS and BigQuery data sources.\n"},"localSsdCount":{"type":"integer","description":"Number of local SSD disks (each is 375GB in size) that will be attached to each node of the cluster.\n"},"usePreemptibleExecutors":{"type":"boolean","description":"if we should use preemptible executors ([GCP documentation](https://cloud.google.com/compute/docs/instances/preemptible)). *Warning: this field is deprecated in favor of \u003cspan pulumi-lang-nodejs=\"`availability`\" pulumi-lang-dotnet=\"`Availability`\" pulumi-lang-go=\"`availability`\" pulumi-lang-python=\"`availability`\" pulumi-lang-yaml=\"`availability`\" pulumi-lang-java=\"`availability`\"\u003e`availability`\u003c/span\u003e, and will be removed soon.*\n"},"zoneId":{"type":"string","description":"Identifier for the availability zone in which the cluster resides. 
This can be one of the following:\n* `HA` (default): High availability, spread nodes across availability zones for a Databricks deployment region.\n* `AUTO`: Databricks picks an availability zone to schedule the cluster on.\n* name of a GCP availability zone: pick one of the available zones from the [list of available availability zones](https://cloud.google.com/compute/docs/regions-zones#available).\n"}},"type":"object"},"databricks:index/ClusterInitScript:ClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/ClusterInitScriptAbfss:ClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/ClusterInitScriptDbfs:ClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/ClusterInitScriptFile:ClusterInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/ClusterInitScriptGcs:ClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/ClusterInitScriptS3:ClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/ClusterInitScriptVolumes:ClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/ClusterInitScriptWorkspace:ClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/ClusterInitScriptAbfss:ClusterInitScriptAbfss":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptDbfs:ClusterInitScriptDbfs":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptFile:ClusterInitScriptFile":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptGcs:ClusterInitScriptGcs":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptS3:ClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string","description":"Set canned access control list, e.g. `bucket-owner-full-control`. If \u003cspan pulumi-lang-nodejs=\"`cannedCal`\" pulumi-lang-dotnet=\"`CannedCal`\" pulumi-lang-go=\"`cannedCal`\" pulumi-lang-python=\"`canned_cal`\" pulumi-lang-yaml=\"`cannedCal`\" pulumi-lang-java=\"`cannedCal`\"\u003e`canned_cal`\u003c/span\u003e is set, the cluster instance profile must have `s3:PutObjectAcl` permission on the destination bucket and prefix. The full list of possible canned ACLs can be found [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl). By default, only the object owner gets full control. 
If you are using a cross-account role for writing data, you may want to set `bucket-owner-full-control` to make bucket owners able to read the logs.\n"},"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"},"enableEncryption":{"type":"boolean","description":"Enable server-side encryption, false by default.\n"},"encryptionType":{"type":"string","description":"The encryption type, it could be `sse-s3` or `sse-kms`. It is used only when encryption is enabled, and the default type is `sse-s3`.\n"},"endpoint":{"type":"string","description":"S3 endpoint, e.g. \u003chttps://s3-us-west-2.amazonaws.com\u003e. Either \u003cspan pulumi-lang-nodejs=\"`region`\" pulumi-lang-dotnet=\"`Region`\" pulumi-lang-go=\"`region`\" pulumi-lang-python=\"`region`\" pulumi-lang-yaml=\"`region`\" pulumi-lang-java=\"`region`\"\u003e`region`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e needs to be set. If both are set, the endpoint is used.\n"},"kmsKey":{"type":"string","description":"KMS key used if encryption is enabled and encryption type is set to `sse-kms`.\n"},"region":{"type":"string","description":"S3 region, e.g. `us-west-2`. Either \u003cspan pulumi-lang-nodejs=\"`region`\" pulumi-lang-dotnet=\"`Region`\" pulumi-lang-go=\"`region`\" pulumi-lang-python=\"`region`\" pulumi-lang-yaml=\"`region`\" pulumi-lang-java=\"`region`\"\u003e`region`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e must be set. If both are set, the endpoint is used.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptVolumes:ClusterInitScriptVolumes":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterInitScriptWorkspace:ClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string","description":"S3 destination, e.g., `s3://my-bucket/some-prefix` You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.\n"}},"type":"object","required":["destination"]},"databricks:index/ClusterLibrary:ClusterLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/ClusterLibraryCran:ClusterLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/ClusterLibraryMaven:ClusterLibraryMaven"},"pypi":{"$ref":"#/types/databricks:index/ClusterLibraryPypi:ClusterLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/ClusterLibraryCran:ClusterLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/ClusterLibraryMaven:ClusterLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/ClusterLibraryPypi:ClusterLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/ClusterPolicyLibrary:ClusterPolicyLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/ClusterPolicyLibraryCran:ClusterPolicyLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/ClusterPolicyLibraryMaven:ClusterPolicyLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterPolicyLibraryProviderConfig:ClusterPolicyLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/ClusterPolicyLibraryPypi:ClusterPolicyLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/ClusterPolicyLibraryCran:ClusterPolicyLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/ClusterPolicyLibraryMaven:ClusterPolicyLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/ClusterPolicyLibraryProviderConfig:ClusterPolicyLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ClusterPolicyLibraryPypi:ClusterPolicyLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/ClusterPolicyProviderConfig:ClusterPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ClusterProviderConfig:ClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n\nThe following example demonstrates how to create an autoscaling cluster with [Delta Cache](https://docs.databricks.com/delta/optimizations/delta-cache.html) enabled:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst latestLts = databricks.getSparkVersion({\n    longTermSupport: true,\n});\nconst sharedAutoscaling = new databricks.Cluster(\"shared_autoscaling\", {\n    clusterName: \"Shared Autoscaling\",\n    sparkVersion: latestLts.then(latestLts =\u003e latestLts.id),\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    autoterminationMinutes: 20,\n    autoscale: {\n        minWorkers: 1,\n        maxWorkers: 50,\n    },\n    sparkConf: {\n        \"spark.databricks.io.cache.enabled\": \"true\",\n        \"spark.databricks.io.cache.maxDiskUsage\": \"50g\",\n        \"spark.databricks.io.cache.maxMetaDataCache\": \"1g\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsmallest = databricks.get_node_type(local_disk=True)\nlatest_lts = databricks.get_spark_version(long_term_support=True)\nshared_autoscaling = databricks.Cluster(\"shared_autoscaling\",\n    cluster_name=\"Shared Autoscaling\",\n    spark_version=latest_lts.id,\n    node_type_id=smallest.id,\n    autotermination_minutes=20,\n    autoscale={\n        \"min_workers\": 1,\n        \"max_workers\": 50,\n    },\n    spark_conf={\n        \"spark.databricks.io.cache.enabled\": \"true\",\n        \"spark.databricks.io.cache.maxDiskUsage\": \"50g\",\n        \"spark.databricks.io.cache.maxMetaDataCache\": \"1g\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var latestLts = Databricks.GetSparkVersion.Invoke(new()\n    {\n        LongTermSupport = true,\n    });\n\n    var sharedAutoscaling = new Databricks.Cluster(\"shared_autoscaling\", new()\n    {\n        ClusterName = \"Shared Autoscaling\",\n        SparkVersion = latestLts.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 20,\n        Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs\n        {\n            MinWorkers = 1,\n            MaxWorkers = 50,\n        },\n        SparkConf = \n        {\n            { 
\"spark.databricks.io.cache.enabled\", \"true\" },\n            { \"spark.databricks.io.cache.maxDiskUsage\", \"50g\" },\n            { \"spark.databricks.io.cache.maxMetaDataCache\", \"1g\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatestLts, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{\n\t\t\tLongTermSupport: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"shared_autoscaling\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared Autoscaling\"),\n\t\t\tSparkVersion:           pulumi.String(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.String(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tAutoscale: \u0026databricks.ClusterAutoscaleArgs{\n\t\t\t\tMinWorkers: pulumi.Int(1),\n\t\t\t\tMaxWorkers: pulumi.Int(50),\n\t\t\t},\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.io.cache.enabled\":          pulumi.String(\"true\"),\n\t\t\t\t\"spark.databricks.io.cache.maxDiskUsage\":     pulumi.String(\"50g\"),\n\t\t\t\t\"spark.databricks.io.cache.maxMetaDataCache\": pulumi.String(\"1g\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterAutoscaleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        final var latestLts = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .longTermSupport(true)\n            .build());\n\n        var sharedAutoscaling = new Cluster(\"sharedAutoscaling\", ClusterArgs.builder()\n            .clusterName(\"Shared Autoscaling\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .autoscale(ClusterAutoscaleArgs.builder()\n                .minWorkers(1)\n                .maxWorkers(50)\n                .build())\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.io.cache.enabled\", \"true\"),\n                Map.entry(\"spark.databricks.io.cache.maxDiskUsage\", \"50g\"),\n                Map.entry(\"spark.databricks.io.cache.maxMetaDataCache\", \"1g\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedAutoscaling:\n    type: databricks:Cluster\n    name: 
shared_autoscaling\n    properties:\n      clusterName: Shared Autoscaling\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      autoscale:\n        minWorkers: 1\n        maxWorkers: 50\n      sparkConf:\n        spark.databricks.io.cache.enabled: true\n        spark.databricks.io.cache.maxDiskUsage: 50g\n        spark.databricks.io.cache.maxMetaDataCache: 1g\nvariables:\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n  latestLts:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments:\n        longTermSupport: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ClusterWorkerNodeTypeFlexibility:ClusterWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"},"description":"list of alternative node types that will be used if main node type isn't available.  Follow the [documentation](https://learn.microsoft.com/en-us/azure/databricks/compute/flexible-node-types#fallback-instance-type-requirements) for requirements on selection of alternative node types.\n"}},"type":"object"},"databricks:index/ClusterWorkloadType:ClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/ClusterWorkloadTypeClients:ClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/ClusterWorkloadTypeClients:ClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean","description":"boolean flag defining if it's possible to run Databricks Jobs on this cluster. Default: \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst withNfs = new databricks.Cluster(\"with_nfs\", {workloadType: {\n    clients: {\n        jobs: false,\n        notebooks: true,\n    },\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nwith_nfs = databricks.Cluster(\"with_nfs\", workload_type={\n    \"clients\": {\n        \"jobs\": False,\n        \"notebooks\": True,\n    },\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var withNfs = new Databricks.Cluster(\"with_nfs\", new()\n    {\n        WorkloadType = new Databricks.Inputs.ClusterWorkloadTypeArgs\n        {\n            Clients = new Databricks.Inputs.ClusterWorkloadTypeClientsArgs\n            {\n                Jobs = false,\n                Notebooks = true,\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCluster(ctx, \"with_nfs\", \u0026databricks.ClusterArgs{\n\t\t\tWorkloadType: \u0026databricks.ClusterWorkloadTypeArgs{\n\t\t\t\tClients: \u0026databricks.ClusterWorkloadTypeClientsArgs{\n\t\t\t\t\tJobs:      pulumi.Bool(false),\n\t\t\t\t\tNotebooks: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterWorkloadTypeArgs;\nimport com.pulumi.databricks.inputs.ClusterWorkloadTypeClientsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var withNfs = new Cluster(\"withNfs\", ClusterArgs.builder()\n            .workloadType(ClusterWorkloadTypeArgs.builder()\n                .clients(ClusterWorkloadTypeClientsArgs.builder()\n                    .jobs(false)\n                    .notebooks(true)\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  withNfs:\n    type: databricks:Cluster\n    name: with_nfs\n    properties:\n      workloadType:\n        clients:\n          jobs: false\n          notebooks: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"notebooks":{"type":"boolean","description":"boolean flag defining if it's possible to run notebooks on this cluster. Default: \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"}},"type":"object"},"databricks:index/ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace:ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace":{"properties":{"complianceStandards":{"type":"array","items":{"type":"string"}},"isEnabled":{"type":"boolean"}},"type":"object","required":["complianceStandards","isEnabled"]},"databricks:index/ComplianceSecurityProfileWorkspaceSettingProviderConfig:ComplianceSecurityProfileWorkspaceSettingProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/ConnectionProviderConfig:ConnectionProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ConnectionProvisioningInfo:ConnectionProvisioningInfo":{"properties":{"state":{"type":"string"}},"type":"object"},"databricks:index/CredentialAwsIamRole:CredentialAwsIamRole":{"properties":{"externalId":{"type":"string"},"roleArn":{"type":"string","description":"The Amazon Resource Name (ARN) of the AWS IAM role you want to use to setup the trust policy, of the form `arn:aws:iam::1234567890:role/MyRole-AJJHDSKSDF`\n\n\u003cspan pulumi-lang-nodejs=\"`azureManagedIdentity`\" pulumi-lang-dotnet=\"`AzureManagedIdentity`\" pulumi-lang-go=\"`azureManagedIdentity`\" pulumi-lang-python=\"`azure_managed_identity`\" pulumi-lang-yaml=\"`azureManagedIdentity`\" pulumi-lang-java=\"`azureManagedIdentity`\"\u003e`azure_managed_identity`\u003c/span\u003e optional configuration block for using managed identity as credential details for Azure (recommended over \u003cspan pulumi-lang-nodejs=\"`azureServicePrincipal`\" pulumi-lang-dotnet=\"`AzureServicePrincipal`\" pulumi-lang-go=\"`azureServicePrincipal`\" pulumi-lang-python=\"`azure_service_principal`\" pulumi-lang-yaml=\"`azureServicePrincipal`\" pulumi-lang-java=\"`azureServicePrincipal`\"\u003e`azure_service_principal`\u003c/span\u003e):\n"},"unityCatalogIamArn":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["externalId","unityCatalogIamArn"]}}},"databricks:index/CredentialAzureManagedIdentity:CredentialAzureManagedIdentity":{"properties":{"accessConnectorId":{"type":"string","description":"The Resource ID of the Azure Databricks Access Connector resource, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.Databricks/accessConnectors/connector-name`.\n"},"credentialId":{"type":"string","description":"Unique ID of the credential.\n"},"managedIdentityId":{"type":"string","description":"The Resource ID of the Azure User Assigned Managed Identity associated with Azure Databricks Access Connector, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/user-managed-identity-name`.\n\n\u003cspan pulumi-lang-nodejs=\"`azureServicePrincipal`\" pulumi-lang-dotnet=\"`AzureServicePrincipal`\" pulumi-lang-go=\"`azureServicePrincipal`\" pulumi-lang-python=\"`azure_service_principal`\" pulumi-lang-yaml=\"`azureServicePrincipal`\" pulumi-lang-java=\"`azureServicePrincipal`\"\u003e`azure_service_principal`\u003c/span\u003e optional configuration block to use service principal as credential details for Azure. Only applicable when purpose is `STORAGE` (Legacy):\n"}},"type":"object","required":["accessConnectorId"],"language":{"nodejs":{"requiredOutputs":["accessConnectorId","credentialId"]}}},"databricks:index/CredentialAzureServicePrincipal:CredentialAzureServicePrincipal":{"properties":{"applicationId":{"type":"string","description":"The application ID of the application registration within the referenced AAD tenant\n"},"clientSecret":{"type":"string","description":"The client secret generated for the above app ID in AAD. 
**This field is redacted on output**\n\n\u003cspan pulumi-lang-nodejs=\"`databricksGcpServiceAccount`\" pulumi-lang-dotnet=\"`DatabricksGcpServiceAccount`\" pulumi-lang-go=\"`databricksGcpServiceAccount`\" pulumi-lang-python=\"`databricks_gcp_service_account`\" pulumi-lang-yaml=\"`databricksGcpServiceAccount`\" pulumi-lang-java=\"`databricksGcpServiceAccount`\"\u003e`databricks_gcp_service_account`\u003c/span\u003e optional configuration block for creating a Databricks-managed GCP Service Account:\n","secret":true},"directoryId":{"type":"string","description":"The directory ID corresponding to the Azure Active Directory (AAD) tenant of the application\n"}},"type":"object","required":["applicationId","clientSecret","directoryId"]},"databricks:index/CredentialDatabricksGcpServiceAccount:CredentialDatabricksGcpServiceAccount":{"properties":{"credentialId":{"type":"string","description":"Unique ID of the credential.\n"},"email":{"type":"string","description":"The email of the GCP service account created, to be granted access to relevant buckets.\n"},"privateKeyId":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["credentialId","email","privateKeyId"]}}},"databricks:index/CustomAppIntegrationTokenAccessPolicy:CustomAppIntegrationTokenAccessPolicy":{"properties":{"absoluteSessionLifetimeInMinutes":{"type":"integer"},"accessTokenTtlInMinutes":{"type":"integer","description":"access token time to live (TTL) in minutes.\n"},"enableSingleUseRefreshTokens":{"type":"boolean"},"refreshTokenTtlInMinutes":{"type":"integer","description":"refresh token TTL in minutes. The TTL of refresh token cannot be lower than TTL of access token.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["absoluteSessionLifetimeInMinutes"]}}},"databricks:index/DashboardProviderConfig:DashboardProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DataQualityMonitorAnomalyDetectionConfig:DataQualityMonitorAnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"List of fully qualified table names to exclude from anomaly detection\n"}},"type":"object"},"databricks:index/DataQualityMonitorDataProfilingConfig:DataQualityMonitorDataProfilingConfig":{"properties":{"assetsDir":{"type":"string","description":"Field for specifying the absolute path to a custom directory to store data-monitoring\nassets. 
Normally prepopulated to a default user location via UI and Python APIs\n"},"baselineTableName":{"type":"string","description":"Baseline table name.\nBaseline data is used to compute drift from the data in the monitored \u003cspan pulumi-lang-nodejs=\"`tableName`\" pulumi-lang-dotnet=\"`TableName`\" pulumi-lang-go=\"`tableName`\" pulumi-lang-python=\"`table_name`\" pulumi-lang-yaml=\"`tableName`\" pulumi-lang-java=\"`tableName`\"\u003e`table_name`\u003c/span\u003e.\nThe baseline table and the monitored table shall have the same schema\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigCustomMetric:DataQualityMonitorDataProfilingConfigCustomMetric"},"description":"Custom metrics\n"},"dashboardId":{"type":"string"},"driftMetricsTableName":{"type":"string"},"effectiveWarehouseId":{"type":"string"},"inferenceLog":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigInferenceLog:DataQualityMonitorDataProfilingConfigInferenceLog","description":"`Analysis Configuration` for monitoring inference log tables\n"},"latestMonitorFailureMessage":{"type":"string"},"monitorVersion":{"type":"integer"},"monitoredTableName":{"type":"string"},"notificationSettings":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigNotificationSettings:DataQualityMonitorDataProfilingConfigNotificationSettings","description":"Field for specifying notification settings\n"},"outputSchemaId":{"type":"string","description":"ID of the schema where output tables are created\n"},"profileMetricsTableName":{"type":"string"},"schedule":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigSchedule:DataQualityMonitorDataProfilingConfigSchedule","description":"The cron schedule\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by\neach expression independently, resulting in a separate slice for each predicate and its\ncomplements. For example `slicing_exprs=[“col_1”, “col_2 \u003e 10”]` will generate the following\nslices: two slices for \u003cspan pulumi-lang-nodejs=\"`col2 \" pulumi-lang-dotnet=\"`Col2 \" pulumi-lang-go=\"`col2 \" pulumi-lang-python=\"`col_2 \" pulumi-lang-yaml=\"`col2 \" pulumi-lang-java=\"`col2 \"\u003e`col_2 \u003c/span\u003e\u003e 10` (True and False), and one slice per unique value in\n\u003cspan pulumi-lang-nodejs=\"`col1`\" pulumi-lang-dotnet=\"`Col1`\" pulumi-lang-go=\"`col1`\" pulumi-lang-python=\"`col1`\" pulumi-lang-yaml=\"`col1`\" pulumi-lang-java=\"`col1`\"\u003e`col1`\u003c/span\u003e. For high-cardinality columns, only the top 100 unique values by frequency will\ngenerate slices\n"},"snapshot":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigSnapshot:DataQualityMonitorDataProfilingConfigSnapshot","description":"`Analysis Configuration` for monitoring snapshot tables\n"},"status":{"type":"string"},"timeSeries":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigTimeSeries:DataQualityMonitorDataProfilingConfigTimeSeries","description":"`Analysis Configuration` for monitoring time series tables\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. 
If not specified, the first running\nwarehouse will be used\n"}},"type":"object","required":["outputSchemaId"],"language":{"nodejs":{"requiredOutputs":["dashboardId","driftMetricsTableName","effectiveWarehouseId","latestMonitorFailureMessage","monitorVersion","monitoredTableName","outputSchemaId","profileMetricsTableName","status"]}}},"databricks:index/DataQualityMonitorDataProfilingConfigCustomMetric:DataQualityMonitorDataProfilingConfigCustomMetric":{"properties":{"definition":{"type":"string","description":"Jinja template for a SQL expression that specifies how to compute the metric. See [create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition)\n"},"inputColumns":{"type":"array","items":{"type":"string"},"description":"A list of column names in the input table the metric should be computed for.\nCan use ``\":table\"`` to indicate that the metric needs information from multiple columns\n"},"name":{"type":"string","description":"Name of the metric in the output tables\n"},"outputDataType":{"type":"string","description":"The output type of the custom metric\n"},"type":{"type":"string","description":"The type of the custom metric. Possible values are: `DATA_PROFILING_CUSTOM_METRIC_TYPE_AGGREGATE`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DERIVED`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DRIFT`\n"}},"type":"object","required":["definition","inputColumns","name","outputDataType","type"]},"databricks:index/DataQualityMonitorDataProfilingConfigInferenceLog:DataQualityMonitorDataProfilingConfigInferenceLog":{"properties":{"granularities":{"type":"array","items":{"type":"string"}},"labelColumn":{"type":"string","description":"Column for the label\n"},"modelIdColumn":{"type":"string","description":"Column for the model identifier\n"},"predictionColumn":{"type":"string","description":"Column for the prediction\n"},"problemType":{"type":"string","description":"Problem type the model aims to solve. Possible values are: `INFERENCE_PROBLEM_TYPE_CLASSIFICATION`, `INFERENCE_PROBLEM_TYPE_REGRESSION`\n"},"timestampColumn":{"type":"string"}},"type":"object","required":["granularities","modelIdColumn","predictionColumn","problemType","timestampColumn"]},"databricks:index/DataQualityMonitorDataProfilingConfigNotificationSettings:DataQualityMonitorDataProfilingConfigNotificationSettings":{"properties":{"onFailure":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure:DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure","description":"Destinations to send notifications on failure/timeout\n"}},"type":"object"},"databricks:index/DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure:DataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"},"description":"The list of email addresses to send the notification to. A maximum of 5 email addresses is supported\n"}},"type":"object"},"databricks:index/DataQualityMonitorDataProfilingConfigSchedule:DataQualityMonitorDataProfilingConfigSchedule":{"properties":{"pauseStatus":{"type":"string"},"quartzCronExpression":{"type":"string","description":"The expression that determines when to run the monitor. See [examples](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\n"},"timezoneId":{"type":"string","description":"A Java timezone id. 
The schedule for a job will be resolved with respect to this timezone.\nSee `Java TimeZone \u003chttp://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html\u003e`_ for details.\nThe timezone id (e.g., ``America/Los_Angeles``) in which to evaluate the quartz expression\n"}},"type":"object","required":["quartzCronExpression","timezoneId"],"language":{"nodejs":{"requiredOutputs":["pauseStatus","quartzCronExpression","timezoneId"]}}},"databricks:index/DataQualityMonitorDataProfilingConfigSnapshot:DataQualityMonitorDataProfilingConfigSnapshot":{"type":"object"},"databricks:index/DataQualityMonitorDataProfilingConfigTimeSeries:DataQualityMonitorDataProfilingConfigTimeSeries":{"properties":{"granularities":{"type":"array","items":{"type":"string"}},"timestampColumn":{"type":"string"}},"type":"object","required":["granularities","timestampColumn"]},"databricks:index/DataQualityMonitorProviderConfig:DataQualityMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DataQualityRefreshProviderConfig:DataQualityRefreshProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DatabaseDatabaseCatalogProviderConfig:DatabaseDatabaseCatalogProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DatabaseInstanceChildInstanceRef:DatabaseInstanceChildInstanceRef":{"properties":{"branchTime":{"type":"string","description":"Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"User-specified WAL LSN of the ref database instance.\n\nInput: For specifying the WAL LSN to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"name":{"type":"string","description":"The name of the instance. 
This is the unique identifier for the instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["effectiveLsn","lsn","uid"]}}},"databricks:index/DatabaseInstanceCustomTag:DatabaseInstanceCustomTag":{"properties":{"key":{"type":"string","description":"The key of the custom tag\n"},"value":{"type":"string","description":"The value of the custom tag\n"}},"type":"object"},"databricks:index/DatabaseInstanceEffectiveCustomTag:DatabaseInstanceEffectiveCustomTag":{"properties":{"key":{"type":"string","description":"The key of the custom tag\n"},"value":{"type":"string","description":"The value of the custom tag\n"}},"type":"object"},"databricks:index/DatabaseInstanceParentInstanceRef:DatabaseInstanceParentInstanceRef":{"properties":{"branchTime":{"type":"string","description":"Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"User-specified WAL LSN of the ref database instance.\n\nInput: For specifying the WAL LSN to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["effectiveLsn","lsn","uid"]}}},"databricks:index/DatabaseInstanceProviderConfig:DatabaseInstanceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatus":{"properties":{"continuousUpdateStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus"},"detailedState":{"type":"string","description":"(string) - The state of the synced table. 
Possible values are: `SYNCED_TABLED_OFFLINE`, `SYNCED_TABLE_OFFLINE_FAILED`, `SYNCED_TABLE_ONLINE`, `SYNCED_TABLE_ONLINE_CONTINUOUS_UPDATE`, `SYNCED_TABLE_ONLINE_NO_PENDING_UPDATE`, `SYNCED_TABLE_ONLINE_PIPELINE_FAILED`, `SYNCED_TABLE_ONLINE_TRIGGERED_UPDATE`, `SYNCED_TABLE_ONLINE_UPDATING_PIPELINE_RESOURCES`, `SYNCED_TABLE_PROVISIONING`, `SYNCED_TABLE_PROVISIONING_INITIAL_SNAPSHOT`, `SYNCED_TABLE_PROVISIONING_PIPELINE_RESOURCES`\n"},"failedStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus"},"lastSync":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync:DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync","description":"(SyncedTablePosition) - Summary of the last successful synchronization from source to destination.\n"},"message":{"type":"string","description":"(string) - A text description of the current state of the synced table\n"},"pipelineId":{"type":"string","description":"(string) - ID of the associated pipeline. The pipeline ID may have been provided by the client\n(in the case of bin packing), or generated by the server (when creating a new pipeline)\n"},"provisioningStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus"},"triggeredUpdateStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus"}},"type":"object","language":{"nodejs":{"requiredOutputs":["detailedState","lastSync","message","pipelineId"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"},"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["initialPipelineSyncProgress","lastProcessedCommitVersion","timestamp"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. 
The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["lastProcessedCommitVersion","timestamp"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync:DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync":{"properties":{"deltaTableSyncInfo":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo","description":"(DeltaTableSyncInfo)\n"},"syncEndTimestamp":{"type":"string","description":"(string) - The end timestamp of the most recent successful synchronization.\nThis is the time when the data is available in the synced table\n"},"syncStartTimestamp":{"type":"string","description":"(string) - The starting timestamp of the most recent successful synchronization from the source table\nto the destination (synced) table.\nNote this is the starting timestamp of the sync operation, not the end time.\nE.g., for a batch, this is the time when the sync operation started\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["deltaTableSyncInfo","syncEndTimestamp","syncStartTimestamp"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:DatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo":{"properties":{"deltaCommitTimestamp":{"type":"string","description":"(string) - The timestamp when the above Delta version was committed in the source Delta table.\nNote: This is the Delta commit time, not the time the data was written to the synced table\n"},"deltaCommitVersion":{"type":"integer","description":"(integer) - The Delta Lake commit version that was last successfully 
synced\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["deltaCommitTimestamp","deltaCommitVersion"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["initialPipelineSyncProgress"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. 
This is when the data is available in the synced table\n"},"triggeredUpdateProgress":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress","description":"(SyncedTablePipelineProgress) - Progress of the active data synchronization pipeline\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["lastProcessedCommitVersion","timestamp","triggeredUpdateProgress"]}}},"databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:DatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"]}}},"databricks:index/DatabaseSyncedDatabaseTableProviderConfig:DatabaseSyncedDatabaseTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DatabaseSyncedDatabaseTableSpec:DatabaseSyncedDatabaseTableSpec":{"properties":{"createDatabaseObjectsIfMissing":{"type":"boolean","description":"If true, the synced table's logical database and schema resources in PG\nwill be created if they do not already exist\n"},"existingPipelineId":{"type":"string","description":"At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n\nIf\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eis defined, the synced table will be bin packed into the existing pipeline\nreferenced. This avoids creating a new pipeline and allows sharing existing compute.\nIn this case, the\u003cspan pulumi-lang-nodejs=\" schedulingPolicy \" pulumi-lang-dotnet=\" SchedulingPolicy \" pulumi-lang-go=\" schedulingPolicy \" pulumi-lang-python=\" scheduling_policy \" pulumi-lang-yaml=\" schedulingPolicy \" pulumi-lang-java=\" schedulingPolicy \"\u003e scheduling_policy \u003c/span\u003eof this synced table must match the scheduling policy of the existing pipeline\n"},"newPipelineSpec":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableSpecNewPipelineSpec:DatabaseSyncedDatabaseTableSpecNewPipelineSpec","description":"At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n\nIf\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eis defined, a new pipeline is created for this synced table. The location pointed to is used\nto store intermediate files (checkpoints, event logs etc). The caller must have write permissions to create Delta\ntables in the specified catalog and schema. 
Again, note this requires write permissions, whereas the source table\nonly requires read permissions\n"},"primaryKeyColumns":{"type":"array","items":{"type":"string"},"description":"Primary Key columns to be used for data insert/update in the destination\n"},"schedulingPolicy":{"type":"string","description":"Scheduling policy of the underlying pipeline. Possible values are: `CONTINUOUS`, `SNAPSHOT`, `TRIGGERED`\n"},"sourceTableFullName":{"type":"string","description":"Three-part (catalog, schema, table) name of the source Delta table\n"},"timeseriesKey":{"type":"string","description":"Time series key to deduplicate (tie-break) rows with the same primary key\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["createDatabaseObjectsIfMissing","existingPipelineId","newPipelineSpec"]}}},"databricks:index/DatabaseSyncedDatabaseTableSpecNewPipelineSpec:DatabaseSyncedDatabaseTableSpecNewPipelineSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"Budget policy to set on the newly created pipeline\n"},"storageCatalog":{"type":"string","description":"This field needs to be specified if the destination catalog is a managed postgres catalog.\n\nUC catalog for the pipeline to store intermediate files (checkpoints, event logs etc).\nThis needs to be a standard catalog where the user has permissions to create Delta tables\n"},"storageSchema":{"type":"string","description":"This field needs to be specified if the destination catalog is a managed postgres catalog.\n\nUC schema for the pipeline to store intermediate files (checkpoints, event logs etc).\nThis needs to be in the standard catalog where the user has permissions to create Delta tables\n"}},"type":"object"},"databricks:index/DbfsFileProviderConfig:DbfsFileProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/DefaultNamespaceSettingNamespace:DefaultNamespaceSettingNamespace":{"properties":{"value":{"type":"string","description":"The value for the setting.\n"}},"type":"object"},"databricks:index/DefaultNamespaceSettingProviderConfig:DefaultNamespaceSettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DirectoryProviderConfig:DirectoryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DisableLegacyAccessSettingDisableLegacyAccess:DisableLegacyAccessSettingDisableLegacyAccess":{"properties":{"value":{"type":"boolean","description":"The boolean value for the setting.\n"}},"type":"object","required":["value"]},"databricks:index/DisableLegacyAccessSettingProviderConfig:DisableLegacyAccessSettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DisableLegacyDbfsSettingDisableLegacyDbfs:DisableLegacyDbfsSettingDisableLegacyDbfs":{"properties":{"value":{"type":"boolean","description":"The boolean value for the setting.\n"}},"type":"object","required":["value"]},"databricks:index/DisableLegacyDbfsSettingProviderConfig:DisableLegacyDbfsSettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/DisableLegacyFeaturesSettingDisableLegacyFeatures:DisableLegacyFeaturesSettingDisableLegacyFeatures":{"properties":{"value":{"type":"boolean","description":"The boolean value for the setting.\n"}},"type":"object","required":["value"]},"databricks:index/DisableLegacyFeaturesSettingProviderConfig:DisableLegacyFeaturesSettingProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/EndpointAzurePrivateEndpointInfo:EndpointAzurePrivateEndpointInfo":{"properties":{"privateEndpointName":{"type":"string","description":"The name of the Private Endpoint in the Azure subscription\n"},"privateEndpointResourceGuid":{"type":"string","description":"The GUID of the Private Endpoint resource in the Azure subscription.\nThis is assigned by Azure when the user sets up the Private Endpoint\n"},"privateEndpointResourceId":{"type":"string","description":"(string) - The full resource ID of the Private Endpoint\n"},"privateLinkServiceId":{"type":"string","description":"(string) - The resource ID of the Databricks Private Link Service that this Private Endpoint connects to\n"}},"type":"object","required":["privateEndpointName","privateEndpointResourceGuid"],"language":{"nodejs":{"requiredOutputs":["privateEndpointName","privateEndpointResourceGuid","privateEndpointResourceId","privateLinkServiceId"]}}},"databricks:index/EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace:EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace":{"properties":{"isEnabled":{"type":"boolean"}},"type":"object","required":["isEnabled"]},"databricks:index/EnhancedSecurityMonitoringWorkspaceSettingProviderConfig:EnhancedSecurityMonitoringWorkspaceSettingProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/EntitlementsProviderConfig:EntitlementsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/EntityTagAssignmentProviderConfig:EntityTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ExternalLocationEncryptionDetails:ExternalLocationEncryptionDetails":{"properties":{"sseEncryptionDetails":{"$ref":"#/types/databricks:index/ExternalLocationEncryptionDetailsSseEncryptionDetails:ExternalLocationEncryptionDetailsSseEncryptionDetails","description":"a block describing server-Side Encryption properties for clients communicating with AWS S3. Consists of the following attributes:\n"}},"type":"object"},"databricks:index/ExternalLocationEncryptionDetailsSseEncryptionDetails:ExternalLocationEncryptionDetailsSseEncryptionDetails":{"properties":{"algorithm":{"type":"string","description":"Encryption algorithm value. Sets the value of the `x-amz-server-side-encryption` header in S3 request.\n"},"awsKmsKeyArn":{"type":"string","description":"Optional ARN of the SSE-KMS key used with the S3 location, when `algorithm = \"SSE-KMS\"`. Sets the value of the `x-amz-server-side-encryption-aws-kms-key-id` header.\n"}},"type":"object"},"databricks:index/ExternalLocationFileEventQueue:ExternalLocationFileEventQueue":{"properties":{"managedAqs":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueManagedAqs:ExternalLocationFileEventQueueManagedAqs","description":"Configuration for managed Azure Queue Storage queue.\n"},"managedPubsub":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueManagedPubsub:ExternalLocationFileEventQueueManagedPubsub","description":"Configuration for managed Google Cloud Pub/Sub queue.\n"},"managedSqs":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueManagedSqs:ExternalLocationFileEventQueueManagedSqs","description":"Configuration for managed Amazon SQS queue.\n"},"providedAqs":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueProvidedAqs:ExternalLocationFileEventQueueProvidedAqs","description":"Configuration for provided Azure Storage Queue.\n"},"providedPubsub":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueProvidedPubsub:ExternalLocationFileEventQueueProvidedPubsub","description":"Configuration for provided Google Cloud Pub/Sub queue.\n"},"providedSqs":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueueProvidedSqs:ExternalLocationFileEventQueueProvidedSqs","description":"Configuration for provided Amazon SQS queue.\n"}},"type":"object"},"databricks:index/ExternalLocationFileEventQueueManagedAqs:ExternalLocationFileEventQueueManagedAqs":{"properties":{"managedResourceId":{"type":"string","description":"The ID of the managed resource.\n"},"queueUrl":{"type":"string"},"resourceGroup":{"type":"string","description":"The name of the Azure resource group.\n"},"subscriptionId":{"type":"string","description":"The Azure subscription ID.\n"}},"type":"object","required":["resourceGroup","subscriptionId"],"language":{"nodejs":{"requiredOutputs":["managedResourceId","resourceGroup","subscriptionId"]}}},"databricks:index/ExternalLocationFileEventQueueManagedPubsub:ExternalLocationFileEventQueueManagedPubsub":{"properties":{"managedResourceId":{"type":"string","description":"The ID of the managed resource.\n"},"subscriptionName":{"type":"string","description":"The name of the subscription.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["managedResourceId"]}}},"databricks:index/ExternalLocationFileEventQueueManagedSqs:ExternalLocationFileEventQueueManagedSqs":{"properties":{"managedResourceId":{"type":"string","description":"The ID of the 
managed resource.\n"},"queueUrl":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["managedResourceId"]}}},"databricks:index/ExternalLocationFileEventQueueProvidedAqs:ExternalLocationFileEventQueueProvidedAqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string","description":"The URL of the queue.\n"},"resourceGroup":{"type":"string","description":"The name of the Azure resource group.\n"},"subscriptionId":{"type":"string","description":"The Azure subscription ID.\n"}},"type":"object","required":["queueUrl"],"language":{"nodejs":{"requiredOutputs":["managedResourceId","queueUrl"]}}},"databricks:index/ExternalLocationFileEventQueueProvidedPubsub:ExternalLocationFileEventQueueProvidedPubsub":{"properties":{"managedResourceId":{"type":"string"},"subscriptionName":{"type":"string","description":"The name of the subscription.\n"}},"type":"object","required":["subscriptionName"],"language":{"nodejs":{"requiredOutputs":["managedResourceId","subscriptionName"]}}},"databricks:index/ExternalLocationFileEventQueueProvidedSqs:ExternalLocationFileEventQueueProvidedSqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string","description":"The URL of the SQS queue.\n"}},"type":"object","required":["queueUrl"],"language":{"nodejs":{"requiredOutputs":["managedResourceId","queueUrl"]}}},"databricks:index/ExternalLocationProviderConfig:ExternalLocationProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ExternalMetadataProviderConfig:ExternalMetadataProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/FeatureEngineeringFeatureFunction:FeatureEngineeringFeatureFunction":{"properties":{"extraParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureFunctionExtraParameter:FeatureEngineeringFeatureFunctionExtraParameter"},"description":"Extra parameters for parameterized functions\n"},"functionType":{"type":"string","description":"The type of the function. 
Possible values are: `APPROX_COUNT_DISTINCT`, `APPROX_PERCENTILE`, `AVG`, `COUNT`, `FIRST`, `LAST`, `MAX`, `MIN`, `STDDEV_POP`, `STDDEV_SAMP`, `SUM`, `VAR_POP`, `VAR_SAMP`\n"}},"type":"object","required":["functionType"]},"databricks:index/FeatureEngineeringFeatureFunctionExtraParameter:FeatureEngineeringFeatureFunctionExtraParameter":{"properties":{"key":{"type":"string","description":"The name of the parameter\n"},"value":{"type":"string","description":"The value of the parameter\n"}},"type":"object","required":["key","value"]},"databricks:index/FeatureEngineeringFeatureLineageContext:FeatureEngineeringFeatureLineageContext":{"properties":{"jobContext":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureLineageContextJobContext:FeatureEngineeringFeatureLineageContextJobContext","description":"Job context information including job ID and run ID\n"},"notebookId":{"type":"integer","description":"The notebook ID where this API was invoked\n"}},"type":"object"},"databricks:index/FeatureEngineeringFeatureLineageContextJobContext:FeatureEngineeringFeatureLineageContextJobContext":{"properties":{"jobId":{"type":"integer","description":"The job ID where this API invoked\n"},"jobRunId":{"type":"integer","description":"The job run ID where this API was invoked\n"}},"type":"object"},"databricks:index/FeatureEngineeringFeatureProviderConfig:FeatureEngineeringFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/FeatureEngineeringFeatureSource:FeatureEngineeringFeatureSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSourceDeltaTableSource:FeatureEngineeringFeatureSourceDeltaTableSource"},"kafkaSource":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSourceKafkaSource:FeatureEngineeringFeatureSourceKafkaSource"}},"type":"object"},"databricks:index/FeatureEngineeringFeatureSourceDeltaTableSource:FeatureEngineeringFeatureSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"timeseriesColumn":{"type":"string","description":"The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"]},"databricks:index/FeatureEngineeringFeatureSourceKafkaSource:FeatureEngineeringFeatureSourceKafkaSource":{"properties":{"entityColumnIdentifiers":{"type":"array","items":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier:FeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier"},"description":"The entity column identifiers of the Kafka source\n"},"name":{"type":"string","description":"Name of the Kafka source, used to identify it. This is used to look up the corresponding KafkaConfig object. 
Can be distinct from topic name\n"},"timeseriesColumnIdentifier":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier:FeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier","description":"The timeseries column identifier of the Kafka source\n"}},"type":"object","required":["entityColumnIdentifiers","name","timeseriesColumnIdentifier"]},"databricks:index/FeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier:FeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"]},"databricks:index/FeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier:FeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"]},"databricks:index/FeatureEngineeringFeatureTimeWindow:FeatureEngineeringFeatureTimeWindow":{"properties":{"continuous":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindowContinuous:FeatureEngineeringFeatureTimeWindowContinuous"},"sliding":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindowSliding:FeatureEngineeringFeatureTimeWindowSliding"},"tumbling":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindowTumbling:FeatureEngineeringFeatureTimeWindowTumbling"}},"type":"object"},"databricks:index/FeatureEngineeringFeatureTimeWindowContinuous:FeatureEngineeringFeatureTimeWindowContinuous":{"properties":{"offset":{"type":"string","description":"The offset of the continuous window (must be non-positive)\n"},"windowDuration":{"type":"string"}},"type":"object","required":["windowDuration"]},"databricks:index/FeatureEngineeringFeatureTimeWindowSliding:FeatureEngineeringFeatureTimeWindowSliding":{"properties":{"slideDuration":{"type":"string","description":"The slide duration (interval by which windows advance, must be positive and less than duration)\n"},"windowDuration":{"type":"string"}},"type":"object","required":["slideDuration","windowDuration"]},"databricks:index/FeatureEngineeringFeatureTimeWindowTumbling:FeatureEngineeringFeatureTimeWindowTumbling":{"properties":{"windowDuration":{"type":"string"}},"type":"object","required":["windowDuration"]},"databricks:index/FeatureEngineeringKafkaConfigAuthConfig:FeatureEngineeringKafkaConfigAuthConfig":{"properties":{"ucServiceCredentialName":{"type":"string","description":"Name of the Unity Catalog service credential. 
This value will be set under the option databricks.serviceCredential\n"}},"type":"object"},"databricks:index/FeatureEngineeringKafkaConfigBackfillSource:FeatureEngineeringKafkaConfigBackfillSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource:FeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource","description":"The Delta table source containing the historic data to backfill.\nOnly the delta table name is used for backfill, the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource\n"}},"type":"object"},"databricks:index/FeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource:FeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"The full three-part (catalog, schema, table) name of the Delta table\n"},"timeseriesColumn":{"type":"string","description":"The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"]},"databricks:index/FeatureEngineeringKafkaConfigKeySchema:FeatureEngineeringKafkaConfigKeySchema":{"properties":{"jsonSchema":{"type":"string","description":"Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/FeatureEngineeringKafkaConfigProviderConfig:FeatureEngineeringKafkaConfigProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/FeatureEngineeringKafkaConfigSubscriptionMode:FeatureEngineeringKafkaConfigSubscriptionMode":{"properties":{"assign":{"type":"string","description":"A JSON string that contains the specific topic-partitions to consume from.\nFor example, for '{\"topicA\":[0,1],\"topicB\":[2,4]}', topicA's 0'th and 1st partitions will be consumed from\n"},"subscribe":{"type":"string","description":"A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'\n"},"subscribePattern":{"type":"string","description":"A regular expression matching topics to subscribe to. 
For example, 'topic.*' will subscribe to all topics starting with 'topic'\n"}},"type":"object"},"databricks:index/FeatureEngineeringKafkaConfigValueSchema:FeatureEngineeringKafkaConfigValueSchema":{"properties":{"jsonSchema":{"type":"string","description":"Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/FeatureEngineeringMaterializedFeatureOfflineStoreConfig:FeatureEngineeringMaterializedFeatureOfflineStoreConfig":{"properties":{"catalogName":{"type":"string"},"schemaName":{"type":"string"},"tableNamePrefix":{"type":"string"}},"type":"object","required":["catalogName","schemaName","tableNamePrefix"]},"databricks:index/FeatureEngineeringMaterializedFeatureOnlineStoreConfig:FeatureEngineeringMaterializedFeatureOnlineStoreConfig":{"properties":{"catalogName":{"type":"string"},"onlineStoreName":{"type":"string","description":"The name of the target online store\n"},"schemaName":{"type":"string"},"tableNamePrefix":{"type":"string"}},"type":"object","required":["catalogName","onlineStoreName","schemaName","tableNamePrefix"]},"databricks:index/FeatureEngineeringMaterializedFeatureProviderConfig:FeatureEngineeringMaterializedFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/FileProviderConfig:FileProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/GitCredentialProviderConfig:GitCredentialProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/GlobalInitScriptProviderConfig:GlobalInitScriptProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/GrantProviderConfig:GrantProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/GrantsGrant:GrantsGrant":{"properties":{"principal":{"type":"string"},"privileges":{"type":"array","items":{"type":"string"}}},"type":"object","required":["principal","privileges"]},"databricks:index/GrantsProviderConfig:GrantsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/InstancePoolAwsAttributes:InstancePoolAwsAttributes":{"properties":{"availability":{"type":"string","description":"(String) Availability type used for all instances in the pool. Only `ON_DEMAND` and `SPOT` are supported.\n","willReplaceOnChanges":true},"instanceProfileArn":{"type":"string","description":"Nodes belonging to the pool will only be placed on AWS instances with this instance profile. 
Please see\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eresource documentation for extended examples on adding a valid instance profile using Pulumi.\n","willReplaceOnChanges":true},"spotBidPricePercent":{"type":"integer","description":"(Integer) The max price for AWS spot instances, as a percentage of the corresponding instance type's on-demand price. For example, if this field is set to 50, and the instance pool needs a new i3.xlarge spot instance, then the max price is half of the price of on-demand i3.xlarge instances. Similarly, if this field is set to 200, the max price is twice the price of on-demand i3.xlarge instances. If not specified, the *default value is 100*. When spot instances are requested for this instance pool, only spot instances whose max price percentage matches this field are considered. *For safety, this field cannot be greater than 10000.*\n","willReplaceOnChanges":true},"zoneId":{"type":"string","description":"(String) Identifier for the availability zone/datacenter in which the instance pool resides. This string is of the form like `\"us-west-2a\"`. The provided availability zone must be in the same region as the Databricks deployment. For example, `\"us-west-2a\"` is not a valid zone ID if the Databricks deployment resides in the `\"us-east-1\"` region. If not specified, a default zone is used. You can find the list of available zones as well as the default value by using the [List Zones API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistavailablezones).\n","willReplaceOnChanges":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["zoneId"]}}},"databricks:index/InstancePoolAzureAttributes:InstancePoolAzureAttributes":{"properties":{"availability":{"type":"string","description":"Availability type used for all nodes. Valid values are `SPOT_AZURE` and `ON_DEMAND_AZURE`.\n","willReplaceOnChanges":true},"spotBidMaxPrice":{"type":"number","description":"The max bid price used for Azure spot instances. You can set this to greater than or equal to the current spot price. You can also set this to `-1`, which specifies that the instance cannot be evicted on the basis of price. The price for the instance will be the current price for spot instances or the price for a standard instance.\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/InstancePoolDiskSpec:InstancePoolDiskSpec":{"properties":{"diskCount":{"type":"integer","description":"(Integer) The number of disks to attach to each instance. This feature is only enabled for supported node types. Users can choose up to the limit of the disks supported by the node type. 
For node types with no local disk, at least one disk needs to be specified.\n"},"diskSize":{"type":"integer","description":"(Integer) The size of each disk (in GiB) to attach.\n"},"diskType":{"$ref":"#/types/databricks:index/InstancePoolDiskSpecDiskType:InstancePoolDiskSpecDiskType"}},"type":"object"},"databricks:index/InstancePoolDiskSpecDiskType:InstancePoolDiskSpecDiskType":{"properties":{"azureDiskVolumeType":{"type":"string","willReplaceOnChanges":true},"ebsVolumeType":{"type":"string","willReplaceOnChanges":true}},"type":"object"},"databricks:index/InstancePoolGcpAttributes:InstancePoolGcpAttributes":{"properties":{"gcpAvailability":{"type":"string","description":"Availability type used for all nodes. Valid values are `PREEMPTIBLE_GCP`, `PREEMPTIBLE_WITH_FALLBACK_GCP` and `ON_DEMAND_GCP`, default: `ON_DEMAND_GCP`.\n","willReplaceOnChanges":true},"localSsdCount":{"type":"integer","description":"Number of local SSD disks (each is 375GB in size) that will be attached to each node of the cluster.\n","willReplaceOnChanges":true},"zoneId":{"type":"string","description":"Identifier for the availability zone/datacenter in which the cluster resides. This string will be of a form like `us-central1-a`. The provided availability zone must be in the same region as the Databricks workspace.\n","willReplaceOnChanges":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["localSsdCount","zoneId"]}}},"databricks:index/InstancePoolInstancePoolFleetAttributes:InstancePoolInstancePoolFleetAttributes":{"properties":{"fleetOnDemandOption":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributesFleetOnDemandOption:InstancePoolInstancePoolFleetAttributesFleetOnDemandOption","willReplaceOnChanges":true},"fleetSpotOption":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributesFleetSpotOption:InstancePoolInstancePoolFleetAttributesFleetSpotOption","willReplaceOnChanges":true},"launchTemplateOverrides":{"type":"array","items":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributesLaunchTemplateOverride:InstancePoolInstancePoolFleetAttributesLaunchTemplateOverride"},"willReplaceOnChanges":true}},"type":"object","required":["launchTemplateOverrides"]},"databricks:index/InstancePoolInstancePoolFleetAttributesFleetOnDemandOption:InstancePoolInstancePoolFleetAttributesFleetOnDemandOption":{"properties":{"allocationStrategy":{"type":"string","willReplaceOnChanges":true},"instancePoolsToUseCount":{"type":"integer"}},"type":"object","required":["allocationStrategy"]},"databricks:index/InstancePoolInstancePoolFleetAttributesFleetSpotOption:InstancePoolInstancePoolFleetAttributesFleetSpotOption":{"properties":{"allocationStrategy":{"type":"string","willReplaceOnChanges":true},"instancePoolsToUseCount":{"type":"integer"}},"type":"object","required":["allocationStrategy"]},"databricks:index/InstancePoolInstancePoolFleetAttributesLaunchTemplateOverride:InstancePoolInstancePoolFleetAttributesLaunchTemplateOverride":{"properties":{"availabilityZone":{"type":"string","willReplaceOnChanges":true},"instanceType":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["availabilityZone","instanceType"]},"databricks:index/InstancePoolNodeTypeFlexibility:InstancePoolNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"},"description":"list of alternative node types that will be used if main node type isn't available.  
Follow the [documentation](https://learn.microsoft.com/en-us/azure/databricks/compute/flexible-node-types#fallback-instance-type-requirements) for requirements on selection of alternative node types.\n","willReplaceOnChanges":true}},"type":"object","required":["alternateNodeTypeIds"]},"databricks:index/InstancePoolPreloadedDockerImage:InstancePoolPreloadedDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/InstancePoolPreloadedDockerImageBasicAuth:InstancePoolPreloadedDockerImageBasicAuth","description":"`basic_auth.username` and `basic_auth.password` for Docker repository. Docker registry credentials are encrypted when they are stored in Databricks internal storage and when they are passed to a registry upon fetching Docker images at cluster launch.  For better security, these credentials should be stored in the secret scope and referred using secret path syntax: `{{secrets/scope/key}}`, otherwise other users of the workspace may access them via UI/API.\n\nExample usage with\u003cspan pulumi-lang-nodejs=\" azurermContainerRegistry \" pulumi-lang-dotnet=\" AzurermContainerRegistry \" pulumi-lang-go=\" azurermContainerRegistry \" pulumi-lang-python=\" azurerm_container_registry \" pulumi-lang-yaml=\" azurermContainerRegistry \" pulumi-lang-java=\" azurermContainerRegistry \"\u003e azurerm_container_registry \u003c/span\u003eand docker_registry_image, that you can adapt to your specific use-case:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as docker from \"@pulumi/docker\";\n\nconst _this = new docker.RegistryImage(\"this\", {\n    build: [{}],\n    name: `${thisAzurermContainerRegistry.loginServer}/sample:latest`,\n});\nconst thisInstancePool = new databricks.InstancePool(\"this\", {preloadedDockerImages: [{\n    url: _this.name,\n    basicAuth: {\n        username: thisAzurermContainerRegistry.adminUsername,\n        password: thisAzurermContainerRegistry.adminPassword,\n    },\n}]});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_docker as docker\n\nthis = docker.RegistryImage(\"this\",\n    build=[{}],\n    name=f\"{this_azurerm_container_registry['loginServer']}/sample:latest\")\nthis_instance_pool = databricks.InstancePool(\"this\", preloaded_docker_images=[{\n    \"url\": this.name,\n    \"basic_auth\": {\n        \"username\": this_azurerm_container_registry[\"adminUsername\"],\n        \"password\": this_azurerm_container_registry[\"adminPassword\"],\n    },\n}])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Docker = Pulumi.Docker;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Docker.RegistryImage(\"this\", new()\n    {\n        Build = new[]\n        {\n            null,\n        },\n        Name = $\"{thisAzurermContainerRegistry.LoginServer}/sample:latest\",\n    });\n\n    var thisInstancePool = new Databricks.InstancePool(\"this\", new()\n    {\n        PreloadedDockerImages = new[]\n        {\n            new Databricks.Inputs.InstancePoolPreloadedDockerImageArgs\n            {\n                Url = @this.Name,\n                BasicAuth = new Databricks.Inputs.InstancePoolPreloadedDockerImageBasicAuthArgs\n                {\n                    Username = thisAzurermContainerRegistry.AdminUsername,\n                    Password = thisAzurermContainerRegistry.AdminPassword,\n          
      },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-docker/sdk/v4/go/docker\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := docker.NewRegistryImage(ctx, \"this\", \u0026docker.RegistryImageArgs{\n\t\t\tBuild: docker.RegistryImageBuildArgs{\n\t\t\t\tmap[string]interface{}{},\n\t\t\t},\n\t\t\tName: pulumi.Sprintf(\"%v/sample:latest\", thisAzurermContainerRegistry.LoginServer),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewInstancePool(ctx, \"this\", \u0026databricks.InstancePoolArgs{\n\t\t\tPreloadedDockerImages: databricks.InstancePoolPreloadedDockerImageArray{\n\t\t\t\t\u0026databricks.InstancePoolPreloadedDockerImageArgs{\n\t\t\t\t\tUrl: this.Name,\n\t\t\t\t\tBasicAuth: \u0026databricks.InstancePoolPreloadedDockerImageBasicAuthArgs{\n\t\t\t\t\t\tUsername: pulumi.Any(thisAzurermContainerRegistry.AdminUsername),\n\t\t\t\t\t\tPassword: pulumi.Any(thisAzurermContainerRegistry.AdminPassword),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.docker.RegistryImage;\nimport com.pulumi.docker.RegistryImageArgs;\nimport com.pulumi.databricks.InstancePool;\nimport com.pulumi.databricks.InstancePoolArgs;\nimport com.pulumi.databricks.inputs.InstancePoolPreloadedDockerImageArgs;\nimport com.pulumi.databricks.inputs.InstancePoolPreloadedDockerImageBasicAuthArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new RegistryImage(\"this\", RegistryImageArgs.builder()\n            .build(RegistryImageBuildArgs.builder()\n                .build())\n            .name(String.format(\"%s/sample:latest\", thisAzurermContainerRegistry.loginServer()))\n            .build());\n\n        var thisInstancePool = new InstancePool(\"thisInstancePool\", InstancePoolArgs.builder()\n            .preloadedDockerImages(InstancePoolPreloadedDockerImageArgs.builder()\n                .url(this_.name())\n                .basicAuth(InstancePoolPreloadedDockerImageBasicAuthArgs.builder()\n                    .username(thisAzurermContainerRegistry.adminUsername())\n                    .password(thisAzurermContainerRegistry.adminPassword())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: docker:RegistryImage\n    properties:\n      build:\n        - {}\n      name: ${thisAzurermContainerRegistry.loginServer}/sample:latest\n  thisInstancePool:\n    type: databricks:InstancePool\n    name: this\n    properties:\n      preloadedDockerImages:\n        - url: ${this.name}\n          basicAuth:\n            username: ${thisAzurermContainerRegistry.adminUsername}\n            password: ${thisAzurermContainerRegistry.adminPassword}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","willReplaceOnChanges":true},"url":{"type":"string","description":"URL for the Docker 
image\n","willReplaceOnChanges":true}},"type":"object","required":["url"]},"databricks:index/InstancePoolPreloadedDockerImageBasicAuth:InstancePoolPreloadedDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true,"willReplaceOnChanges":true},"username":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["password","username"]},"databricks:index/InstancePoolProviderConfig:InstancePoolProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/InstanceProfileProviderConfig:InstanceProfileProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/IpAccessListProviderConfig:IpAccessListProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobContinuous:JobContinuous":{"properties":{"pauseStatus":{"type":"string","description":"Indicate whether this continuous job is paused or not. Either `PAUSED` or `UNPAUSED`. When the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e field is omitted in the block, the server will default to using `UNPAUSED` as a value for \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e.\n"},"taskRetryMode":{"type":"string","description":"Controls task level retry behaviour. Allowed values are:\n* `NEVER` (default): The failed task will not be retried.\n* `ON_FAILURE`: Retry a failed task if at least one other task in the job is still running its first attempt. When this condition is no longer met or the retry limit is reached, the job run is cancelled and a new run is started.\n"}},"type":"object"},"databricks:index/JobDbtTask:JobDbtTask":{"properties":{"catalog":{"type":"string","description":"The name of the catalog to use inside Unity Catalog.\n"},"commands":{"type":"array","items":{"type":"string"},"description":"(Array) Series of dbt commands to execute in sequence. Every command must start with \"dbt\".\n"},"profilesDirectory":{"type":"string","description":"The relative path to the directory in the repository specified by \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e where dbt should look in for the `profiles.yml` file. If not specified, defaults to the repository's root directory. Equivalent to passing `--profile-dir` to a dbt command.\n"},"projectDirectory":{"type":"string","description":"The path where dbt should look for `dbt_project.yml`. 
Equivalent to passing `--project-dir` to the dbt CLI.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `GIT`: Relative path to the directory in the repository specified in the \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block. Defaults to the repository's root directory when not specified.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `WORKSPACE`: Absolute path to the folder in the workspace.\n"},"schema":{"type":"string","description":"The name of the schema dbt should run in. Defaults to \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n"},"source":{"type":"string","description":"The source of the project. Possible values are `WORKSPACE` and `GIT`.  Defaults to `GIT` if a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block is present in the job definition.\n"},"warehouseId":{"type":"string","description":"The ID of the SQL warehouse that dbt should execute against.\n\nYou also need to include a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block to configure the repository that contains the dbt project.\n"}},"type":"object","required":["commands"]},"databricks:index/JobDeployment:JobDeployment":{"properties":{"kind":{"type":"string"},"metadataFilePath":{"type":"string"}},"type":"object","required":["kind"]},"databricks:index/JobEmailNotifications:JobEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs. 
(It's recommended to use the corresponding setting in the \u003cspan pulumi-lang-nodejs=\"`notificationSettings`\" pulumi-lang-dotnet=\"`NotificationSettings`\" pulumi-lang-go=\"`notificationSettings`\" pulumi-lang-python=\"`notification_settings`\" pulumi-lang-yaml=\"`notificationSettings`\" pulumi-lang-java=\"`notificationSettings`\"\u003e`notification_settings`\u003c/span\u003e configuration block).\n"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run fails.\n"},"onStarts":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run starts.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when any streaming backlog thresholds are exceeded for any stream.\n\nThe following parameter is only available for the job level configuration.\n"},"onSuccesses":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run completes successfully.\n"}},"type":"object"},"databricks:index/JobEnvironment:JobEnvironment":{"properties":{"environmentKey":{"type":"string","description":"a unique identifier of the Environment.  It will be referenced from \u003cspan pulumi-lang-nodejs=\"`environmentKey`\" pulumi-lang-dotnet=\"`EnvironmentKey`\" pulumi-lang-go=\"`environmentKey`\" pulumi-lang-python=\"`environment_key`\" pulumi-lang-yaml=\"`environmentKey`\" pulumi-lang-java=\"`environmentKey`\"\u003e`environment_key`\u003c/span\u003e attribute of corresponding task.\n"},"spec":{"$ref":"#/types/databricks:index/JobEnvironmentSpec:JobEnvironmentSpec","description":"block describing the Environment. Consists of following attributes:\n"}},"type":"object","required":["environmentKey"]},"databricks:index/JobEnvironmentSpec:JobEnvironmentSpec":{"properties":{"baseEnvironment":{"type":"string"},"client":{"type":"string"},"dependencies":{"type":"array","items":{"type":"string"},"description":"List of pip dependencies, as supported by the version of pip in this environment. Each dependency is a pip requirement file line.  See [API docs](https://docs.databricks.com/api/workspace/jobs/create#environments-spec-dependencies) for more information.\n\n"},"environmentVersion":{"type":"string","description":"client version used by the environment. Each version comes with a specific Python version and a set of Python packages.\n"},"javaDependencies":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobGitSource:JobGitSource":{"properties":{"branch":{"type":"string","description":"name of the Git branch to use. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`commit`\" pulumi-lang-dotnet=\"`Commit`\" pulumi-lang-go=\"`commit`\" pulumi-lang-python=\"`commit`\" pulumi-lang-yaml=\"`commit`\" pulumi-lang-java=\"`commit`\"\u003e`commit`\u003c/span\u003e.\n"},"commit":{"type":"string","description":"hash of Git commit to use. Conflicts with \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e.\n"},"gitSnapshot":{"$ref":"#/types/databricks:index/JobGitSourceGitSnapshot:JobGitSourceGitSnapshot"},"jobSource":{"$ref":"#/types/databricks:index/JobGitSourceJobSource:JobGitSourceJobSource"},"provider":{"type":"string","description":"case insensitive name of the Git provider.  Following values are supported right now (could be a subject for change, consult [Repos API documentation](https://docs.databricks.com/dev-tools/api/latest/repos.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`.\n"},"sparseCheckout":{"$ref":"#/types/databricks:index/JobGitSourceSparseCheckout:JobGitSourceSparseCheckout"},"tag":{"type":"string","description":"name of the Git tag to use. Conflicts with \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`commit`\" pulumi-lang-dotnet=\"`Commit`\" pulumi-lang-go=\"`commit`\" pulumi-lang-python=\"`commit`\" pulumi-lang-yaml=\"`commit`\" pulumi-lang-java=\"`commit`\"\u003e`commit`\u003c/span\u003e.\n"},"url":{"type":"string","description":"URL of the Git repository to use.\n"}},"type":"object","required":["url"]},"databricks:index/JobGitSourceGitSnapshot:JobGitSourceGitSnapshot":{"properties":{"usedCommit":{"type":"string"}},"type":"object"},"databricks:index/JobGitSourceJobSource:JobGitSourceJobSource":{"properties":{"dirtyState":{"type":"string"},"importFromGitBranch":{"type":"string"},"jobConfigPath":{"type":"string"}},"type":"object","required":["importFromGitBranch","jobConfigPath"]},"databricks:index/JobGitSourceSparseCheckout:JobGitSourceSparseCheckout":{"properties":{"patterns":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobHealth:JobHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/JobHealthRule:JobHealthRule"},"description":"list of rules that are represented as objects with the following attributes:\n"}},"type":"object","required":["rules"]},"databricks:index/JobHealthRule:JobHealthRule":{"properties":{"metric":{"type":"string","description":"string specifying the metric to check, like `RUN_DURATION_SECONDS`, `STREAMING_BACKLOG_FILES`, etc. 
- check the [Jobs REST API documentation](https://docs.databricks.com/api/workspace/jobs/create#health-rules-metric) for the full list of supported metrics.\n"},"op":{"type":"string","description":"string specifying the operation used to evaluate the given metric. The only supported operation is `GREATER_THAN`.\n"},"value":{"type":"integer","description":"integer value used to compare to the given metric.\n"}},"type":"object","required":["metric","op","value"]},"databricks:index/JobJobCluster:JobJobCluster":{"properties":{"jobClusterKey":{"type":"string","description":"Identifier that can be referenced in \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block, so that cluster is shared between tasks\n"},"newCluster":{"$ref":"#/types/databricks:index/JobJobClusterNewCluster:JobJobClusterNewCluster","description":"Block with almost the same set of parameters as for\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eresource, except following (check the [REST API documentation for full list of supported parameters](https://docs.databricks.com/api/workspace/jobs/create#job_clusters-new_cluster)):\n"}},"type":"object","required":["jobClusterKey","newCluster"]},"databricks:index/JobJobClusterNewCluster:JobJobClusterNewCluster":{"properties":{"__applyPolicyDefaultValuesAllowLists":{"type":"array","items":{"type":"string"}},"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterAutoscale:JobJobClusterNewClusterAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterAwsAttributes:JobJobClusterNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterAzureAttributes:JobJobClusterNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterLogConf:JobJobClusterNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterMountInfo:JobJobClusterNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterDockerImage:JobJobClusterNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterDriverNodeTypeFlexibility:JobJobClusterNewClusterDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterGcpAttributes:JobJobClusterNewClusterGcpAttributes"},"idempotencyToken":{"type":"string","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScript:JobJobClusterNewClusterInitScript"}},"instancePoolId":{"type":"string"},"isSingleNode":{"type":"boolean"},"kind":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:in
dex/JobJobClusterNewClusterLibrary:JobJobClusterNewClusterLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterProviderConfig:JobJobClusterNewClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterWorkerNodeTypeFlexibility:JobJobClusterNewClusterWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterWorkloadType:JobJobClusterNewClusterWorkloadType","description":"isn't supported\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId"]}}},"databricks:index/JobJobClusterNewClusterAutoscale:JobJobClusterNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/JobJobClusterNewClusterAwsAttributes:JobJobClusterNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobJobClusterNewClusterAzureAttributes:JobJobClusterNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterAzureAttributesLogAnalyticsInfo:JobJobClusterNewClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/JobJobClusterNewClusterAzureAttributesLogAnalyticsInfo:JobJobClusterNewClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/JobJobClusterNewClusterClusterLogConf:JobJobClusterNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterLogConfDbfs:JobJobClusterNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterLogConfS3:JobJobClusterNewClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterLogConfVolumes:JobJobClusterNewClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/JobJobClusterNewClusterClusterLogConfDbfs:JobJobClusterNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterClusterLogCo
nfS3:JobJobClusterNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterClusterLogConfVolumes:JobJobClusterNewClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterClusterMountInfo:JobJobClusterNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo:JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo:JobJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/JobJobClusterNewClusterDockerImage:JobJobClusterNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterDockerImageBasicAuth:JobJobClusterNewClusterDockerImageBasicAuth"},"url":{"type":"string","description":"URL of the Docker image\n"}},"type":"object","required":["url"]},"databricks:index/JobJobClusterNewClusterDockerImageBasicAuth:JobJobClusterNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/JobJobClusterNewClusterDriverNodeTypeFlexibility:JobJobClusterNewClusterDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobJobClusterNewClusterGcpAttributes:JobJobClusterNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobJobClusterNewClusterInitScript:JobJobClusterNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptAbfss:JobJobClusterNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptDbfs:JobJobClusterNewClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptFile:JobJobClusterNewClusterInitScriptFile","description":"block consisting of single string 
fields:\n"},"gcs":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptGcs:JobJobClusterNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptS3:JobJobClusterNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptVolumes:JobJobClusterNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterInitScriptWorkspace:JobJobClusterNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/JobJobClusterNewClusterInitScriptAbfss:JobJobClusterNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptDbfs:JobJobClusterNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptFile:JobJobClusterNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptGcs:JobJobClusterNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptS3:JobJobClusterNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptVolumes:JobJobClusterNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterInitScriptWorkspace:JobJobClusterNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobJobClusterNewClusterLibrary:JobJobClusterNewClusterLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterLibraryCran:JobJobClusterNewClusterLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterLibraryMaven:JobJobClusterNewClusterLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterLibraryProviderConfig:JobJobClusterNewClusterLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterLibraryPypi:JobJobClusterNewClusterLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobJobClusterNewClusterLibraryCran:JobJobClusterNewClusterLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobJobClusterNewClusterLibraryMaven:JobJobClusterNewClusterLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobJobClusterNewClusterLibraryProviderConfig:JobJobClusterNewClusterLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobJobClusterNewClusterLibraryPypi:JobJobClusterNewClusterLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobJobClusterNewClusterProviderConfig:JobJobClusterNewClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobJobClusterNewClusterWorkerNodeTypeFlexibility:JobJobClusterNewClusterWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobJobClusterNewClusterWorkloadType:JobJobClusterNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/JobJobClusterNewClusterWorkloadTypeClients:JobJobClusterNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/JobJobClusterNewClusterWorkloadTypeClients:JobJobClusterNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/JobLibrary:JobLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobLibraryCran:JobLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobLibraryMaven:JobLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobLibraryProviderConfig:JobLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobLibraryPypi:JobLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobLibraryCran:JobLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobLibraryMaven:JobLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobLibraryProviderConfig:JobLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobLibraryPypi:JobLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobNewCluster:JobNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/JobNewClusterAutoscale:JobNewClusterAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/JobNewClusterAwsAttributes:JobNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/JobNewClusterAzureAttributes:JobNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/JobNewClusterClusterLogConf:JobNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/JobNewClusterClusterMountInfo:JobNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/JobNewClusterDockerImage:JobNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobNewClusterDriverNodeTypeFlexibility:JobNewClusterDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/JobNewClusterGcpAttributes:JobNewClusterGcpAttributes"},"idempotencyToken":{"type":"string","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobNewClusterInitScript:JobNewClusterInitScript"}},"instancePoolId":{"type":"string"},"isSingleNode":{"type":"boolean"},"kind":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobNewClusterLibrary:JobNewClusterLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/JobNewClusterProviderConfig:JobNewClusterProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobNewClusterWorkerNodeTypeFlexibility:JobNewClusterWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/JobNewClusterWorkloadType:JobNewClusterWorkloadType","description":"isn't supported\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId"]}}},"databricks:index/JobNewClusterAutoscale:JobNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/JobNewClusterAwsAttributes:JobNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobNewClusterAzureAttributes:JobNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/JobNewClusterAzureAttributesLogAnalyticsInfo:JobNewClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/JobNewClusterAzureAttributesLogAnalyticsInfo:JobNewClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/JobNewClusterClusterLogConf:JobNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/JobNewClusterClusterLogConfDbfs:JobNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/JobNewClusterClusterLogConfS3:JobNewClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/JobNewClusterClusterLogConfVolumes:JobNewClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/JobNewClusterClusterLogConfDbfs:JobNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterClusterLogConfS3:JobNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterClusterLogConfVolumes:JobNewClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterClusterMountInfo:JobNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/JobNewClusterClusterMountInfoNetworkFilesystemInfo:JobNewClusterClusterMountInfoNetwo
rkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/JobNewClusterClusterMountInfoNetworkFilesystemInfo:JobNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/JobNewClusterDockerImage:JobNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/JobNewClusterDockerImageBasicAuth:JobNewClusterDockerImageBasicAuth"},"url":{"type":"string","description":"URL of the Docker image\n"}},"type":"object","required":["url"]},"databricks:index/JobNewClusterDockerImageBasicAuth:JobNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/JobNewClusterDriverNodeTypeFlexibility:JobNewClusterDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobNewClusterGcpAttributes:JobNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobNewClusterInitScript:JobNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptAbfss:JobNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptDbfs:JobNewClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptFile:JobNewClusterInitScriptFile","description":"block consisting of single string 
fields:\n"},"gcs":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptGcs:JobNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptS3:JobNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptVolumes:JobNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/JobNewClusterInitScriptWorkspace:JobNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/JobNewClusterInitScriptAbfss:JobNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptDbfs:JobNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptFile:JobNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptGcs:JobNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptS3:JobNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptVolumes:JobNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterInitScriptWorkspace:JobNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobNewClusterLibrary:JobNewClusterLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobNewClusterLibraryCran:JobNewClusterLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobNewClusterLibraryMaven:JobNewClusterLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobNewClusterLibraryProviderConfig:JobNewClusterLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobNewClusterLibraryPypi:JobNewClusterLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobNewClusterLibraryCran:JobNewClusterLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobNewClusterLibraryMaven:JobNewClusterLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobNewClusterLibraryProviderConfig:JobNewClusterLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobNewClusterLibraryPypi:JobNewClusterLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobNewClusterProviderConfig:JobNewClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobNewClusterWorkerNodeTypeFlexibility:JobNewClusterWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobNewClusterWorkloadType:JobNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/JobNewClusterWorkloadTypeClients:JobNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/JobNewClusterWorkloadTypeClients:JobNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/JobNotebookTask:JobNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eand in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eor the run-now override parameters, the default value from the notebook will be used. 
Retrieve these parameters in a notebook using `dbutils.widgets.get`.\n"},"notebookPath":{"type":"string","description":"The path of the\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the notebook, can only be `WORKSPACE` or `GIT`. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e. If the value is empty, the task will use `GIT` if \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e is defined and `WORKSPACE` otherwise.\n"},"warehouseId":{"type":"string","description":"ID of the (the databricks_sql_endpoint) that will be used to execute the task with SQL notebook.\n"}},"type":"object","required":["notebookPath"]},"databricks:index/JobNotificationSettings:JobNotificationSettings":{"properties":{"noAlertForCanceledRuns":{"type":"boolean","description":"(Bool) don't send alert for cancelled runs.\n\nThe following parameter is only available on task level.\n"},"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs.\n"}},"type":"object"},"databricks:index/JobParameter:JobParameter":{"properties":{"default":{"type":"string","description":"Default value of the parameter.\n\n*You can use this block only together with \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e blocks, not with the legacy tasks specification!*\n"},"name":{"type":"string","description":"The name of the defined parameter. May only contain alphanumeric characters, `_`, `-`, and `.`.\n"}},"type":"object","required":["default","name"]},"databricks:index/JobPipelineTask:JobPipelineTask":{"properties":{"fullRefresh":{"type":"boolean","description":"(Bool) Specifies if there should be full refresh of the pipeline.\n\n\u003e The following configuration blocks are only supported inside a \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block\n"},"pipelineId":{"type":"string","description":"The pipeline's unique ID.\n"}},"type":"object","required":["pipelineId"]},"databricks:index/JobProviderConfig:JobProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobPythonWheelTask:JobPythonWheelTask":{"properties":{"entryPoint":{"type":"string","description":"Python function as entry point for the task\n"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"Named parameters for the task\n"},"packageName":{"type":"string","description":"Name of Python package\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"Parameters for the task\n"}},"type":"object"},"databricks:index/JobQueue:JobQueue":{"properties":{"enabled":{"type":"boolean","description":"If true, enable queueing for the job.\n"}},"type":"object","required":["enabled"]},"databricks:index/JobRunAs:JobRunAs":{"properties":{"groupName":{"type":"string"},"servicePrincipalName":{"type":"string","description":"The application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role.\n\nExample:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Job(\"this\", {runAs: {\n    servicePrincipalName: \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Job(\"this\", run_as={\n    \"service_principal_name\": \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Job(\"this\", new()\n    {\n        RunAs = new Databricks.Inputs.JobRunAsArgs\n        {\n            ServicePrincipalName = \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewJob(ctx, \"this\", \u0026databricks.JobArgs{\n\t\t\tRunAs: \u0026databricks.JobRunAsArgs{\n\t\t\t\tServicePrincipalName: pulumi.String(\"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobRunAsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Job(\"this\", JobArgs.builder()\n            .runAs(JobRunAsArgs.builder()\n                .servicePrincipalName(\"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Job\n    properties:\n      runAs:\n        servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"userName":{"type":"string","description":"The email of an active workspace user. 
Non-admin users can only set this field to their own email.\n"}},"type":"object"},"databricks:index/JobRunJobTask:JobRunJobTask":{"properties":{"jobId":{"type":"integer","description":"(String) ID of the job\n"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Job parameters for the task\n"}},"type":"object","required":["jobId"]},"databricks:index/JobSchedule:JobSchedule":{"properties":{"pauseStatus":{"type":"string","description":"Indicate whether this schedule is paused or not. Either `PAUSED` or `UNPAUSED`. When the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e field is omitted and a schedule is provided, the server will default to using `UNPAUSED` as a value for \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e.\n"},"quartzCronExpression":{"type":"string","description":"A [Cron expression using Quartz syntax](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) that describes the schedule for a job. This field is required.\n"},"timezoneId":{"type":"string","description":"A Java timezone ID. The schedule for a job will be resolved with respect to this timezone. See Java TimeZone for details. This field is required.\n"}},"type":"object","required":["quartzCronExpression","timezoneId"]},"databricks:index/JobSparkJarTask:JobSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string","description":"The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs of the job will fail.\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Parameters passed to the main method.\n"}},"type":"object"},"databricks:index/JobSparkPythonTask:JobSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command line parameters passed to the Python file.\n"},"pythonFile":{"type":"string","description":"The URI of the Python file to be executed. Cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved from the local Databricks workspace or cloud location (if the\u003cspan pulumi-lang-nodejs=\" pythonFile \" pulumi-lang-dotnet=\" PythonFile \" pulumi-lang-go=\" pythonFile \" pulumi-lang-python=\" python_file \" pulumi-lang-yaml=\" pythonFile \" pulumi-lang-java=\" pythonFile \"\u003e python_file \u003c/span\u003ehas a URI format). 
When set to `GIT`, the Python file will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e.\n* `WORKSPACE`: The Python file is located in a Databricks workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"}},"type":"object","required":["pythonFile"]},"databricks:index/JobSparkSubmitTask:JobSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command-line parameters passed to spark submit.\n"}},"type":"object"},"databricks:index/JobTask:JobTask":{"properties":{"cleanRoomsNotebookTask":{"$ref":"#/types/databricks:index/JobTaskCleanRoomsNotebookTask:JobTaskCleanRoomsNotebookTask"},"compute":{"$ref":"#/types/databricks:index/JobTaskCompute:JobTaskCompute","description":"Task level compute configuration. This block is documented below.\n\n\u003e If no \u003cspan pulumi-lang-nodejs=\"`jobClusterKey`\" pulumi-lang-dotnet=\"`JobClusterKey`\" pulumi-lang-go=\"`jobClusterKey`\" pulumi-lang-python=\"`job_cluster_key`\" pulumi-lang-yaml=\"`jobClusterKey`\" pulumi-lang-java=\"`jobClusterKey`\"\u003e`job_cluster_key`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`existingClusterId`\" pulumi-lang-dotnet=\"`ExistingClusterId`\" pulumi-lang-go=\"`existingClusterId`\" pulumi-lang-python=\"`existing_cluster_id`\" pulumi-lang-yaml=\"`existingClusterId`\" pulumi-lang-java=\"`existingClusterId`\"\u003e`existing_cluster_id`\u003c/span\u003e, or \u003cspan pulumi-lang-nodejs=\"`newCluster`\" pulumi-lang-dotnet=\"`NewCluster`\" pulumi-lang-go=\"`newCluster`\" pulumi-lang-python=\"`new_cluster`\" pulumi-lang-yaml=\"`newCluster`\" pulumi-lang-java=\"`newCluster`\"\u003e`new_cluster`\u003c/span\u003e were specified in task definition, then task will executed using serverless compute.\n"},"conditionTask":{"$ref":"#/types/databricks:index/JobTaskConditionTask:JobTaskConditionTask"},"dashboardTask":{"$ref":"#/types/databricks:index/JobTaskDashboardTask:JobTaskDashboardTask"},"dbtCloudTask":{"$ref":"#/types/databricks:index/JobTaskDbtCloudTask:JobTaskDbtCloudTask"},"dbtPlatformTask":{"$ref":"#/types/databricks:index/JobTaskDbtPlatformTask:JobTaskDbtPlatformTask"},"dbtTask":{"$ref":"#/types/databricks:index/JobTaskDbtTask:JobTaskDbtTask"},"dependsOns":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskDependsOn:JobTaskDependsOn"},"description":"block specifying dependency(-ies) for a given task.\n"},"description":{"type":"string","description":"description for this task.\n"},"disableAutoOptimization":{"type":"boolean","description":"A flag to disable auto optimization in serverless tasks.\n"},"disabled":{"type":"boolean"},"emailNotifications":{"$ref":"#/types/databricks:index/JobTaskEmailNotifications:JobTaskEmailNotifications","description":"An optional block to specify a set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. 
This block is documented below.\n"},"environmentKey":{"type":"string","description":"identifier of an \u003cspan pulumi-lang-nodejs=\"`environment`\" pulumi-lang-dotnet=\"`Environment`\" pulumi-lang-go=\"`environment`\" pulumi-lang-python=\"`environment`\" pulumi-lang-yaml=\"`environment`\" pulumi-lang-java=\"`environment`\"\u003e`environment`\u003c/span\u003e block that is used to specify libraries.  Required for some tasks (\u003cspan pulumi-lang-nodejs=\"`sparkPythonTask`\" pulumi-lang-dotnet=\"`SparkPythonTask`\" pulumi-lang-go=\"`sparkPythonTask`\" pulumi-lang-python=\"`spark_python_task`\" pulumi-lang-yaml=\"`sparkPythonTask`\" pulumi-lang-java=\"`sparkPythonTask`\"\u003e`spark_python_task`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`pythonWheelTask`\" pulumi-lang-dotnet=\"`PythonWheelTask`\" pulumi-lang-go=\"`pythonWheelTask`\" pulumi-lang-python=\"`python_wheel_task`\" pulumi-lang-yaml=\"`pythonWheelTask`\" pulumi-lang-java=\"`pythonWheelTask`\"\u003e`python_wheel_task`\u003c/span\u003e, ...) running on serverless compute.\n"},"existingClusterId":{"type":"string","description":"Identifier of the interactive cluster to run job on.  *Note: running tasks on interactive clusters may lead to increased costs!*\n"},"forEachTask":{"$ref":"#/types/databricks:index/JobTaskForEachTask:JobTaskForEachTask"},"genAiComputeTask":{"$ref":"#/types/databricks:index/JobTaskGenAiComputeTask:JobTaskGenAiComputeTask"},"health":{"$ref":"#/types/databricks:index/JobTaskHealth:JobTaskHealth","description":"block described below that specifies health conditions for a given task.\n"},"jobClusterKey":{"type":"string","description":"Identifier of the Job cluster specified in the \u003cspan pulumi-lang-nodejs=\"`jobCluster`\" pulumi-lang-dotnet=\"`JobCluster`\" pulumi-lang-go=\"`jobCluster`\" pulumi-lang-python=\"`job_cluster`\" pulumi-lang-yaml=\"`jobCluster`\" pulumi-lang-java=\"`jobCluster`\"\u003e`job_cluster`\u003c/span\u003e block.\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskLibrary:JobTaskLibrary"},"description":"(Set) An optional list of libraries to be installed on the cluster that will execute the job.\n"},"maxRetries":{"type":"integer","description":"(Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a `FAILED` or `INTERNAL_ERROR` lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: `PENDING`, `RUNNING`, `TERMINATING`, `TERMINATED`, `SKIPPED` or `INTERNAL_ERROR`.\n"},"minRetryIntervalMillis":{"type":"integer","description":"(Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.\n"},"newCluster":{"$ref":"#/types/databricks:index/JobTaskNewCluster:JobTaskNewCluster","description":"Task will run on a dedicated cluster.  See\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003edocumentation for specification. 
*Some parameters, such as \u003cspan pulumi-lang-nodejs=\"`autoterminationMinutes`\" pulumi-lang-dotnet=\"`AutoterminationMinutes`\" pulumi-lang-go=\"`autoterminationMinutes`\" pulumi-lang-python=\"`autotermination_minutes`\" pulumi-lang-yaml=\"`autoterminationMinutes`\" pulumi-lang-java=\"`autoterminationMinutes`\"\u003e`autotermination_minutes`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`isPinned`\" pulumi-lang-dotnet=\"`IsPinned`\" pulumi-lang-go=\"`isPinned`\" pulumi-lang-python=\"`is_pinned`\" pulumi-lang-yaml=\"`isPinned`\" pulumi-lang-java=\"`isPinned`\"\u003e`is_pinned`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workloadType`\" pulumi-lang-dotnet=\"`WorkloadType`\" pulumi-lang-go=\"`workloadType`\" pulumi-lang-python=\"`workload_type`\" pulumi-lang-yaml=\"`workloadType`\" pulumi-lang-java=\"`workloadType`\"\u003e`workload_type`\u003c/span\u003e aren't supported!*\n"},"notebookTask":{"$ref":"#/types/databricks:index/JobTaskNotebookTask:JobTaskNotebookTask"},"notificationSettings":{"$ref":"#/types/databricks:index/JobTaskNotificationSettings:JobTaskNotificationSettings","description":"An optional block controlling the notification settings on the job level documented below.\n"},"pipelineTask":{"$ref":"#/types/databricks:index/JobTaskPipelineTask:JobTaskPipelineTask"},"powerBiTask":{"$ref":"#/types/databricks:index/JobTaskPowerBiTask:JobTaskPowerBiTask"},"pythonWheelTask":{"$ref":"#/types/databricks:index/JobTaskPythonWheelTask:JobTaskPythonWheelTask"},"retryOnTimeout":{"type":"boolean","description":"(Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.\n"},"runIf":{"type":"string","description":"An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. One of `ALL_SUCCESS`, `AT_LEAST_ONE_SUCCESS`, `NONE_FAILED`, `ALL_DONE`, `AT_LEAST_ONE_FAILED` or `ALL_FAILED`. When omitted, defaults to `ALL_SUCCESS`.\n"},"runJobTask":{"$ref":"#/types/databricks:index/JobTaskRunJobTask:JobTaskRunJobTask"},"sparkJarTask":{"$ref":"#/types/databricks:index/JobTaskSparkJarTask:JobTaskSparkJarTask"},"sparkPythonTask":{"$ref":"#/types/databricks:index/JobTaskSparkPythonTask:JobTaskSparkPythonTask"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/JobTaskSparkSubmitTask:JobTaskSparkSubmitTask"},"sqlTask":{"$ref":"#/types/databricks:index/JobTaskSqlTask:JobTaskSqlTask"},"taskKey":{"type":"string","description":"string specifying an unique key for a given task.\n* `*_task` - (Required) one of the specific task blocks described below:\n"},"timeoutSeconds":{"type":"integer","description":"(Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.\n"},"webhookNotifications":{"$ref":"#/types/databricks:index/JobTaskWebhookNotifications:JobTaskWebhookNotifications","description":"(List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begins, completes or fails. The default behavior is to not send any notifications. 
This field is a block and is documented below.\n"}},"type":"object","required":["taskKey"],"language":{"nodejs":{"requiredOutputs":["retryOnTimeout","taskKey"]}}},"databricks:index/JobTaskCleanRoomsNotebookTask:JobTaskCleanRoomsNotebookTask":{"properties":{"cleanRoomName":{"type":"string","description":"The clean room that the notebook belongs to.\n"},"etag":{"type":"string","description":"Checksum to validate the freshness of the notebook resource.\n"},"notebookBaseParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"Base parameters to be used for the clean room notebook job.\n"},"notebookName":{"type":"string","description":"Name of the notebook being run.\n"}},"type":"object","required":["cleanRoomName","notebookName"]},"databricks:index/JobTaskCompute:JobTaskCompute":{"properties":{"hardwareAccelerator":{"type":"string","description":"Hardware accelerator configuration for Serverless GPU workloads. Supported values are:\n* `GPU_1xA10`: GPU_1xA10: Single A10 GPU configuration.\n* `GPU_8xH100`: GPU_8xH100: 8x H100 GPU configuration.\n"}},"type":"object"},"databricks:index/JobTaskConditionTask:JobTaskConditionTask":{"properties":{"left":{"type":"string","description":"The left operand of the condition task. It could be a string value, job state, or a parameter reference.\n"},"op":{"type":"string","description":"The string specifying the operation used to compare operands.  Currently, following operators are supported: `EQUAL_TO`, `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `NOT_EQUAL`. (Check the [API docs](https://docs.databricks.com/api/workspace/jobs/create) for the latest information).\n\nThis task does not require a cluster to execute and does not support retries or notifications.\n"},"right":{"type":"string","description":"The right operand of the condition task. It could be a string value, job state, or parameter reference.\n"}},"type":"object","required":["left","op","right"]},"databricks:index/JobTaskDashboardTask:JobTaskDashboardTask":{"properties":{"dashboardId":{"type":"string","description":"The identifier of the dashboard to refresh\n"},"filters":{"type":"object","additionalProperties":{"type":"string"}},"subscription":{"$ref":"#/types/databricks:index/JobTaskDashboardTaskSubscription:JobTaskDashboardTaskSubscription","description":"Represents a subscription configuration for scheduled dashboard snapshots.\n"},"warehouseId":{"type":"string","description":"The warehouse id to execute the dashboard with for the schedule. 
If not specified, will use the default warehouse of dashboard\n"}},"type":"object"},"databricks:index/JobTaskDashboardTaskSubscription:JobTaskDashboardTaskSubscription":{"properties":{"customSubject":{"type":"string","description":"Allows users to specify a custom subject line on the email sent to subscribers.\n"},"paused":{"type":"boolean","description":"When true, the subscription will not send emails.\n"},"subscribers":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskDashboardTaskSubscriptionSubscriber:JobTaskDashboardTaskSubscriptionSubscriber"},"description":"The list of subscribers to send the snapshot of the dashboard to.\n"}},"type":"object"},"databricks:index/JobTaskDashboardTaskSubscriptionSubscriber:JobTaskDashboardTaskSubscriptionSubscriber":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string","description":"A snapshot of the dashboard will be sent to the user's email when the \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e field is present.\n"}},"type":"object"},"databricks:index/JobTaskDbtCloudTask:JobTaskDbtCloudTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"dbtCloudJobId":{"type":"integer"}},"type":"object"},"databricks:index/JobTaskDbtPlatformTask:JobTaskDbtPlatformTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"dbtPlatformJobId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskDbtTask:JobTaskDbtTask":{"properties":{"catalog":{"type":"string","description":"The name of the catalog to use inside Unity Catalog.\n"},"commands":{"type":"array","items":{"type":"string"},"description":"(Array) Series of dbt commands to execute in sequence. Every command must start with \"dbt\".\n"},"profilesDirectory":{"type":"string","description":"The relative path to the directory in the repository specified by \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e where dbt should look in for the `profiles.yml` file. If not specified, defaults to the repository's root directory. Equivalent to passing `--profile-dir` to a dbt command.\n"},"projectDirectory":{"type":"string","description":"The path where dbt should look for `dbt_project.yml`. 
Equivalent to passing `--project-dir` to the dbt CLI.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `GIT`: Relative path to the directory in the repository specified in the \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block. Defaults to the repository's root directory when not specified.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `WORKSPACE`: Absolute path to the folder in the workspace.\n"},"schema":{"type":"string","description":"The name of the schema dbt should run in. Defaults to \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n"},"source":{"type":"string","description":"The source of the project. Possible values are `WORKSPACE` and `GIT`.  Defaults to `GIT` if a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block is present in the job definition.\n"},"warehouseId":{"type":"string","description":"The ID of the SQL warehouse that dbt should execute against.\n\nYou also need to include a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block to configure the repository that contains the dbt project.\n"}},"type":"object","required":["commands"]},"databricks:index/JobTaskDependsOn:JobTaskDependsOn":{"properties":{"outcome":{"type":"string","description":"Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are `\"true\"` or `\"false\"`.\n\n\u003e Similar to the tasks themselves, each dependency inside the task need to be declared in alphabetical order with respect to\u003cspan pulumi-lang-nodejs=\" taskKey \" pulumi-lang-dotnet=\" TaskKey \" pulumi-lang-go=\" taskKey \" pulumi-lang-python=\" task_key \" pulumi-lang-yaml=\" taskKey \" pulumi-lang-java=\" taskKey \"\u003e task_key \u003c/span\u003ein order to get consistent Pulumi diffs.\n"},"taskKey":{"type":"string","description":"The name of the task this task depends on.\n"}},"type":"object","required":["taskKey"]},"databricks:index/JobTaskEmailNotifications:JobTaskEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs. 
(It's recommended to use the corresponding setting in the \u003cspan pulumi-lang-nodejs=\"`notificationSettings`\" pulumi-lang-dotnet=\"`NotificationSettings`\" pulumi-lang-go=\"`notificationSettings`\" pulumi-lang-python=\"`notification_settings`\" pulumi-lang-yaml=\"`notificationSettings`\" pulumi-lang-java=\"`notificationSettings`\"\u003e`notification_settings`\u003c/span\u003e configuration block).\n"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run fails.\n"},"onStarts":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run starts.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when any streaming backlog thresholds are exceeded for any stream.\n\nThe following parameter is only available for the job level configuration.\n"},"onSuccesses":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run completes successfully.\n"}},"type":"object"},"databricks:index/JobTaskForEachTask:JobTaskForEachTask":{"properties":{"concurrency":{"type":"integer","description":"Controls the number of active iteration task runs. Default is 20, maximum allowed is 100.\n"},"inputs":{"type":"string","description":"(String) Array for task to iterate on. This can be a JSON string or a reference to an array parameter.\n"},"task":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTask:JobTaskForEachTaskTask","description":"Task to run against the \u003cspan pulumi-lang-nodejs=\"`inputs`\" pulumi-lang-dotnet=\"`Inputs`\" pulumi-lang-go=\"`inputs`\" pulumi-lang-python=\"`inputs`\" pulumi-lang-yaml=\"`inputs`\" pulumi-lang-java=\"`inputs`\"\u003e`inputs`\u003c/span\u003e list.\n"}},"type":"object","required":["inputs","task"]},"databricks:index/JobTaskForEachTaskTask:JobTaskForEachTaskTask":{"properties":{"cleanRoomsNotebookTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskCleanRoomsNotebookTask:JobTaskForEachTaskTaskCleanRoomsNotebookTask"},"compute":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskCompute:JobTaskForEachTaskTaskCompute","description":"Task level compute configuration. 
This block is documented below.\n\n\u003e If no \u003cspan pulumi-lang-nodejs=\"`jobClusterKey`\" pulumi-lang-dotnet=\"`JobClusterKey`\" pulumi-lang-go=\"`jobClusterKey`\" pulumi-lang-python=\"`job_cluster_key`\" pulumi-lang-yaml=\"`jobClusterKey`\" pulumi-lang-java=\"`jobClusterKey`\"\u003e`job_cluster_key`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`existingClusterId`\" pulumi-lang-dotnet=\"`ExistingClusterId`\" pulumi-lang-go=\"`existingClusterId`\" pulumi-lang-python=\"`existing_cluster_id`\" pulumi-lang-yaml=\"`existingClusterId`\" pulumi-lang-java=\"`existingClusterId`\"\u003e`existing_cluster_id`\u003c/span\u003e, or \u003cspan pulumi-lang-nodejs=\"`newCluster`\" pulumi-lang-dotnet=\"`NewCluster`\" pulumi-lang-go=\"`newCluster`\" pulumi-lang-python=\"`new_cluster`\" pulumi-lang-yaml=\"`newCluster`\" pulumi-lang-java=\"`newCluster`\"\u003e`new_cluster`\u003c/span\u003e were specified in task definition, then task will executed using serverless compute.\n"},"conditionTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskConditionTask:JobTaskForEachTaskTaskConditionTask"},"dashboardTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDashboardTask:JobTaskForEachTaskTaskDashboardTask"},"dbtCloudTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDbtCloudTask:JobTaskForEachTaskTaskDbtCloudTask"},"dbtPlatformTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDbtPlatformTask:JobTaskForEachTaskTaskDbtPlatformTask"},"dbtTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDbtTask:JobTaskForEachTaskTaskDbtTask"},"dependsOns":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDependsOn:JobTaskForEachTaskTaskDependsOn"},"description":"block specifying dependency(-ies) for a given task.\n"},"description":{"type":"string","description":"description for this task.\n"},"disableAutoOptimization":{"type":"boolean","description":"A flag to disable auto optimization in serverless tasks.\n"},"disabled":{"type":"boolean"},"emailNotifications":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskEmailNotifications:JobTaskForEachTaskTaskEmailNotifications","description":"An optional block to specify a set of email addresses notified when this task begins, completes or fails. The default behavior is to not send any emails. This block is documented below.\n"},"environmentKey":{"type":"string","description":"identifier of an \u003cspan pulumi-lang-nodejs=\"`environment`\" pulumi-lang-dotnet=\"`Environment`\" pulumi-lang-go=\"`environment`\" pulumi-lang-python=\"`environment`\" pulumi-lang-yaml=\"`environment`\" pulumi-lang-java=\"`environment`\"\u003e`environment`\u003c/span\u003e block that is used to specify libraries.  Required for some tasks (\u003cspan pulumi-lang-nodejs=\"`sparkPythonTask`\" pulumi-lang-dotnet=\"`SparkPythonTask`\" pulumi-lang-go=\"`sparkPythonTask`\" pulumi-lang-python=\"`spark_python_task`\" pulumi-lang-yaml=\"`sparkPythonTask`\" pulumi-lang-java=\"`sparkPythonTask`\"\u003e`spark_python_task`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`pythonWheelTask`\" pulumi-lang-dotnet=\"`PythonWheelTask`\" pulumi-lang-go=\"`pythonWheelTask`\" pulumi-lang-python=\"`python_wheel_task`\" pulumi-lang-yaml=\"`pythonWheelTask`\" pulumi-lang-java=\"`pythonWheelTask`\"\u003e`python_wheel_task`\u003c/span\u003e, ...) running on serverless compute.\n"},"existingClusterId":{"type":"string","description":"Identifier of the interactive cluster to run job on.  
*Note: running tasks on interactive clusters may lead to increased costs!*\n"},"genAiComputeTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskGenAiComputeTask:JobTaskForEachTaskTaskGenAiComputeTask"},"health":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskHealth:JobTaskForEachTaskTaskHealth","description":"block described below that specifies health conditions for a given task.\n"},"jobClusterKey":{"type":"string","description":"Identifier of the Job cluster specified in the \u003cspan pulumi-lang-nodejs=\"`jobCluster`\" pulumi-lang-dotnet=\"`JobCluster`\" pulumi-lang-go=\"`jobCluster`\" pulumi-lang-python=\"`job_cluster`\" pulumi-lang-yaml=\"`jobCluster`\" pulumi-lang-java=\"`jobCluster`\"\u003e`job_cluster`\u003c/span\u003e block.\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskLibrary:JobTaskForEachTaskTaskLibrary"},"description":"(Set) An optional list of libraries to be installed on the cluster that will execute the job.\n"},"maxRetries":{"type":"integer","description":"(Integer) An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a `FAILED` or `INTERNAL_ERROR` lifecycle state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. A run can have the following lifecycle state: `PENDING`, `RUNNING`, `TERMINATING`, `TERMINATED`, `SKIPPED` or `INTERNAL_ERROR`.\n"},"minRetryIntervalMillis":{"type":"integer","description":"(Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.\n"},"newCluster":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewCluster:JobTaskForEachTaskTaskNewCluster","description":"Task will run on a dedicated cluster.  See\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003edocumentation for specification. 
*Some parameters, such as \u003cspan pulumi-lang-nodejs=\"`autoterminationMinutes`\" pulumi-lang-dotnet=\"`AutoterminationMinutes`\" pulumi-lang-go=\"`autoterminationMinutes`\" pulumi-lang-python=\"`autotermination_minutes`\" pulumi-lang-yaml=\"`autoterminationMinutes`\" pulumi-lang-java=\"`autoterminationMinutes`\"\u003e`autotermination_minutes`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`isPinned`\" pulumi-lang-dotnet=\"`IsPinned`\" pulumi-lang-go=\"`isPinned`\" pulumi-lang-python=\"`is_pinned`\" pulumi-lang-yaml=\"`isPinned`\" pulumi-lang-java=\"`isPinned`\"\u003e`is_pinned`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workloadType`\" pulumi-lang-dotnet=\"`WorkloadType`\" pulumi-lang-go=\"`workloadType`\" pulumi-lang-python=\"`workload_type`\" pulumi-lang-yaml=\"`workloadType`\" pulumi-lang-java=\"`workloadType`\"\u003e`workload_type`\u003c/span\u003e aren't supported!*\n"},"notebookTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNotebookTask:JobTaskForEachTaskTaskNotebookTask"},"notificationSettings":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNotificationSettings:JobTaskForEachTaskTaskNotificationSettings","description":"An optional block controlling the notification settings on the job level documented below.\n"},"pipelineTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskPipelineTask:JobTaskForEachTaskTaskPipelineTask"},"powerBiTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskPowerBiTask:JobTaskForEachTaskTaskPowerBiTask"},"pythonWheelTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskPythonWheelTask:JobTaskForEachTaskTaskPythonWheelTask"},"retryOnTimeout":{"type":"boolean","description":"(Bool) An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout.\n"},"runIf":{"type":"string","description":"An optional value indicating the condition that determines whether the task should be run once its dependencies have been completed. One of `ALL_SUCCESS`, `AT_LEAST_ONE_SUCCESS`, `NONE_FAILED`, `ALL_DONE`, `AT_LEAST_ONE_FAILED` or `ALL_FAILED`. When omitted, defaults to `ALL_SUCCESS`.\n"},"runJobTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskRunJobTask:JobTaskForEachTaskTaskRunJobTask"},"sparkJarTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSparkJarTask:JobTaskForEachTaskTaskSparkJarTask"},"sparkPythonTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSparkPythonTask:JobTaskForEachTaskTaskSparkPythonTask"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSparkSubmitTask:JobTaskForEachTaskTaskSparkSubmitTask"},"sqlTask":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTask:JobTaskForEachTaskTaskSqlTask"},"taskKey":{"type":"string","description":"string specifying an unique key for a given task.\n* `*_task` - (Required) one of the specific task blocks described below:\n"},"timeoutSeconds":{"type":"integer","description":"(Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.\n"},"webhookNotifications":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotifications:JobTaskForEachTaskTaskWebhookNotifications","description":"(List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this task begins, completes or fails. The default behavior is to not send any notifications. 
This field is a block and is documented below.\n"}},"type":"object","required":["taskKey"],"language":{"nodejs":{"requiredOutputs":["retryOnTimeout","taskKey"]}}},"databricks:index/JobTaskForEachTaskTaskCleanRoomsNotebookTask:JobTaskForEachTaskTaskCleanRoomsNotebookTask":{"properties":{"cleanRoomName":{"type":"string","description":"The clean room that the notebook belongs to.\n"},"etag":{"type":"string","description":"Checksum to validate the freshness of the notebook resource.\n"},"notebookBaseParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"Base parameters to be used for the clean room notebook job.\n"},"notebookName":{"type":"string","description":"Name of the notebook being run.\n"}},"type":"object","required":["cleanRoomName","notebookName"]},"databricks:index/JobTaskForEachTaskTaskCompute:JobTaskForEachTaskTaskCompute":{"properties":{"hardwareAccelerator":{"type":"string","description":"Hardware accelerator configuration for Serverless GPU workloads. Supported values are:\n* `GPU_1xA10`: GPU_1xA10: Single A10 GPU configuration.\n* `GPU_8xH100`: GPU_8xH100: 8x H100 GPU configuration.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskConditionTask:JobTaskForEachTaskTaskConditionTask":{"properties":{"left":{"type":"string","description":"The left operand of the condition task. It could be a string value, job state, or a parameter reference.\n"},"op":{"type":"string","description":"The string specifying the operation used to compare operands.  Currently, following operators are supported: `EQUAL_TO`, `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `NOT_EQUAL`. (Check the [API docs](https://docs.databricks.com/api/workspace/jobs/create) for the latest information).\n\nThis task does not require a cluster to execute and does not support retries or notifications.\n"},"right":{"type":"string","description":"The right operand of the condition task. It could be a string value, job state, or parameter reference.\n"}},"type":"object","required":["left","op","right"]},"databricks:index/JobTaskForEachTaskTaskDashboardTask:JobTaskForEachTaskTaskDashboardTask":{"properties":{"dashboardId":{"type":"string","description":"The identifier of the dashboard to refresh\n"},"filters":{"type":"object","additionalProperties":{"type":"string"}},"subscription":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDashboardTaskSubscription:JobTaskForEachTaskTaskDashboardTaskSubscription","description":"Represents a subscription configuration for scheduled dashboard snapshots.\n"},"warehouseId":{"type":"string","description":"The warehouse id to execute the dashboard with for the schedule. 
If not specified, will use the default warehouse of dashboard\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskDashboardTaskSubscription:JobTaskForEachTaskTaskDashboardTaskSubscription":{"properties":{"customSubject":{"type":"string","description":"Allows users to specify a custom subject line on the email sent to subscribers.\n"},"paused":{"type":"boolean","description":"When true, the subscription will not send emails.\n"},"subscribers":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber:JobTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber"},"description":"The list of subscribers to send the snapshot of the dashboard to.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber:JobTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string","description":"A snapshot of the dashboard will be sent to the user's email when the \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e field is present.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskDbtCloudTask:JobTaskForEachTaskTaskDbtCloudTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"dbtCloudJobId":{"type":"integer"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskDbtPlatformTask:JobTaskForEachTaskTaskDbtPlatformTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"dbtPlatformJobId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskDbtTask:JobTaskForEachTaskTaskDbtTask":{"properties":{"catalog":{"type":"string","description":"The name of the catalog to use inside Unity Catalog.\n"},"commands":{"type":"array","items":{"type":"string"},"description":"(Array) Series of dbt commands to execute in sequence. Every command must start with \"dbt\".\n"},"profilesDirectory":{"type":"string","description":"The relative path to the directory in the repository specified by \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e where dbt should look in for the `profiles.yml` file. If not specified, defaults to the repository's root directory. Equivalent to passing `--profile-dir` to a dbt command.\n"},"projectDirectory":{"type":"string","description":"The path where dbt should look for `dbt_project.yml`. 
Equivalent to passing `--project-dir` to the dbt CLI.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `GIT`: Relative path to the directory in the repository specified in the \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block. Defaults to the repository's root directory when not specified.\n* If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `WORKSPACE`: Absolute path to the folder in the workspace.\n"},"schema":{"type":"string","description":"The name of the schema dbt should run in. Defaults to \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n"},"source":{"type":"string","description":"The source of the project. Possible values are `WORKSPACE` and `GIT`.  Defaults to `GIT` if a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block is present in the job definition.\n"},"warehouseId":{"type":"string","description":"The ID of the SQL warehouse that dbt should execute against.\n\nYou also need to include a \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block to configure the repository that contains the dbt project.\n"}},"type":"object","required":["commands"]},"databricks:index/JobTaskForEachTaskTaskDependsOn:JobTaskForEachTaskTaskDependsOn":{"properties":{"outcome":{"type":"string","description":"Can only be specified on condition task dependencies. The outcome of the dependent task that must be met for this task to run. Possible values are `\"true\"` or `\"false\"`.\n\n\u003e Similar to the tasks themselves, each dependency inside the task need to be declared in alphabetical order with respect to\u003cspan pulumi-lang-nodejs=\" taskKey \" pulumi-lang-dotnet=\" TaskKey \" pulumi-lang-go=\" taskKey \" pulumi-lang-python=\" task_key \" pulumi-lang-yaml=\" taskKey \" pulumi-lang-java=\" taskKey \"\u003e task_key \u003c/span\u003ein order to get consistent Pulumi diffs.\n"},"taskKey":{"type":"string","description":"The name of the task this task depends on.\n"}},"type":"object","required":["taskKey"]},"databricks:index/JobTaskForEachTaskTaskEmailNotifications:JobTaskForEachTaskTaskEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs. 
(It's recommended to use the corresponding setting in the \u003cspan pulumi-lang-nodejs=\"`notificationSettings`\" pulumi-lang-dotnet=\"`NotificationSettings`\" pulumi-lang-go=\"`notificationSettings`\" pulumi-lang-python=\"`notification_settings`\" pulumi-lang-yaml=\"`notificationSettings`\" pulumi-lang-java=\"`notificationSettings`\"\u003e`notification_settings`\u003c/span\u003e configuration block).\n"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run fails.\n"},"onStarts":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run starts.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when any streaming backlog thresholds are exceeded for any stream.\n\nThe following parameter is only available for the job level configuration.\n"},"onSuccesses":{"type":"array","items":{"type":"string"},"description":"(List) list of emails to notify when the run completes successfully.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskGenAiComputeTask:JobTaskForEachTaskTaskGenAiComputeTask":{"properties":{"command":{"type":"string"},"compute":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskGenAiComputeTaskCompute:JobTaskForEachTaskTaskGenAiComputeTaskCompute","description":"Task level compute configuration. 
This block is documented below.\n\n\u003e If no \u003cspan pulumi-lang-nodejs=\"`jobClusterKey`\" pulumi-lang-dotnet=\"`JobClusterKey`\" pulumi-lang-go=\"`jobClusterKey`\" pulumi-lang-python=\"`job_cluster_key`\" pulumi-lang-yaml=\"`jobClusterKey`\" pulumi-lang-java=\"`jobClusterKey`\"\u003e`job_cluster_key`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`existingClusterId`\" pulumi-lang-dotnet=\"`ExistingClusterId`\" pulumi-lang-go=\"`existingClusterId`\" pulumi-lang-python=\"`existing_cluster_id`\" pulumi-lang-yaml=\"`existingClusterId`\" pulumi-lang-java=\"`existingClusterId`\"\u003e`existing_cluster_id`\u003c/span\u003e, or \u003cspan pulumi-lang-nodejs=\"`newCluster`\" pulumi-lang-dotnet=\"`NewCluster`\" pulumi-lang-go=\"`newCluster`\" pulumi-lang-python=\"`new_cluster`\" pulumi-lang-yaml=\"`newCluster`\" pulumi-lang-java=\"`newCluster`\"\u003e`new_cluster`\u003c/span\u003e is specified in the task definition, then the task will be executed using serverless compute.\n"},"dlRuntimeImage":{"type":"string"},"mlflowExperimentName":{"type":"string"},"source":{"type":"string"},"trainingScriptPath":{"type":"string"},"yamlParameters":{"type":"string"},"yamlParametersFilePath":{"type":"string"}},"type":"object","required":["dlRuntimeImage"]},"databricks:index/JobTaskForEachTaskTaskGenAiComputeTaskCompute:JobTaskForEachTaskTaskGenAiComputeTaskCompute":{"properties":{"gpuNodePoolId":{"type":"string"},"gpuType":{"type":"string"},"numGpus":{"type":"integer"}},"type":"object","required":["numGpus"]},"databricks:index/JobTaskForEachTaskTaskHealth:JobTaskForEachTaskTaskHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskHealthRule:JobTaskForEachTaskTaskHealthRule"},"description":"list of rules that are represented as objects with the following attributes:\n"}},"type":"object","required":["rules"]},"databricks:index/JobTaskForEachTaskTaskHealthRule:JobTaskForEachTaskTaskHealthRule":{"properties":{"metric":{"type":"string","description":"string specifying the metric to check, like `RUN_DURATION_SECONDS`, `STREAMING_BACKLOG_FILES`, etc. - check the [Jobs REST API documentation](https://docs.databricks.com/api/workspace/jobs/create#health-rules-metric) for the full list of supported metrics.\n"},"op":{"type":"string","description":"string specifying the operation used to evaluate the given metric. The only supported operation is `GREATER_THAN`.\n"},"value":{"type":"integer","description":"integer value used to compare to the given metric.\n"}},"type":"object","required":["metric","op","value"]},"databricks:index/JobTaskForEachTaskTaskLibrary:JobTaskForEachTaskTaskLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskLibraryCran:JobTaskForEachTaskTaskLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskLibraryMaven:JobTaskForEachTaskTaskLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskLibraryProviderConfig:JobTaskForEachTaskTaskLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskLibraryPypi:JobTaskForEachTaskTaskLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskLibraryCran:JobTaskForEachTaskTaskLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskForEachTaskTaskLibraryMaven:JobTaskForEachTaskTaskLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobTaskForEachTaskTaskLibraryProviderConfig:JobTaskForEachTaskTaskLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured 
with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskForEachTaskTaskLibraryPypi:JobTaskForEachTaskTaskLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskForEachTaskTaskNewCluster:JobTaskForEachTaskTaskNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterAutoscale:JobTaskForEachTaskTaskNewClusterAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterAwsAttributes:JobTaskForEachTaskTaskNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterAzureAttributes:JobTaskForEachTaskTaskNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConf:JobTaskForEachTaskTaskNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterMountInfo:JobTaskForEachTaskTaskNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterDockerImage:JobTaskForEachTaskTaskNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterDriverNodeTypeFlexibility:JobTaskForEachTaskTaskNewClusterDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterGcpAttributes:JobTaskForEachTaskTaskNewClusterGcpAttributes"},"idempotencyToken":{"type":"string","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScript:JobTaskForEachTaskTaskNewClusterInitScript"}},"instancePoolId":{"type":"string"},"isSingleNode":{"type":"boolean"},"kind":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterLibrary:JobTaskForEachTaskTaskNewClusterLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterProviderConfig:JobTaskForEachTaskTaskNewClusterProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterWorkerNodeTypeFlexibility:JobTaskForEachTaskTaskNewClusterWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterWorkloadType:JobTaskForEachTaskTaskNewClusterWorkloadType","description":"isn't supported\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId"]}}},"databricks:index/JobTaskForEachTaskTaskNewClusterAutoscale:JobTaskForEachTaskTaskNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterAwsAttributes:JobTaskForEachTaskTaskNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterAzureAttributes:JobTaskForEachTaskTaskNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterAzureAttributesLogAnalyticsInfo:JobTaskForEachTaskTaskNewClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterAzureAttributesLogAnalyticsInfo:JobTaskForEachTaskTaskNewClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConf:JobTaskForEachTaskTaskNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs:JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfS3:JobTaskForEachTaskTaskNewClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfVolumes:JobTaskForEachTaskTaskNewClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs:JobTaskForEachTaskTaskNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfS3:JobTaskForEachTaskTaskNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type"
:"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterLogConfVolumes:JobTaskForEachTaskTaskNewClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterMountInfo:JobTaskForEachTaskTaskNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo:JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo:JobTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/JobTaskForEachTaskTaskNewClusterDockerImage:JobTaskForEachTaskTaskNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth:JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth"},"url":{"type":"string","description":"URL of the job on the given workspace\n"}},"type":"object","required":["url"]},"databricks:index/JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth:JobTaskForEachTaskTaskNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/JobTaskForEachTaskTaskNewClusterDriverNodeTypeFlexibility:JobTaskForEachTaskTaskNewClusterDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterGcpAttributes:JobTaskForEachTaskTaskNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScript:JobTaskForEachTaskTaskNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptAbfss:JobTaskForEachTaskTaskNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptDbfs:JobTaskForEachTaskTaskNewClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptFile:JobTaskForEachTaskTaskNewClusterInitScriptFile","description":"block consisting of single string 
fields:\n"},"gcs":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptGcs:JobTaskForEachTaskTaskNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptS3:JobTaskForEachTaskTaskNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptVolumes:JobTaskForEachTaskTaskNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptWorkspace:JobTaskForEachTaskTaskNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptAbfss:JobTaskForEachTaskTaskNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptDbfs:JobTaskForEachTaskTaskNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptFile:JobTaskForEachTaskTaskNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptGcs:JobTaskForEachTaskTaskNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptS3:JobTaskForEachTaskTaskNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptVolumes:JobTaskForEachTaskTaskNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterInitScriptWorkspace:JobTaskForEachTaskTaskNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskForEachTaskTaskNewClusterLibrary:JobTaskForEachTaskTaskNewClusterLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterLibraryCran:JobTaskForEachTaskTaskNewClusterLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterLibraryMaven:JobTaskForEachTaskTaskNewClusterLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterLibraryProviderConfig:JobTaskForEachTaskTaskNewClusterLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterLibraryPypi:JobTaskForEachTaskTaskNewClusterLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterLibraryCran:JobTaskForEachTaskTaskNewClusterLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskForEachTaskTaskNewClusterLibraryMaven:JobTaskForEachTaskTaskNewClusterLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobTaskForEachTaskTaskNewClusterLibraryProviderConfig:JobTaskForEachTaskTaskNewClusterLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskForEachTaskTaskNewClusterLibraryPypi:JobTaskForEachTaskTaskNewClusterLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskForEachTaskTaskNewClusterProviderConfig:JobTaskForEachTaskTaskNewClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskForEachTaskTaskNewClusterWorkerNodeTypeFlexibility:JobTaskForEachTaskTaskNewClusterWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNewClusterWorkloadType:JobTaskForEachTaskTaskNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskNewClusterWorkloadTypeClients:JobTaskForEachTaskTaskNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/JobTaskForEachTaskTaskNewClusterWorkloadTypeClients:JobTaskForEachTaskTaskNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskNotebookTask:JobTaskForEachTaskTaskNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Base parameters to be used for each run of this job. 
If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eand in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eor the run-now override parameters, the default value from the notebook will be used. Retrieve these parameters in a notebook using `dbutils.widgets.get`.\n"},"notebookPath":{"type":"string","description":"The path of the\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the notebook, can only be `WORKSPACE` or `GIT`. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e. 
If the value is empty, the task will use `GIT` if \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e is defined and `WORKSPACE` otherwise.\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task with a SQL notebook.\n"}},"type":"object","required":["notebookPath"]},"databricks:index/JobTaskForEachTaskTaskNotificationSettings:JobTaskForEachTaskTaskNotificationSettings":{"properties":{"alertOnLastAttempt":{"type":"boolean","description":"(Bool) do not send notifications to recipients specified in \u003cspan pulumi-lang-nodejs=\"`onStart`\" pulumi-lang-dotnet=\"`OnStart`\" pulumi-lang-go=\"`onStart`\" pulumi-lang-python=\"`on_start`\" pulumi-lang-yaml=\"`onStart`\" pulumi-lang-java=\"`onStart`\"\u003e`on_start`\u003c/span\u003e for the retried runs and do not send notifications to recipients specified in \u003cspan pulumi-lang-nodejs=\"`onFailure`\" pulumi-lang-dotnet=\"`OnFailure`\" pulumi-lang-go=\"`onFailure`\" pulumi-lang-python=\"`on_failure`\" pulumi-lang-yaml=\"`onFailure`\" pulumi-lang-java=\"`onFailure`\"\u003e`on_failure`\u003c/span\u003e until the last retry of the run.\n"},"noAlertForCanceledRuns":{"type":"boolean","description":"(Bool) don't send alert for cancelled runs.\n\nThe following parameter is only available on task level.\n"},"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskPipelineTask:JobTaskForEachTaskTaskPipelineTask":{"properties":{"fullRefresh":{"type":"boolean","description":"(Bool) Specifies if there should be full refresh of the pipeline.\n\n\u003e The following configuration blocks are only supported inside a \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block\n"},"pipelineId":{"type":"string","description":"The pipeline's unique ID.\n"}},"type":"object","required":["pipelineId"]},"databricks:index/JobTaskForEachTaskTaskPowerBiTask:JobTaskForEachTaskTaskPowerBiTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"powerBiModel":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskPowerBiTaskPowerBiModel:JobTaskForEachTaskTaskPowerBiTaskPowerBiModel","description":"The semantic model to update. Block consists of following fields:\n"},"refreshAfterUpdate":{"type":"boolean","description":"Whether the model should be refreshed after the update. Default is false\n"},"tables":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskPowerBiTaskTable:JobTaskForEachTaskTaskPowerBiTaskTable"},"description":"The tables to be exported to Power BI. 
Block consists of following fields:\n"},"warehouseId":{"type":"string","description":"The SQL warehouse ID to use as the Power BI data source\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskPowerBiTaskPowerBiModel:JobTaskForEachTaskTaskPowerBiTaskPowerBiModel":{"properties":{"authenticationMethod":{"type":"string","description":"How the published Power BI model authenticates to Databricks\n"},"modelName":{"type":"string","description":"The name of the Power BI model\n"},"overwriteExisting":{"type":"boolean","description":"Whether to overwrite existing Power BI models. Default is false\n"},"storageMode":{"type":"string","description":"The default storage mode of the Power BI model\n"},"workspaceName":{"type":"string","description":"The name of the Power BI workspace of the model\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskPowerBiTaskTable:JobTaskForEachTaskTaskPowerBiTaskTable":{"properties":{"catalog":{"type":"string","description":"The catalog name in Databricks\n"},"name":{"type":"string","description":"The table name in Databricks. If empty, all tables under the schema are selected.\n"},"schema":{"type":"string","description":"The schema name in Databricks\n"},"storageMode":{"type":"string","description":"The Power BI storage mode of the table\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskPythonWheelTask:JobTaskForEachTaskTaskPythonWheelTask":{"properties":{"entryPoint":{"type":"string","description":"Python function as entry point for the task\n"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"Named parameters for the task\n"},"packageName":{"type":"string","description":"Name of Python package\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"Parameters for the task\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskRunJobTask:JobTaskForEachTaskTaskRunJobTask":{"properties":{"dbtCommands":{"type":"array","items":{"type":"string"}},"jarParams":{"type":"array","items":{"type":"string"}},"jobId":{"type":"integer","description":"(String) ID of the job\n"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Job parameters for the task\n"},"notebookParams":{"type":"object","additionalProperties":{"type":"string"}},"pipelineParams":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskRunJobTaskPipelineParams:JobTaskForEachTaskTaskRunJobTaskPipelineParams"},"pythonNamedParams":{"type":"object","additionalProperties":{"type":"string"}},"pythonParams":{"type":"array","items":{"type":"string"}},"sparkSubmitParams":{"type":"array","items":{"type":"string"}},"sqlParams":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["jobId"]},"databricks:index/JobTaskForEachTaskTaskRunJobTaskPipelineParams:JobTaskForEachTaskTaskRunJobTaskPipelineParams":{"properties":{"fullRefresh":{"type":"boolean","description":"(Bool) Specifies if there should be full refresh of the pipeline.\n\n\u003e The following configuration blocks are only supported inside a \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskSparkJarTask:JobTaskForEachTaskTaskSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string","description":"The full name of the class containing the 
main method to be executed. This class must be contained in a JAR provided as a library. The code should use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs of the job will fail.\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Parameters passed to the main method.\n"},"runAsRepl":{"type":"boolean"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskSparkPythonTask:JobTaskForEachTaskTaskSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command line parameters passed to the Python file.\n"},"pythonFile":{"type":"string","description":"The URI of the Python file to be executed. Cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), workspace paths, and remote repositories are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved from the local Databricks workspace or cloud location (if the\u003cspan pulumi-lang-nodejs=\" pythonFile \" pulumi-lang-dotnet=\" PythonFile \" pulumi-lang-go=\" pythonFile \" pulumi-lang-python=\" python_file \" pulumi-lang-yaml=\" pythonFile \" pulumi-lang-java=\" pythonFile \"\u003e python_file \u003c/span\u003ehas a URI format). When set to `GIT`, the Python file will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e.\n* `WORKSPACE`: The Python file is located in a Databricks workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"}},"type":"object","required":["pythonFile"]},"databricks:index/JobTaskForEachTaskTaskSparkSubmitTask:JobTaskForEachTaskTaskSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command-line parameters passed to spark submit.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskSqlTask:JobTaskForEachTaskTaskSqlTask":{"properties":{"alert":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskAlert:JobTaskForEachTaskTaskSqlTaskAlert","description":"block consisting of following fields:\n"},"dashboard":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskDashboard:JobTaskForEachTaskTaskSqlTaskDashboard","description":"block consisting of following fields:\n"},"file":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskFile:JobTaskForEachTaskTaskSqlTaskFile","description":"block consisting of single string fields:\n"},"parameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) parameters to be used for each run of this task. 
The SQL alert task does not support custom parameters.\n"},"query":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskQuery:JobTaskForEachTaskTaskSqlTaskQuery","description":"block consisting of single string field: \u003cspan pulumi-lang-nodejs=\"`queryId`\" pulumi-lang-dotnet=\"`QueryId`\" pulumi-lang-go=\"`queryId`\" pulumi-lang-python=\"`query_id`\" pulumi-lang-yaml=\"`queryId`\" pulumi-lang-java=\"`queryId`\"\u003e`query_id`\u003c/span\u003e - identifier of the Databricks Query (databricks_query).\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task. Only Serverless \u0026 Pro warehouses are currently supported.\n"}},"type":"object","required":["warehouseId"]},"databricks:index/JobTaskForEachTaskTaskSqlTaskAlert:JobTaskForEachTaskTaskSqlTaskAlert":{"properties":{"alertId":{"type":"string","description":"(String) identifier of the Databricks Alert (databricks_alert).\n"},"pauseSubscriptions":{"type":"boolean","description":"flag that specifies if subscriptions are paused or not.\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskAlertSubscription:JobTaskForEachTaskTaskSqlTaskAlertSubscription"},"description":"a list of subscription blocks consisting of one of the required fields: \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e for user emails or \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e - for Alert destination's identifier.\n"}},"type":"object","required":["alertId"]},"databricks:index/JobTaskForEachTaskTaskSqlTaskAlertSubscription:JobTaskForEachTaskTaskSqlTaskAlertSubscription":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskSqlTaskDashboard:JobTaskForEachTaskTaskSqlTaskDashboard":{"properties":{"customSubject":{"type":"string","description":"string specifying a custom subject of email sent.\n"},"dashboardId":{"type":"string","description":"(String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.\n"},"pauseSubscriptions":{"type":"boolean","description":"flag that specifies if subscriptions are paused or not.\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskSqlTaskDashboardSubscription:JobTaskForEachTaskTaskSqlTaskDashboardSubscription"},"description":"a list of subscription blocks consisting of one of the required fields: \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e for user emails or 
\u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e - for Alert destination's identifier.\n"}},"type":"object","required":["dashboardId"]},"databricks:index/JobTaskForEachTaskTaskSqlTaskDashboardSubscription:JobTaskForEachTaskTaskSqlTaskDashboardSubscription":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskSqlTaskFile:JobTaskForEachTaskTaskSqlTaskFile":{"properties":{"path":{"type":"string","description":"If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `GIT`: Relative path to the file in the repository specified in the \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block with SQL commands to execute. If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `WORKSPACE`: Absolute path to the file in the workspace with SQL commands to execute.\n\nExample\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sqlAggregationJob = new databricks.Job(\"sql_aggregation_job\", {\n    name: \"Example SQL Job\",\n    tasks: [\n        {\n            taskKey: \"run_agg_query\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                query: {\n                    queryId: aggQuery.id,\n                },\n            },\n        },\n        {\n            taskKey: \"run_dashboard\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                dashboard: {\n                    dashboardId: dash.id,\n                    subscriptions: [{\n                        userName: \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n        {\n            taskKey: \"run_alert\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                alert: {\n                    alertId: alert.id,\n                    subscriptions: [{\n                        userName: \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsql_aggregation_job = databricks.Job(\"sql_aggregation_job\",\n    name=\"Example SQL Job\",\n    tasks=[\n        {\n            \"task_key\": \"run_agg_query\",\n            \"sql_task\": {\n            
    \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"query\": {\n                    \"query_id\": agg_query[\"id\"],\n                },\n            },\n        },\n        {\n            \"task_key\": \"run_dashboard\",\n            \"sql_task\": {\n                \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"dashboard\": {\n                    \"dashboard_id\": dash[\"id\"],\n                    \"subscriptions\": [{\n                        \"user_name\": \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n        {\n            \"task_key\": \"run_alert\",\n            \"sql_task\": {\n                \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"alert\": {\n                    \"alert_id\": alert[\"id\"],\n                    \"subscriptions\": [{\n                        \"user_name\": \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sqlAggregationJob = new Databricks.Job(\"sql_aggregation_job\", new()\n    {\n        Name = \"Example SQL Job\",\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_agg_query\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs\n                    {\n                        QueryId = aggQuery.Id,\n                    },\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_dashboard\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs\n                    {\n                        DashboardId = dash.Id,\n                        Subscriptions = new[]\n                        {\n                            new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs\n                            {\n                                UserName = \"user@domain.com\",\n                            },\n                        },\n                    },\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_alert\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs\n                    {\n                        AlertId = alert.Id,\n                        Subscriptions = new[]\n                        {\n                            new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs\n                            {\n                                UserName = \"user@domain.com\",\n                            },\n                        },\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() 
{\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewJob(ctx, \"sql_aggregation_job\", \u0026databricks.JobArgs{\n\t\t\tName: pulumi.String(\"Example SQL Job\"),\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_agg_query\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tQuery: \u0026databricks.JobTaskSqlTaskQueryArgs{\n\t\t\t\t\t\t\tQueryId: pulumi.Any(aggQuery.Id),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_dashboard\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tDashboard: \u0026databricks.JobTaskSqlTaskDashboardArgs{\n\t\t\t\t\t\t\tDashboardId: pulumi.Any(dash.Id),\n\t\t\t\t\t\t\tSubscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{\n\t\t\t\t\t\t\t\t\u0026databricks.JobTaskSqlTaskDashboardSubscriptionArgs{\n\t\t\t\t\t\t\t\t\tUserName: pulumi.String(\"user@domain.com\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_alert\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tAlert: \u0026databricks.JobTaskSqlTaskAlertArgs{\n\t\t\t\t\t\t\tAlertId: pulumi.Any(alert.Id),\n\t\t\t\t\t\t\tSubscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{\n\t\t\t\t\t\t\t\t\u0026databricks.JobTaskSqlTaskAlertSubscriptionArgs{\n\t\t\t\t\t\t\t\t\tUserName: pulumi.String(\"user@domain.com\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sqlAggregationJob = new Job(\"sqlAggregationJob\", JobArgs.builder()\n            .name(\"Example SQL Job\")\n            .tasks(            \n                JobTaskArgs.builder()\n                    .taskKey(\"run_agg_query\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        .query(JobTaskSqlTaskQueryArgs.builder()\n                            .queryId(aggQuery.id())\n                            .build())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"run_dashboard\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        
.dashboard(JobTaskSqlTaskDashboardArgs.builder()\n                            .dashboardId(dash.id())\n                            .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()\n                                .userName(\"user@domain.com\")\n                                .build())\n                            .build())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"run_alert\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        .alert(JobTaskSqlTaskAlertArgs.builder()\n                            .alertId(alert.id())\n                            .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()\n                                .userName(\"user@domain.com\")\n                                .build())\n                            .build())\n                        .build())\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sqlAggregationJob:\n    type: databricks:Job\n    name: sql_aggregation_job\n    properties:\n      name: Example SQL Job\n      tasks:\n        - taskKey: run_agg_query\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            query:\n              queryId: ${aggQuery.id}\n        - taskKey: run_dashboard\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            dashboard:\n              dashboardId: ${dash.id}\n              subscriptions:\n                - userName: user@domain.com\n        - taskKey: run_alert\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            alert:\n              alertId: ${alert.id}\n              subscriptions:\n                - userName: user@domain.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"source":{"type":"string","description":"The source of the project. Possible values are `WORKSPACE` and `GIT`.\n"}},"type":"object","required":["path"]},"databricks:index/JobTaskForEachTaskTaskSqlTaskQuery:JobTaskForEachTaskTaskSqlTaskQuery":{"properties":{"queryId":{"type":"string"}},"type":"object","required":["queryId"]},"databricks:index/JobTaskForEachTaskTaskWebhookNotifications:JobTaskForEachTaskTaskWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded:JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded"},"description":"(List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnFailure:JobTaskForEachTaskTaskWebhookNotificationsOnFailure"},"description":"(List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.\n"},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnStart:JobTaskForEachTaskTaskWebhookNotificationsOnStart"},"description":"(List) list of notification IDs to call when the run starts. 
A maximum of 3 destinations can be specified.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded:JobTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded"},"description":"(List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\n\nNote that the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e is not to be confused with the name of the alert destination. The \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e can be retrieved through the API or the URL of Databricks UI `https://\u003cworkspace host\u003e/sql/destinations/\u003cnotification id\u003e?o=\u003cworkspace id\u003e`\n\nExample\n\n"},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnSuccess:JobTaskForEachTaskTaskWebhookNotificationsOnSuccess"},"description":"(List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.\n"}},"type":"object"},"databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded:JobTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnFailure:JobTaskForEachTaskTaskWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnStart:JobTaskForEachTaskTaskWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded:JobTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskForEachTaskTaskWebhookNotificationsOnSuccess:JobTaskForEachTaskTaskWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskGenAiComputeTask:JobTaskGenAiComputeTask":{"properties":{"command":{"type":"string"},"compute":{"$ref":"#/types/databricks:index/JobTaskGenAiComputeTaskCompute:JobTaskGenAiComputeTaskCompute","description":"Task level compute configuration. 
This block is documented below.\n\n\u003e If no \u003cspan pulumi-lang-nodejs=\"`jobClusterKey`\" pulumi-lang-dotnet=\"`JobClusterKey`\" pulumi-lang-go=\"`jobClusterKey`\" pulumi-lang-python=\"`job_cluster_key`\" pulumi-lang-yaml=\"`jobClusterKey`\" pulumi-lang-java=\"`jobClusterKey`\"\u003e`job_cluster_key`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`existingClusterId`\" pulumi-lang-dotnet=\"`ExistingClusterId`\" pulumi-lang-go=\"`existingClusterId`\" pulumi-lang-python=\"`existing_cluster_id`\" pulumi-lang-yaml=\"`existingClusterId`\" pulumi-lang-java=\"`existingClusterId`\"\u003e`existing_cluster_id`\u003c/span\u003e, or \u003cspan pulumi-lang-nodejs=\"`newCluster`\" pulumi-lang-dotnet=\"`NewCluster`\" pulumi-lang-go=\"`newCluster`\" pulumi-lang-python=\"`new_cluster`\" pulumi-lang-yaml=\"`newCluster`\" pulumi-lang-java=\"`newCluster`\"\u003e`new_cluster`\u003c/span\u003e is specified in the task definition, then the task will be executed using serverless compute.\n"},"dlRuntimeImage":{"type":"string"},"mlflowExperimentName":{"type":"string"},"source":{"type":"string"},"trainingScriptPath":{"type":"string"},"yamlParameters":{"type":"string"},"yamlParametersFilePath":{"type":"string"}},"type":"object","required":["dlRuntimeImage"]},"databricks:index/JobTaskGenAiComputeTaskCompute:JobTaskGenAiComputeTaskCompute":{"properties":{"gpuNodePoolId":{"type":"string"},"gpuType":{"type":"string"},"numGpus":{"type":"integer"}},"type":"object","required":["numGpus"]},"databricks:index/JobTaskHealth:JobTaskHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskHealthRule:JobTaskHealthRule"},"description":"list of rules that are represented as objects with the following attributes:\n"}},"type":"object","required":["rules"]},"databricks:index/JobTaskHealthRule:JobTaskHealthRule":{"properties":{"metric":{"type":"string","description":"string specifying the metric to check, like `RUN_DURATION_SECONDS`, `STREAMING_BACKLOG_FILES`, etc. - check the [Jobs REST API documentation](https://docs.databricks.com/api/workspace/jobs/create#health-rules-metric) for the full list of supported metrics.\n"},"op":{"type":"string","description":"string specifying the operation used to evaluate the given metric. The only supported operation is `GREATER_THAN`.\n"},"value":{"type":"integer","description":"integer value used to compare to the given metric.\n"}},"type":"object","required":["metric","op","value"]},"databricks:index/JobTaskLibrary:JobTaskLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobTaskLibraryCran:JobTaskLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobTaskLibraryMaven:JobTaskLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskLibraryProviderConfig:JobTaskLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobTaskLibraryPypi:JobTaskLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobTaskLibraryCran:JobTaskLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskLibraryMaven:JobTaskLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobTaskLibraryProviderConfig:JobTaskLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskLibraryPypi:JobTaskLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskNewCluster:JobTaskNewCluster":{"properties":{"__applyPolicyDefaultValuesAllowLists":{"type":"array","items":{"type":"string"}},"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/JobTaskNewClusterAutoscale:JobTaskNewClusterAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/JobTaskNewClusterAwsAttributes:JobTaskNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/JobTaskNewClusterAzureAttributes:JobTaskNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterLogConf:JobTaskNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterMountInfo:JobTaskNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/JobTaskNewClusterDockerImage:JobTaskNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobTaskNewClusterDriverNodeTypeFlexibility:JobTaskNewClusterDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/JobTaskNewClusterGcpAttributes:JobTaskNewClusterGcpAttributes"},"idempotencyToken":{"type":"string","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScript:JobTaskNewClusterInitScript"}},"instancePoolId":{"type":"strin
g"},"isSingleNode":{"type":"boolean"},"kind":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskNewClusterLibrary:JobTaskNewClusterLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskNewClusterProviderConfig:JobTaskNewClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/JobTaskNewClusterWorkerNodeTypeFlexibility:JobTaskNewClusterWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/JobTaskNewClusterWorkloadType:JobTaskNewClusterWorkloadType","description":"isn't supported\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId"]}}},"databricks:index/JobTaskNewClusterAutoscale:JobTaskNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/JobTaskNewClusterAwsAttributes:JobTaskNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskNewClusterAzureAttributes:JobTaskNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/JobTaskNewClusterAzureAttributesLogAnalyticsInfo:JobTaskNewClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/JobTaskNewClusterAzureAttributesLogAnalyticsInfo:JobTaskNewClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskNewClusterClusterLogConf:JobTaskNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterLogConfDbfs:JobTaskNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterLogConfS3:JobTaskNewClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterLogConfVolumes:JobTaskNewClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/JobTaskNewClusterClusterLogConfDbfs:JobTaskNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterClusterLogConfS3:JobTaskNewClusterClusterLogConfS3":{"pro
perties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterClusterLogConfVolumes:JobTaskNewClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterClusterMountInfo:JobTaskNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo:JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo:JobTaskNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/JobTaskNewClusterDockerImage:JobTaskNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/JobTaskNewClusterDockerImageBasicAuth:JobTaskNewClusterDockerImageBasicAuth"},"url":{"type":"string","description":"URL of the job on the given workspace\n"}},"type":"object","required":["url"]},"databricks:index/JobTaskNewClusterDockerImageBasicAuth:JobTaskNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/JobTaskNewClusterDriverNodeTypeFlexibility:JobTaskNewClusterDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobTaskNewClusterGcpAttributes:JobTaskNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/JobTaskNewClusterInitScript:JobTaskNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptAbfss:JobTaskNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptDbfs:JobTaskNewClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptFile:JobTaskNewClusterInitScriptFile","description":"block consisting of single string 
fields:\n"},"gcs":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptGcs:JobTaskNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptS3:JobTaskNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptVolumes:JobTaskNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/JobTaskNewClusterInitScriptWorkspace:JobTaskNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/JobTaskNewClusterInitScriptAbfss:JobTaskNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptDbfs:JobTaskNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptFile:JobTaskNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptGcs:JobTaskNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptS3:JobTaskNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptVolumes:JobTaskNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterInitScriptWorkspace:JobTaskNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/JobTaskNewClusterLibrary:JobTaskNewClusterLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/JobTaskNewClusterLibraryCran:JobTaskNewClusterLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/JobTaskNewClusterLibraryMaven:JobTaskNewClusterLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/JobTaskNewClusterLibraryProviderConfig:JobTaskNewClusterLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/JobTaskNewClusterLibraryPypi:JobTaskNewClusterLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/JobTaskNewClusterLibraryCran:JobTaskNewClusterLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskNewClusterLibraryMaven:JobTaskNewClusterLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/JobTaskNewClusterLibraryProviderConfig:JobTaskNewClusterLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskNewClusterLibraryPypi:JobTaskNewClusterLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/JobTaskNewClusterProviderConfig:JobTaskNewClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/JobTaskNewClusterWorkerNodeTypeFlexibility:JobTaskNewClusterWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/JobTaskNewClusterWorkloadType:JobTaskNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/JobTaskNewClusterWorkloadTypeClients:JobTaskNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/JobTaskNewClusterWorkloadTypeClients:JobTaskNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/JobTaskNotebookTask:JobTaskNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Base parameters to be used for each run of this job. If the run is initiated by a call to run-now with parameters specified, the two parameters maps will be merged. If the same key is specified in\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eand in run-now, the value from run-now will be used. If the notebook takes a parameter that is not specified in the job's\u003cspan pulumi-lang-nodejs=\" baseParameters \" pulumi-lang-dotnet=\" BaseParameters \" pulumi-lang-go=\" baseParameters \" pulumi-lang-python=\" base_parameters \" pulumi-lang-yaml=\" baseParameters \" pulumi-lang-java=\" baseParameters \"\u003e base_parameters \u003c/span\u003eor the run-now override parameters, the default value from the notebook will be used. 
Retrieve these parameters in a notebook using `dbutils.widgets.get`.\n"},"notebookPath":{"type":"string","description":"The path of the\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto be run in the Databricks workspace or remote repository. For notebooks stored in the Databricks workspace, the path must be absolute and begin with a slash. For notebooks stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the notebook, can only be `WORKSPACE` or `GIT`. When set to `WORKSPACE`, the notebook will be retrieved from the local Databricks workspace. When set to `GIT`, the notebook will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e. If the value is empty, the task will use `GIT` if \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e is defined and `WORKSPACE` otherwise.\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task with a SQL notebook.\n"}},"type":"object","required":["notebookPath"]},"databricks:index/JobTaskNotificationSettings:JobTaskNotificationSettings":{"properties":{"alertOnLastAttempt":{"type":"boolean","description":"(Bool) do not send notifications to recipients specified in \u003cspan pulumi-lang-nodejs=\"`onStart`\" pulumi-lang-dotnet=\"`OnStart`\" pulumi-lang-go=\"`onStart`\" pulumi-lang-python=\"`on_start`\" pulumi-lang-yaml=\"`onStart`\" pulumi-lang-java=\"`onStart`\"\u003e`on_start`\u003c/span\u003e for the retried runs and do not send notifications to recipients specified in \u003cspan pulumi-lang-nodejs=\"`onFailure`\" pulumi-lang-dotnet=\"`OnFailure`\" pulumi-lang-go=\"`onFailure`\" pulumi-lang-python=\"`on_failure`\" pulumi-lang-yaml=\"`onFailure`\" pulumi-lang-java=\"`onFailure`\"\u003e`on_failure`\u003c/span\u003e until the last retry of the run.\n"},"noAlertForCanceledRuns":{"type":"boolean","description":"(Bool) don't send alert for cancelled runs.\n\nThe following parameter is only available on task level.\n"},"noAlertForSkippedRuns":{"type":"boolean","description":"(Bool) don't send alert for skipped runs.\n"}},"type":"object"},"databricks:index/JobTaskPipelineTask:JobTaskPipelineTask":{"properties":{"fullRefresh":{"type":"boolean","description":"(Bool) Specifies if there should be full refresh of the pipeline.\n\n\u003e The following configuration blocks are only supported inside a \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block\n"},"pipelineId":{"type":"string","description":"The pipeline's unique 
ID.\n"}},"type":"object","required":["pipelineId"]},"databricks:index/JobTaskPowerBiTask:JobTaskPowerBiTask":{"properties":{"connectionResourceName":{"type":"string","description":"The resource name of the UC connection to authenticate from Databricks to Power BI\n"},"powerBiModel":{"$ref":"#/types/databricks:index/JobTaskPowerBiTaskPowerBiModel:JobTaskPowerBiTaskPowerBiModel","description":"The semantic model to update. Block consists of following fields:\n"},"refreshAfterUpdate":{"type":"boolean","description":"Whether the model should be refreshed after the update. Default is false\n"},"tables":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskPowerBiTaskTable:JobTaskPowerBiTaskTable"},"description":"The tables to be exported to Power BI. Block consists of following fields:\n"},"warehouseId":{"type":"string","description":"The SQL warehouse ID to use as the Power BI data source\n"}},"type":"object"},"databricks:index/JobTaskPowerBiTaskPowerBiModel:JobTaskPowerBiTaskPowerBiModel":{"properties":{"authenticationMethod":{"type":"string","description":"How the published Power BI model authenticates to Databricks\n"},"modelName":{"type":"string","description":"The name of the Power BI model\n"},"overwriteExisting":{"type":"boolean","description":"Whether to overwrite existing Power BI models. Default is false\n"},"storageMode":{"type":"string","description":"The default storage mode of the Power BI model\n"},"workspaceName":{"type":"string","description":"The name of the Power BI workspace of the model\n"}},"type":"object"},"databricks:index/JobTaskPowerBiTaskTable:JobTaskPowerBiTaskTable":{"properties":{"catalog":{"type":"string","description":"The catalog name in Databricks\n"},"name":{"type":"string","description":"The table name in Databricks. 
If empty, all tables under the schema are selected.\n"},"schema":{"type":"string","description":"The schema name in Databricks\n"},"storageMode":{"type":"string","description":"The Power BI storage mode of the table\n"}},"type":"object"},"databricks:index/JobTaskPythonWheelTask:JobTaskPythonWheelTask":{"properties":{"entryPoint":{"type":"string","description":"Python function as entry point for the task\n"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"Named parameters for the task\n"},"packageName":{"type":"string","description":"Name of Python package\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"Parameters for the task\n"}},"type":"object"},"databricks:index/JobTaskRunJobTask:JobTaskRunJobTask":{"properties":{"dbtCommands":{"type":"array","items":{"type":"string"}},"jarParams":{"type":"array","items":{"type":"string"}},"jobId":{"type":"integer","description":"(String) ID of the job\n"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Job parameters for the task\n"},"notebookParams":{"type":"object","additionalProperties":{"type":"string"}},"pipelineParams":{"$ref":"#/types/databricks:index/JobTaskRunJobTaskPipelineParams:JobTaskRunJobTaskPipelineParams"},"pythonNamedParams":{"type":"object","additionalProperties":{"type":"string"}},"pythonParams":{"type":"array","items":{"type":"string"}},"sparkSubmitParams":{"type":"array","items":{"type":"string"}},"sqlParams":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["jobId"]},"databricks:index/JobTaskRunJobTaskPipelineParams:JobTaskRunJobTaskPipelineParams":{"properties":{"fullRefresh":{"type":"boolean","description":"(Bool) Specifies if there should be full refresh of the pipeline.\n\n\u003e The following configuration blocks are only supported inside a \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block\n"}},"type":"object"},"databricks:index/JobTaskSparkJarTask:JobTaskSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string","description":"The full name of the class containing the main method to be executed. This class must be contained in a JAR provided as a library. The code should use `SparkContext.getOrCreate` to obtain a Spark context; otherwise, runs of the job will fail.\n"},"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Parameters passed to the main method.\n"},"runAsRepl":{"type":"boolean"}},"type":"object","language":{"nodejs":{"requiredOutputs":["runAsRepl"]}}},"databricks:index/JobTaskSparkPythonTask:JobTaskSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command line parameters passed to the Python file.\n"},"pythonFile":{"type":"string","description":"The URI of the Python file to be executed. Cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.\n"},"source":{"type":"string","description":"Location type of the Python file. 
When set to `WORKSPACE` or not specified, the file will be retrieved from the local Databricks workspace or cloud location (if the\u003cspan pulumi-lang-nodejs=\" pythonFile \" pulumi-lang-dotnet=\" PythonFile \" pulumi-lang-go=\" pythonFile \" pulumi-lang-python=\" python_file \" pulumi-lang-yaml=\" pythonFile \" pulumi-lang-java=\" pythonFile \"\u003e python_file \u003c/span\u003ehas a URI format). When set to `GIT`, the Python file will be retrieved from a Git repository defined in \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e.\n* `WORKSPACE`: The Python file is located in a Databricks workspace or at a cloud filesystem URI.\n* `GIT`: The Python file is located in a remote Git repository.\n"}},"type":"object","required":["pythonFile"]},"databricks:index/JobTaskSparkSubmitTask:JobTaskSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"},"description":"(List) Command-line parameters passed to spark submit.\n"}},"type":"object"},"databricks:index/JobTaskSqlTask:JobTaskSqlTask":{"properties":{"alert":{"$ref":"#/types/databricks:index/JobTaskSqlTaskAlert:JobTaskSqlTaskAlert","description":"block consisting of following fields:\n"},"dashboard":{"$ref":"#/types/databricks:index/JobTaskSqlTaskDashboard:JobTaskSqlTaskDashboard","description":"block consisting of following fields:\n"},"file":{"$ref":"#/types/databricks:index/JobTaskSqlTaskFile:JobTaskSqlTaskFile","description":"block consisting of single string fields:\n"},"parameters":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.\n"},"query":{"$ref":"#/types/databricks:index/JobTaskSqlTaskQuery:JobTaskSqlTaskQuery","description":"block consisting of single string field: \u003cspan pulumi-lang-nodejs=\"`queryId`\" pulumi-lang-dotnet=\"`QueryId`\" pulumi-lang-go=\"`queryId`\" pulumi-lang-python=\"`query_id`\" pulumi-lang-yaml=\"`queryId`\" pulumi-lang-java=\"`queryId`\"\u003e`query_id`\u003c/span\u003e - identifier of the Databricks Query (databricks_query).\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse (databricks_sql_endpoint) that will be used to execute the task.  
Only Serverless \u0026 Pro warehouses are supported right now.\n"}},"type":"object","required":["warehouseId"]},"databricks:index/JobTaskSqlTaskAlert:JobTaskSqlTaskAlert":{"properties":{"alertId":{"type":"string","description":"(String) identifier of the Databricks Alert (databricks_alert).\n"},"pauseSubscriptions":{"type":"boolean","description":"flag that specifies if subscriptions are paused or not.\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskSqlTaskAlertSubscription:JobTaskSqlTaskAlertSubscription"},"description":"a list of subscription blocks consisting out of one of the required fields: \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e for user emails or \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e - for Alert destination's identifier.\n"}},"type":"object","required":["alertId"]},"databricks:index/JobTaskSqlTaskAlertSubscription:JobTaskSqlTaskAlertSubscription":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string"}},"type":"object"},"databricks:index/JobTaskSqlTaskDashboard:JobTaskSqlTaskDashboard":{"properties":{"customSubject":{"type":"string","description":"string specifying a custom subject of email sent.\n"},"dashboardId":{"type":"string","description":"(String) identifier of the Databricks SQL Dashboard databricks_sql_dashboard.\n"},"pauseSubscriptions":{"type":"boolean","description":"flag that specifies if subscriptions are paused or not.\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskSqlTaskDashboardSubscription:JobTaskSqlTaskDashboardSubscription"},"description":"a list of subscription blocks consisting out of one of the required fields: \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e for user emails or \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e - for Alert destination's identifier.\n"}},"type":"object","required":["dashboardId"]},"databricks:index/JobTaskSqlTaskDashboardSubscription:JobTaskSqlTaskDashboardSubscription":{"properties":{"destinationId":{"type":"string","description":"A snapshot of the dashboard will be sent to the destination when the \u003cspan pulumi-lang-nodejs=\"`destinationId`\" pulumi-lang-dotnet=\"`DestinationId`\" pulumi-lang-go=\"`destinationId`\" pulumi-lang-python=\"`destination_id`\" pulumi-lang-yaml=\"`destinationId`\" 
pulumi-lang-java=\"`destinationId`\"\u003e`destination_id`\u003c/span\u003e field is present.\n"},"userName":{"type":"string"}},"type":"object"},"databricks:index/JobTaskSqlTaskFile:JobTaskSqlTaskFile":{"properties":{"path":{"type":"string","description":"If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `GIT`: Relative path to the file in the repository specified in the \u003cspan pulumi-lang-nodejs=\"`gitSource`\" pulumi-lang-dotnet=\"`GitSource`\" pulumi-lang-go=\"`gitSource`\" pulumi-lang-python=\"`git_source`\" pulumi-lang-yaml=\"`gitSource`\" pulumi-lang-java=\"`gitSource`\"\u003e`git_source`\u003c/span\u003e block with SQL commands to execute. If \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e is `WORKSPACE`: Absolute path to the file in the workspace with SQL commands to execute.\n\nExample\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sqlAggregationJob = new databricks.Job(\"sql_aggregation_job\", {\n    name: \"Example SQL Job\",\n    tasks: [\n        {\n            taskKey: \"run_agg_query\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                query: {\n                    queryId: aggQuery.id,\n                },\n            },\n        },\n        {\n            taskKey: \"run_dashboard\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                dashboard: {\n                    dashboardId: dash.id,\n                    subscriptions: [{\n                        userName: \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n        {\n            taskKey: \"run_alert\",\n            sqlTask: {\n                warehouseId: sqlJobWarehouse.id,\n                alert: {\n                    alertId: alert.id,\n                    subscriptions: [{\n                        userName: \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsql_aggregation_job = databricks.Job(\"sql_aggregation_job\",\n    name=\"Example SQL Job\",\n    tasks=[\n        {\n            \"task_key\": \"run_agg_query\",\n            \"sql_task\": {\n                \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"query\": {\n                    \"query_id\": agg_query[\"id\"],\n                },\n            },\n        },\n        {\n            \"task_key\": \"run_dashboard\",\n            \"sql_task\": {\n                \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"dashboard\": {\n                    \"dashboard_id\": dash[\"id\"],\n                    \"subscriptions\": [{\n                        \"user_name\": \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n        {\n            \"task_key\": \"run_alert\",\n            \"sql_task\": {\n                \"warehouse_id\": sql_job_warehouse[\"id\"],\n                \"alert\": {\n                    \"alert_id\": alert[\"id\"],\n                    
\"subscriptions\": [{\n                        \"user_name\": \"user@domain.com\",\n                    }],\n                },\n            },\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sqlAggregationJob = new Databricks.Job(\"sql_aggregation_job\", new()\n    {\n        Name = \"Example SQL Job\",\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_agg_query\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Query = new Databricks.Inputs.JobTaskSqlTaskQueryArgs\n                    {\n                        QueryId = aggQuery.Id,\n                    },\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_dashboard\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Dashboard = new Databricks.Inputs.JobTaskSqlTaskDashboardArgs\n                    {\n                        DashboardId = dash.Id,\n                        Subscriptions = new[]\n                        {\n                            new Databricks.Inputs.JobTaskSqlTaskDashboardSubscriptionArgs\n                            {\n                                UserName = \"user@domain.com\",\n                            },\n                        },\n                    },\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"run_alert\",\n                SqlTask = new Databricks.Inputs.JobTaskSqlTaskArgs\n                {\n                    WarehouseId = sqlJobWarehouse.Id,\n                    Alert = new Databricks.Inputs.JobTaskSqlTaskAlertArgs\n                    {\n                        AlertId = alert.Id,\n                        Subscriptions = new[]\n                        {\n                            new Databricks.Inputs.JobTaskSqlTaskAlertSubscriptionArgs\n                            {\n                                UserName = \"user@domain.com\",\n                            },\n                        },\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewJob(ctx, \"sql_aggregation_job\", \u0026databricks.JobArgs{\n\t\t\tName: pulumi.String(\"Example SQL Job\"),\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_agg_query\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tQuery: \u0026databricks.JobTaskSqlTaskQueryArgs{\n\t\t\t\t\t\t\tQueryId: pulumi.Any(aggQuery.Id),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_dashboard\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tDashboard: 
\u0026databricks.JobTaskSqlTaskDashboardArgs{\n\t\t\t\t\t\t\tDashboardId: pulumi.Any(dash.Id),\n\t\t\t\t\t\t\tSubscriptions: databricks.JobTaskSqlTaskDashboardSubscriptionArray{\n\t\t\t\t\t\t\t\t\u0026databricks.JobTaskSqlTaskDashboardSubscriptionArgs{\n\t\t\t\t\t\t\t\t\tUserName: pulumi.String(\"user@domain.com\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"run_alert\"),\n\t\t\t\t\tSqlTask: \u0026databricks.JobTaskSqlTaskArgs{\n\t\t\t\t\t\tWarehouseId: pulumi.Any(sqlJobWarehouse.Id),\n\t\t\t\t\t\tAlert: \u0026databricks.JobTaskSqlTaskAlertArgs{\n\t\t\t\t\t\t\tAlertId: pulumi.Any(alert.Id),\n\t\t\t\t\t\t\tSubscriptions: databricks.JobTaskSqlTaskAlertSubscriptionArray{\n\t\t\t\t\t\t\t\t\u0026databricks.JobTaskSqlTaskAlertSubscriptionArgs{\n\t\t\t\t\t\t\t\t\tUserName: pulumi.String(\"user@domain.com\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskQueryArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskDashboardArgs;\nimport com.pulumi.databricks.inputs.JobTaskSqlTaskAlertArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sqlAggregationJob = new Job(\"sqlAggregationJob\", JobArgs.builder()\n            .name(\"Example SQL Job\")\n            .tasks(            \n                JobTaskArgs.builder()\n                    .taskKey(\"run_agg_query\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        .query(JobTaskSqlTaskQueryArgs.builder()\n                            .queryId(aggQuery.id())\n                            .build())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"run_dashboard\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        .dashboard(JobTaskSqlTaskDashboardArgs.builder()\n                            .dashboardId(dash.id())\n                            .subscriptions(JobTaskSqlTaskDashboardSubscriptionArgs.builder()\n                                .userName(\"user@domain.com\")\n                                .build())\n                            .build())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"run_alert\")\n                    .sqlTask(JobTaskSqlTaskArgs.builder()\n                        .warehouseId(sqlJobWarehouse.id())\n                        .alert(JobTaskSqlTaskAlertArgs.builder()\n                            .alertId(alert.id())\n                            .subscriptions(JobTaskSqlTaskAlertSubscriptionArgs.builder()\n            
                    .userName(\"user@domain.com\")\n                                .build())\n                            .build())\n                        .build())\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sqlAggregationJob:\n    type: databricks:Job\n    name: sql_aggregation_job\n    properties:\n      name: Example SQL Job\n      tasks:\n        - taskKey: run_agg_query\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            query:\n              queryId: ${aggQuery.id}\n        - taskKey: run_dashboard\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            dashboard:\n              dashboardId: ${dash.id}\n              subscriptions:\n                - userName: user@domain.com\n        - taskKey: run_alert\n          sqlTask:\n            warehouseId: ${sqlJobWarehouse.id}\n            alert:\n              alertId: ${alert.id}\n              subscriptions:\n                - userName: user@domain.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"source":{"type":"string","description":"The source of the project. Possible values are `WORKSPACE` and `GIT`.\n"}},"type":"object","required":["path"]},"databricks:index/JobTaskSqlTaskQuery:JobTaskSqlTaskQuery":{"properties":{"queryId":{"type":"string"}},"type":"object","required":["queryId"]},"databricks:index/JobTaskWebhookNotifications:JobTaskWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded:JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded"},"description":"(List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskWebhookNotificationsOnFailure:JobTaskWebhookNotificationsOnFailure"},"description":"(List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.\n"},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskWebhookNotificationsOnStart:JobTaskWebhookNotificationsOnStart"},"description":"(List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskWebhookNotificationsOnStreamingBacklogExceeded:JobTaskWebhookNotificationsOnStreamingBacklogExceeded"},"description":"(List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\n\nNote that the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e is not to be confused with the name of the alert destination. 
The \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e can be retrieved through the API or the URL of Databricks UI `https://\u003cworkspace host\u003e/sql/destinations/\u003cnotification id\u003e?o=\u003cworkspace id\u003e`\n\nExample\n\n"},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTaskWebhookNotificationsOnSuccess:JobTaskWebhookNotificationsOnSuccess"},"description":"(List) list of notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified.\n"}},"type":"object"},"databricks:index/JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded:JobTaskWebhookNotificationsOnDurationWarningThresholdExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskWebhookNotificationsOnFailure:JobTaskWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskWebhookNotificationsOnStart:JobTaskWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskWebhookNotificationsOnStreamingBacklogExceeded:JobTaskWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTaskWebhookNotificationsOnSuccess:JobTaskWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobTrigger:JobTrigger":{"properties":{"fileArrival":{"$ref":"#/types/databricks:index/JobTriggerFileArrival:JobTriggerFileArrival","description":"configuration block to define a trigger for [File Arrival events](https://learn.microsoft.com/en-us/azure/databricks/workflows/jobs/file-arrival-triggers) consisting of following attributes:\n"},"model":{"$ref":"#/types/databricks:index/JobTriggerModel:JobTriggerModel"},"pauseStatus":{"type":"string","description":"Indicate whether this trigger is paused or not. Either `PAUSED` or `UNPAUSED`. 
When the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e field is omitted in the block, the server will default to using `UNPAUSED` as a value for \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e.\n"},"periodic":{"$ref":"#/types/databricks:index/JobTriggerPeriodic:JobTriggerPeriodic","description":"configuration block to define a trigger for Periodic Triggers consisting of the following attributes:\n"},"tableUpdate":{"$ref":"#/types/databricks:index/JobTriggerTableUpdate:JobTriggerTableUpdate","description":"configuration block to define a trigger for [Table Updates](https://docs.databricks.com/aws/en/jobs/trigger-table-update) consisting of following attributes:\n"}},"type":"object"},"databricks:index/JobTriggerFileArrival:JobTriggerFileArrival":{"properties":{"minTimeBetweenTriggersSeconds":{"type":"integer","description":"If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.\n"},"url":{"type":"string","description":"URL to be monitored for file arrivals. The path must point to the root or a subpath of the external location. Please note that the URL must have a trailing slash character (`/`).\n"},"waitAfterLastChangeSeconds":{"type":"integer","description":"If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.\n"}},"type":"object","required":["url"]},"databricks:index/JobTriggerModel:JobTriggerModel":{"properties":{"aliases":{"type":"array","items":{"type":"string"}},"condition":{"type":"string","description":"The table(s) condition based on which to trigger a job run.  Possible values are `ANY_UPDATED`, `ALL_UPDATED`.\n"},"minTimeBetweenTriggersSeconds":{"type":"integer"},"securableName":{"type":"string"},"waitAfterLastChangeSeconds":{"type":"integer"}},"type":"object","required":["condition"]},"databricks:index/JobTriggerPeriodic:JobTriggerPeriodic":{"properties":{"interval":{"type":"integer","description":"Specifies the interval at which the job should run.\n"},"unit":{"type":"string","description":"The unit of time for the interval.  Possible values are: `DAYS`, `HOURS`, `WEEKS`.\n"}},"type":"object","required":["interval","unit"]},"databricks:index/JobTriggerTableUpdate:JobTriggerTableUpdate":{"properties":{"condition":{"type":"string","description":"The table(s) condition based on which to trigger a job run.  Possible values are `ANY_UPDATED`, `ALL_UPDATED`.\n"},"minTimeBetweenTriggersSeconds":{"type":"integer","description":"If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.\n"},"tableNames":{"type":"array","items":{"type":"string"},"description":"A non-empty list of tables to monitor for changes. 
The table name must be in the format `catalog_name.schema_name.table_name`.\n"},"waitAfterLastChangeSeconds":{"type":"integer","description":"If set, the trigger starts a run only after no file activity has occurred for the specified amount of time. This makes it possible to wait for a batch of incoming files to arrive before triggering a run. The minimum allowed value is 60 seconds.\n"}},"type":"object","required":["tableNames"]},"databricks:index/JobWebhookNotifications:JobWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobWebhookNotificationsOnDurationWarningThresholdExceeded:JobWebhookNotificationsOnDurationWarningThresholdExceeded"},"description":"(List) list of notification IDs to call when the duration of a run exceeds the threshold specified by the `RUN_DURATION_SECONDS` metric in the \u003cspan pulumi-lang-nodejs=\"`health`\" pulumi-lang-dotnet=\"`Health`\" pulumi-lang-go=\"`health`\" pulumi-lang-python=\"`health`\" pulumi-lang-yaml=\"`health`\" pulumi-lang-java=\"`health`\"\u003e`health`\u003c/span\u003e block.\n"},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/JobWebhookNotificationsOnFailure:JobWebhookNotificationsOnFailure"},"description":"(List) list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified.\n"},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/JobWebhookNotificationsOnStart:JobWebhookNotificationsOnStart"},"description":"(List) list of notification IDs to call when the run starts. A maximum of 3 destinations can be specified.\n"},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/JobWebhookNotificationsOnStreamingBacklogExceeded:JobWebhookNotificationsOnStreamingBacklogExceeded"},"description":"(List) list of notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\n\nNote that the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e is not to be confused with the name of the alert destination. The \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e can be retrieved through the API or the URL of Databricks UI `https://\u003cworkspace host\u003e/sql/destinations/\u003cnotification id\u003e?o=\u003cworkspace id\u003e`\n\nExample\n\n"},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/JobWebhookNotificationsOnSuccess:JobWebhookNotificationsOnSuccess"},"description":"(List) list of notification IDs to call when the run completes successfully. 
A maximum of 3 destinations can be specified.\n"}},"type":"object"},"databricks:index/JobWebhookNotificationsOnDurationWarningThresholdExceeded:JobWebhookNotificationsOnDurationWarningThresholdExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobWebhookNotificationsOnFailure:JobWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobWebhookNotificationsOnStart:JobWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobWebhookNotificationsOnStreamingBacklogExceeded:JobWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/JobWebhookNotificationsOnSuccess:JobWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"ID of the job\n"}},"type":"object","required":["id"]},"databricks:index/LakehouseMonitorCustomMetric:LakehouseMonitorCustomMetric":{"properties":{"definition":{"type":"string","description":"[create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition)\n"},"inputColumns":{"type":"array","items":{"type":"string"},"description":"Columns on the monitored table to apply the custom metrics to.\n"},"name":{"type":"string","description":"Name of the custom metric.\n"},"outputDataType":{"type":"string","description":"The output type of the custom metric.\n"},"type":{"type":"string","description":"The type of the custom metric.\n"}},"type":"object","required":["definition","inputColumns","name","outputDataType","type"]},"databricks:index/LakehouseMonitorDataClassificationConfig:LakehouseMonitorDataClassificationConfig":{"properties":{"enabled":{"type":"boolean"}},"type":"object"},"databricks:index/LakehouseMonitorInferenceLog:LakehouseMonitorInferenceLog":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"List of granularities to use when aggregating data into time windows based on their timestamp.\n"},"labelCol":{"type":"string","description":"Column of the model label\n"},"modelIdCol":{"type":"string","description":"Column of the model id or version\n"},"predictionCol":{"type":"string","description":"Column of the model prediction\n"},"predictionProbaCol":{"type":"string","description":"Column of the model prediction probabilities\n"},"problemType":{"type":"string","description":"Problem type the model aims to solve. 
Either `PROBLEM_TYPE_CLASSIFICATION` or `PROBLEM_TYPE_REGRESSION`\n"},"timestampCol":{"type":"string","description":"Column of the timestamp of predictions\n"}},"type":"object","required":["granularities","modelIdCol","predictionCol","problemType","timestampCol"]},"databricks:index/LakehouseMonitorNotifications:LakehouseMonitorNotifications":{"properties":{"onFailure":{"$ref":"#/types/databricks:index/LakehouseMonitorNotificationsOnFailure:LakehouseMonitorNotificationsOnFailure","description":"who to send notifications to on monitor failure.\n"},"onNewClassificationTagDetected":{"$ref":"#/types/databricks:index/LakehouseMonitorNotificationsOnNewClassificationTagDetected:LakehouseMonitorNotificationsOnNewClassificationTagDetected","description":"Who to send notifications to when new data classification tags are detected.\n"}},"type":"object"},"databricks:index/LakehouseMonitorNotificationsOnFailure:LakehouseMonitorNotificationsOnFailure":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/LakehouseMonitorNotificationsOnNewClassificationTagDetected:LakehouseMonitorNotificationsOnNewClassificationTagDetected":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/LakehouseMonitorProviderConfig:LakehouseMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/LakehouseMonitorSchedule:LakehouseMonitorSchedule":{"properties":{"pauseStatus":{"type":"string","description":"optional string field that indicates whether a schedule is paused (`PAUSED`) or not (`UNPAUSED`).\n"},"quartzCronExpression":{"type":"string","description":"string expression that determines when to run the monitor. See [Quartz documentation](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) for examples.\n"},"timezoneId":{"type":"string","description":"string with timezone id (e.g., `PST`) in which to evaluate the Quartz expression.\n"}},"type":"object","required":["quartzCronExpression","timezoneId"],"language":{"nodejs":{"requiredOutputs":["pauseStatus","quartzCronExpression","timezoneId"]}}},"databricks:index/LakehouseMonitorSnapshot:LakehouseMonitorSnapshot":{"type":"object"},"databricks:index/LakehouseMonitorTimeSeries:LakehouseMonitorTimeSeries":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"List of granularities to use when aggregating data into time windows based on their timestamp.\n"},"timestampCol":{"type":"string","description":"Column of the timestamp of predictions\n"}},"type":"object","required":["granularities","timestampCol"]},"databricks:index/LibraryCran:LibraryCran":{"properties":{"package":{"type":"string","description":"The name of the CRAN package to install.\n"},"repo":{"type":"string","description":"The repository where the package can be found. If not specified, the default CRAN repo is used.\n"}},"type":"object","required":["package"]},"databricks:index/LibraryMaven:LibraryMaven":{"properties":{"coordinates":{"type":"string","description":"Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`.\n"},"exclusions":{"type":"array","items":{"type":"string"},"description":"List of dependencies to exclude. For example: `[\"slf4j:slf4j\", \"*:hadoop-client\"]`. 
See [Maven dependency exclusions](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html) for more information.\n"},"repo":{"type":"string","description":"Maven repository to install the Maven package from. If omitted, both Maven Central Repository and Spark Packages are searched.\n"}},"type":"object","required":["coordinates"]},"databricks:index/LibraryProviderConfig:LibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/LibraryPypi:LibraryPypi":{"properties":{"package":{"type":"string","description":"The name of the PyPI package to install. An optional exact version specification is also supported. For example: \u003cspan pulumi-lang-nodejs=\"`simplejson`\" pulumi-lang-dotnet=\"`Simplejson`\" pulumi-lang-go=\"`simplejson`\" pulumi-lang-python=\"`simplejson`\" pulumi-lang-yaml=\"`simplejson`\" pulumi-lang-java=\"`simplejson`\"\u003e`simplejson`\u003c/span\u003e or `simplejson==3.8.0`.\n"},"repo":{"type":"string","description":"The repository where the package can be found. If not specified, the default pip index is used.\n"}},"type":"object","required":["package"]},"databricks:index/MaterializedFeaturesFeatureTagProviderConfig:MaterializedFeaturesFeatureTagProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/MetastoreDataAccessAwsIamRole:MetastoreDataAccessAwsIamRole":{"properties":{"externalId":{"type":"string"},"roleArn":{"type":"string","willReplaceOnChanges":true},"unityCatalogIamArn":{"type":"string"}},"type":"object","required":["roleArn"],"language":{"nodejs":{"requiredOutputs":["externalId","roleArn","unityCatalogIamArn"]}}},"databricks:index/MetastoreDataAccessAzureManagedIdentity:MetastoreDataAccessAzureManagedIdentity":{"properties":{"accessConnectorId":{"type":"string","willReplaceOnChanges":true},"credentialId":{"type":"string"},"managedIdentityId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["accessConnectorId"],"language":{"nodejs":{"requiredOutputs":["accessConnectorId","credentialId"]}}},"databricks:index/MetastoreDataAccessAzureServicePrincipal:MetastoreDataAccessAzureServicePrincipal":{"properties":{"applicationId":{"type":"string","willReplaceOnChanges":true},"clientSecret":{"type":"string","secret":true,"willReplaceOnChanges":true},"directoryId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["applicationId","clientSecret","directoryId"]},"databricks:index/MetastoreDataAccessCloudflareApiToken:MetastoreDataAccessCloudflareApiToken":{"properties":{"accessKeyId":{"type":"string","willReplaceOnChanges":true},"accountId":{"type":"string","willReplaceOnChanges":true},"secretAccessKey":{"type":"string","secret":true,"willReplaceOnChanges":true}},"type":"object","required":["accessKeyId","accountId","secretAccessKey"]},"databricks:index/MetastoreDataAccessDatabricksGcpServiceAccount:MetastoreDataAccessDatabricksGcpServiceAccount":{"properties":{"credentialId":{"type":"string"},"email":{"type":"string"}},"type":"object","language":{"nodejs":{"requiredOutputs":["credentialId","email"]}}},"databricks:index/MetastoreDataAccessGcpServiceAccountKey:Met
astoreDataAccessGcpServiceAccountKey":{"properties":{"email":{"type":"string","willReplaceOnChanges":true},"privateKey":{"type":"string","secret":true,"willReplaceOnChanges":true},"privateKeyId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["email","privateKey","privateKeyId"]},"databricks:index/MetastoreProviderProviderConfig:MetastoreProviderProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/MlflowExperimentProviderConfig:MlflowExperimentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/MlflowExperimentTag:MlflowExperimentTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/MlflowModelProviderConfig:MlflowModelProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/MlflowModelTag:MlflowModelTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/MlflowWebhookHttpUrlSpec:MlflowWebhookHttpUrlSpec":{"properties":{"authorization":{"type":"string","description":"Value of the authorization header that should be sent in the request sent by the webhook.  It should be of the form `\u003cauth type\u003e \u003ccredentials\u003e`, e.g. `Bearer \u003caccess_token\u003e`. If set to an empty string, no authorization header will be included in the request.\n"},"enableSslVerification":{"type":"boolean","description":"Enable/disable SSL certificate validation. Default is \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e. For self-signed certificates, this field must be \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e AND the destination server must disable certificate validation as well. For security purposes, it is encouraged to perform secret validation with the HMAC-encoded portion of the payload and acknowledge the risk associated with disabling hostname validation whereby it becomes more likely that requests can be maliciously routed to an unintended host.\n"},"secret":{"type":"string","description":"Shared secret required for HMAC encoding payload. The HMAC-encoded payload will be sent in the header as `X-Databricks-Signature:\u003cspan pulumi-lang-nodejs=\" encodedPayload`\" pulumi-lang-dotnet=\" EncodedPayload`\" pulumi-lang-go=\" encodedPayload`\" pulumi-lang-python=\" encoded_payload`\" pulumi-lang-yaml=\" encodedPayload`\" pulumi-lang-java=\" encodedPayload`\"\u003e encoded_payload`\u003c/span\u003e.\n","secret":true},"url":{"type":"string","description":"External HTTPS URL called on event trigger (by using a POST request). 
Structure of payload depends on the event type, refer to [documentation](https://docs.databricks.com/applications/mlflow/model-registry-webhooks.html) for more details.\n"}},"type":"object","required":["url"]},"databricks:index/MlflowWebhookJobSpec:MlflowWebhookJobSpec":{"properties":{"accessToken":{"type":"string","description":"The personal access token used to authorize webhook's job runs.\n","secret":true},"jobId":{"type":"string","description":"ID of the Databricks job that the webhook runs.\n"},"workspaceUrl":{"type":"string","description":"URL of the workspace containing the job that this webhook runs. If not specified, the job’s workspace URL is assumed to be the same as the workspace where the webhook is created.\n"}},"type":"object","required":["accessToken","jobId"]},"databricks:index/MlflowWebhookProviderConfig:MlflowWebhookProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ModelServingAiGateway:ModelServingAiGateway":{"properties":{"fallbackConfig":{"$ref":"#/types/databricks:index/ModelServingAiGatewayFallbackConfig:ModelServingAiGatewayFallbackConfig","description":"block with configuration for traffic fallback which auto fallbacks to other served entities if the request to a served entity fails with certain error codes, to increase availability.\n"},"guardrails":{"$ref":"#/types/databricks:index/ModelServingAiGatewayGuardrails:ModelServingAiGatewayGuardrails","description":"Block with configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses. Consists of the following attributes:\n"},"inferenceTableConfig":{"$ref":"#/types/databricks:index/ModelServingAiGatewayInferenceTableConfig:ModelServingAiGatewayInferenceTableConfig","description":"Block describing the configuration of usage tracking. Consists of the following attributes:\n"},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingAiGatewayRateLimit:ModelServingAiGatewayRateLimit"},"description":"Block describing rate limits for AI gateway. For details see the description of \u003cspan pulumi-lang-nodejs=\"`rateLimits`\" pulumi-lang-dotnet=\"`RateLimits`\" pulumi-lang-go=\"`rateLimits`\" pulumi-lang-python=\"`rate_limits`\" pulumi-lang-yaml=\"`rateLimits`\" pulumi-lang-java=\"`rateLimits`\"\u003e`rate_limits`\u003c/span\u003e block above.\n"},"usageTrackingConfig":{"$ref":"#/types/databricks:index/ModelServingAiGatewayUsageTrackingConfig:ModelServingAiGatewayUsageTrackingConfig","description":"Block with configuration for payload logging using inference tables. For details see the description of \u003cspan pulumi-lang-nodejs=\"`autoCaptureConfig`\" pulumi-lang-dotnet=\"`AutoCaptureConfig`\" pulumi-lang-go=\"`autoCaptureConfig`\" pulumi-lang-python=\"`auto_capture_config`\" pulumi-lang-yaml=\"`autoCaptureConfig`\" pulumi-lang-java=\"`autoCaptureConfig`\"\u003e`auto_capture_config`\u003c/span\u003e block above.\n"}},"type":"object"},"databricks:index/ModelServingAiGatewayFallbackConfig:ModelServingAiGatewayFallbackConfig":{"properties":{"enabled":{"type":"boolean","description":"Whether to enable traffic fallback. When a served entity in the serving endpoint returns specific error codes (e.g. 
500), the request will automatically be round-robin attempted with other served entities in the same endpoint, following the order of served entity list, until a successful response is returned.\n"}},"type":"object","required":["enabled"]},"databricks:index/ModelServingAiGatewayGuardrails:ModelServingAiGatewayGuardrails":{"properties":{"input":{"$ref":"#/types/databricks:index/ModelServingAiGatewayGuardrailsInput:ModelServingAiGatewayGuardrailsInput","description":"A block with configuration for input guardrail filters:\n"},"output":{"$ref":"#/types/databricks:index/ModelServingAiGatewayGuardrailsOutput:ModelServingAiGatewayGuardrailsOutput","description":"A block with configuration for output guardrail filters.  Has the same structure as \u003cspan pulumi-lang-nodejs=\"`input`\" pulumi-lang-dotnet=\"`Input`\" pulumi-lang-go=\"`input`\" pulumi-lang-python=\"`input`\" pulumi-lang-yaml=\"`input`\" pulumi-lang-java=\"`input`\"\u003e`input`\u003c/span\u003e block.\n"}},"type":"object"},"databricks:index/ModelServingAiGatewayGuardrailsInput:ModelServingAiGatewayGuardrailsInput":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"},"description":"List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.\n","deprecationMessage":"Please use 'pii' and 'safety' instead."},"pii":{"$ref":"#/types/databricks:index/ModelServingAiGatewayGuardrailsInputPii:ModelServingAiGatewayGuardrailsInputPii","description":"Block with configuration for guardrail PII filter:\n"},"safety":{"type":"boolean","description":"the boolean flag that indicates whether the safety filter is enabled.\n"},"validTopics":{"type":"array","items":{"type":"string"},"description":"The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.\n","deprecationMessage":"Please use 'pii' and 'safety' instead."}},"type":"object"},"databricks:index/ModelServingAiGatewayGuardrailsInputPii:ModelServingAiGatewayGuardrailsInputPii":{"properties":{"behavior":{"type":"string","description":"a string that describes the behavior for PII filter. Currently only `BLOCK` value is supported.\n"}},"type":"object"},"databricks:index/ModelServingAiGatewayGuardrailsOutput:ModelServingAiGatewayGuardrailsOutput":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"},"description":"List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.\n","deprecationMessage":"Please use 'pii' and 'safety' instead."},"pii":{"$ref":"#/types/databricks:index/ModelServingAiGatewayGuardrailsOutputPii:ModelServingAiGatewayGuardrailsOutputPii","description":"Block with configuration for guardrail PII filter:\n"},"safety":{"type":"boolean","description":"the boolean flag that indicates whether the safety filter is enabled.\n"},"validTopics":{"type":"array","items":{"type":"string"},"description":"The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.\n","deprecationMessage":"Please use 'pii' and 'safety' instead."}},"type":"object"},"databricks:index/ModelServingAiGatewayGuardrailsOutputPii:ModelServingAiGatewayGuardrailsOutputPii":{"properties":{"behavior":{"type":"string","description":"a string that describes the behavior for PII filter. 
Currently only `BLOCK` value is supported.\n"}},"type":"object"},"databricks:index/ModelServingAiGatewayInferenceTableConfig:ModelServingAiGatewayInferenceTableConfig":{"properties":{"catalogName":{"type":"string","description":"The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if it was already set.\n"},"enabled":{"type":"boolean","description":"boolean flag specifying if usage tracking is enabled.\n"},"schemaName":{"type":"string","description":"The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if it was already set.\n"},"tableNamePrefix":{"type":"string","description":"The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if it was already set.\n"}},"type":"object"},"databricks:index/ModelServingAiGatewayRateLimit:ModelServingAiGatewayRateLimit":{"properties":{"calls":{"type":"integer","description":"Used to specify how many calls are allowed for a key within the renewal_period.\n"},"key":{"type":"string","description":"Key field for a serving endpoint rate limit. Currently, \u003cspan pulumi-lang-nodejs=\"`user`\" pulumi-lang-dotnet=\"`User`\" pulumi-lang-go=\"`user`\" pulumi-lang-python=\"`user`\" pulumi-lang-yaml=\"`user`\" pulumi-lang-java=\"`user`\"\u003e`user`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`userGroup`\" pulumi-lang-dotnet=\"`UserGroup`\" pulumi-lang-go=\"`userGroup`\" pulumi-lang-python=\"`user_group`\" pulumi-lang-yaml=\"`userGroup`\" pulumi-lang-java=\"`userGroup`\"\u003e`user_group`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`servicePrincipal`\" pulumi-lang-dotnet=\"`ServicePrincipal`\" pulumi-lang-go=\"`servicePrincipal`\" pulumi-lang-python=\"`service_principal`\" pulumi-lang-yaml=\"`servicePrincipal`\" pulumi-lang-java=\"`servicePrincipal`\"\u003e`service_principal`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e are supported, with \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e being the default if not specified.\n"},"principal":{"type":"string","description":"Principal field for a user, user group, or service principal to apply rate limiting to. Accepts a user email, group name, or service principal application ID.\n"},"renewalPeriod":{"type":"string","description":"Renewal period field for a serving endpoint rate limit. 
Currently, only \u003cspan pulumi-lang-nodejs=\"`minute`\" pulumi-lang-dotnet=\"`Minute`\" pulumi-lang-go=\"`minute`\" pulumi-lang-python=\"`minute`\" pulumi-lang-yaml=\"`minute`\" pulumi-lang-java=\"`minute`\"\u003e`minute`\u003c/span\u003e is supported.\n"},"tokens":{"type":"integer","description":"Specifies how many tokens are allowed for a key within the renewal_period.\n"}},"type":"object","required":["renewalPeriod"]},"databricks:index/ModelServingAiGatewayUsageTrackingConfig:ModelServingAiGatewayUsageTrackingConfig":{"properties":{"enabled":{"type":"boolean"}},"type":"object"},"databricks:index/ModelServingConfig:ModelServingConfig":{"properties":{"autoCaptureConfig":{"$ref":"#/types/databricks:index/ModelServingConfigAutoCaptureConfig:ModelServingConfigAutoCaptureConfig","description":"Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.\n"},"servedEntities":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntity:ModelServingConfigServedEntity"},"description":"A list of served entities for the endpoint to serve. A serving endpoint can have up to 10 served entities.\n"},"servedModels":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingConfigServedModel:ModelServingConfigServedModel"},"description":"Each block represents a served model for the endpoint to serve. A model serving endpoint can have up to 10 served models.\n","deprecationMessage":"Please use 'config.served_entities' instead of 'config.served_models'."},"trafficConfig":{"$ref":"#/types/databricks:index/ModelServingConfigTrafficConfig:ModelServingConfigTrafficConfig","description":"A single block represents the traffic split configuration amongst the served models.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["trafficConfig"]}}},"databricks:index/ModelServingConfigAutoCaptureConfig:ModelServingConfigAutoCaptureConfig":{"properties":{"catalogName":{"type":"string","description":"The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if it was already set.\n","willReplaceOnChanges":true},"enabled":{"type":"boolean","description":"If inference tables are enabled or not. NOTE: If you have already disabled payload logging once, you cannot enable it again.\n"},"schemaName":{"type":"string","description":"The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if it was already set.\n","willReplaceOnChanges":true},"tableNamePrefix":{"type":"string","description":"The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if it was already set.\n","willReplaceOnChanges":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["enabled","tableNamePrefix"]}}},"databricks:index/ModelServingConfigServedEntity:ModelServingConfigServedEntity":{"properties":{"burstScalingEnabled":{"type":"boolean"},"entityName":{"type":"string","description":"The name of the entity to be served. The entity may be a model in the Databricks Model Registry, a model in the Unity Catalog (UC), or a function of type `FEATURE_SPEC` in the UC. 
If it is a UC object, the full name of the object should be given in the form of `catalog_name.schema_name.model_name`.\n"},"entityVersion":{"type":"string","description":"The version of the model in Databricks Model Registry to be served or empty if the entity is a `FEATURE_SPEC`.\n"},"environmentVars":{"type":"object","additionalProperties":{"type":"string"},"description":"An object containing a set of optional, user-specified environment variable key-value pairs used for serving this entity. Note: this is an experimental feature and is subject to change. Example entity environment variables that refer to Databricks secrets: ```{\"OPENAI_API_KEY\": \"{{secrets/my_scope/my_key}}\", \"DATABRICKS_TOKEN\": \"{{secrets/my_scope2/my_key2}}\"}```\n"},"externalModel":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModel:ModelServingConfigServedEntityExternalModel","description":"The external model to be served. NOTE: Only one of \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e and (\u003cspan pulumi-lang-nodejs=\"`entityName`\" pulumi-lang-dotnet=\"`EntityName`\" pulumi-lang-go=\"`entityName`\" pulumi-lang-python=\"`entity_name`\" pulumi-lang-yaml=\"`entityName`\" pulumi-lang-java=\"`entityName`\"\u003e`entity_name`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`entityVersion`\" pulumi-lang-dotnet=\"`EntityVersion`\" pulumi-lang-go=\"`entityVersion`\" pulumi-lang-python=\"`entity_version`\" pulumi-lang-yaml=\"`entityVersion`\" pulumi-lang-java=\"`entityVersion`\"\u003e`entity_version`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workloadSize`\" pulumi-lang-dotnet=\"`WorkloadSize`\" pulumi-lang-go=\"`workloadSize`\" pulumi-lang-python=\"`workload_size`\" pulumi-lang-yaml=\"`workloadSize`\" pulumi-lang-java=\"`workloadSize`\"\u003e`workload_size`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workloadType`\" pulumi-lang-dotnet=\"`WorkloadType`\" pulumi-lang-go=\"`workloadType`\" pulumi-lang-python=\"`workload_type`\" pulumi-lang-yaml=\"`workloadType`\" pulumi-lang-java=\"`workloadType`\"\u003e`workload_type`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`scaleToZeroEnabled`\" pulumi-lang-dotnet=\"`ScaleToZeroEnabled`\" pulumi-lang-go=\"`scaleToZeroEnabled`\" pulumi-lang-python=\"`scale_to_zero_enabled`\" pulumi-lang-yaml=\"`scaleToZeroEnabled`\" pulumi-lang-java=\"`scaleToZeroEnabled`\"\u003e`scale_to_zero_enabled`\u003c/span\u003e) can be specified with the latter set being used for custom model serving for a Databricks registered model. When an \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e is present, the served entities list can only have one \u003cspan pulumi-lang-nodejs=\"`servedEntity`\" pulumi-lang-dotnet=\"`ServedEntity`\" pulumi-lang-go=\"`servedEntity`\" pulumi-lang-python=\"`served_entity`\" pulumi-lang-yaml=\"`servedEntity`\" pulumi-lang-java=\"`servedEntity`\"\u003e`served_entity`\u003c/span\u003e object. 
An existing endpoint with \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e can not be updated to an endpoint without \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e. If the endpoint is created without \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e, users cannot update it to add \u003cspan pulumi-lang-nodejs=\"`externalModel`\" pulumi-lang-dotnet=\"`ExternalModel`\" pulumi-lang-go=\"`externalModel`\" pulumi-lang-python=\"`external_model`\" pulumi-lang-yaml=\"`externalModel`\" pulumi-lang-java=\"`externalModel`\"\u003e`external_model`\u003c/span\u003e later.\n"},"instanceProfileArn":{"type":"string","description":"ARN of the instance profile that the served entity uses to access AWS resources.\n"},"maxProvisionedConcurrency":{"type":"integer","description":"The maximum provisioned concurrency that the endpoint can scale up to. Conflicts with \u003cspan pulumi-lang-nodejs=\"`workloadSize`\" pulumi-lang-dotnet=\"`WorkloadSize`\" pulumi-lang-go=\"`workloadSize`\" pulumi-lang-python=\"`workload_size`\" pulumi-lang-yaml=\"`workloadSize`\" pulumi-lang-java=\"`workloadSize`\"\u003e`workload_size`\u003c/span\u003e.\n"},"maxProvisionedThroughput":{"type":"integer","description":"The maximum tokens per second that the endpoint can scale up to.\n"},"minProvisionedConcurrency":{"type":"integer","description":"The minimum provisioned concurrency that the endpoint can scale down to. Conflicts with \u003cspan pulumi-lang-nodejs=\"`workloadSize`\" pulumi-lang-dotnet=\"`WorkloadSize`\" pulumi-lang-go=\"`workloadSize`\" pulumi-lang-python=\"`workload_size`\" pulumi-lang-yaml=\"`workloadSize`\" pulumi-lang-java=\"`workloadSize`\"\u003e`workload_size`\u003c/span\u003e.\n"},"minProvisionedThroughput":{"type":"integer","description":"The minimum tokens per second that the endpoint can scale down to.\n"},"name":{"type":"string","description":"The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field defaults to `external_model.name`, with '.' and ':' replaced with '-', and if not specified for other entities, it defaults to -.\n"},"provisionedModelUnits":{"type":"integer"},"scaleToZeroEnabled":{"type":"boolean","description":"Whether the compute resources for the served entity should scale down to zero.\n"},"workloadSize":{"type":"string","description":"The workload size of the served entity. The workload size corresponds to a range of provisioned concurrency that the compute autoscales between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are `Small` (4 - 4 provisioned concurrency), `Medium` (8 - 16 provisioned concurrency), and `Large` (16 - 64 provisioned concurrency). 
If `scale-to-zero` is enabled, the lower bound of the provisioned concurrency for each workload size is 0. Conflicts with \u003cspan pulumi-lang-nodejs=\"`minProvisionedConcurrency`\" pulumi-lang-dotnet=\"`MinProvisionedConcurrency`\" pulumi-lang-go=\"`minProvisionedConcurrency`\" pulumi-lang-python=\"`min_provisioned_concurrency`\" pulumi-lang-yaml=\"`minProvisionedConcurrency`\" pulumi-lang-java=\"`minProvisionedConcurrency`\"\u003e`min_provisioned_concurrency`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`maxProvisionedConcurrency`\" pulumi-lang-dotnet=\"`MaxProvisionedConcurrency`\" pulumi-lang-go=\"`maxProvisionedConcurrency`\" pulumi-lang-python=\"`max_provisioned_concurrency`\" pulumi-lang-yaml=\"`maxProvisionedConcurrency`\" pulumi-lang-java=\"`maxProvisionedConcurrency`\"\u003e`max_provisioned_concurrency`\u003c/span\u003e.\n"},"workloadType":{"type":"string","description":"The workload type of the served entity. The workload type selects which type of compute to use in the endpoint. The default value for this parameter is `CPU`. For deep learning workloads, GPU acceleration is available by selecting workload types like `GPU_SMALL` and others. See the available [GPU types](https://docs.databricks.com/machine-learning/model-serving/create-manage-serving-endpoints.html#gpu-workload-types).\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["name","workloadType"]}}},"databricks:index/ModelServingConfigServedEntityExternalModel:ModelServingConfigServedEntityExternalModel":{"properties":{"ai21labsConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelAi21labsConfig:ModelServingConfigServedEntityExternalModelAi21labsConfig","description":"AI21Labs Config\n"},"amazonBedrockConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelAmazonBedrockConfig:ModelServingConfigServedEntityExternalModelAmazonBedrockConfig","description":"Amazon Bedrock Config\n"},"anthropicConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelAnthropicConfig:ModelServingConfigServedEntityExternalModelAnthropicConfig","description":"Anthropic Config\n"},"cohereConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelCohereConfig:ModelServingConfigServedEntityExternalModelCohereConfig","description":"Cohere Config\n"},"customProviderConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfig:ModelServingConfigServedEntityExternalModelCustomProviderConfig","description":"Custom Provider Config. 
Only required if the provider is 'custom'.\n"},"databricksModelServingConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelDatabricksModelServingConfig:ModelServingConfigServedEntityExternalModelDatabricksModelServingConfig","description":"Databricks Model Serving Config\n"},"googleCloudVertexAiConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelGoogleCloudVertexAiConfig:ModelServingConfigServedEntityExternalModelGoogleCloudVertexAiConfig","description":"Google Cloud Vertex AI Config.\n"},"name":{"type":"string","description":"The name of the external model.\n"},"openaiConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelOpenaiConfig:ModelServingConfigServedEntityExternalModelOpenaiConfig","description":"OpenAI Config\n"},"palmConfig":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelPalmConfig:ModelServingConfigServedEntityExternalModelPalmConfig","description":"PaLM Config\n"},"provider":{"type":"string","description":"The name of the provider for the external model. Currently, the supported providers are \u003cspan pulumi-lang-nodejs=\"`ai21labs`\" pulumi-lang-dotnet=\"`Ai21labs`\" pulumi-lang-go=\"`ai21labs`\" pulumi-lang-python=\"`ai21labs`\" pulumi-lang-yaml=\"`ai21labs`\" pulumi-lang-java=\"`ai21labs`\"\u003e`ai21labs`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`anthropic`\" pulumi-lang-dotnet=\"`Anthropic`\" pulumi-lang-go=\"`anthropic`\" pulumi-lang-python=\"`anthropic`\" pulumi-lang-yaml=\"`anthropic`\" pulumi-lang-java=\"`anthropic`\"\u003e`anthropic`\u003c/span\u003e, `amazon-bedrock`, \u003cspan pulumi-lang-nodejs=\"`cohere`\" pulumi-lang-dotnet=\"`Cohere`\" pulumi-lang-go=\"`cohere`\" pulumi-lang-python=\"`cohere`\" pulumi-lang-yaml=\"`cohere`\" pulumi-lang-java=\"`cohere`\"\u003e`cohere`\u003c/span\u003e, `databricks-model-serving`, `google-cloud-vertex-ai`, \u003cspan pulumi-lang-nodejs=\"`openai`\" pulumi-lang-dotnet=\"`Openai`\" pulumi-lang-go=\"`openai`\" pulumi-lang-python=\"`openai`\" pulumi-lang-yaml=\"`openai`\" pulumi-lang-java=\"`openai`\"\u003e`openai`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`palm`\" pulumi-lang-dotnet=\"`Palm`\" pulumi-lang-go=\"`palm`\" pulumi-lang-python=\"`palm`\" pulumi-lang-yaml=\"`palm`\" pulumi-lang-java=\"`palm`\"\u003e`palm`\u003c/span\u003e.\n"},"task":{"type":"string","description":"The task type of the external model.\n"}},"type":"object","required":["name","provider","task"]},"databricks:index/ModelServingConfigServedEntityExternalModelAi21labsConfig:ModelServingConfigServedEntityExternalModelAi21labsConfig":{"properties":{"ai21labsApiKey":{"type":"string","description":"The Databricks secret key reference for an AI21Labs API key.\n"},"ai21labsApiKeyPlaintext":{"type":"string","description":"An AI21 Labs API key provided as a plaintext string.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedEntityExternalModelAmazonBedrockConfig:ModelServingConfigServedEntityExternalModelAmazonBedrockConfig":{"properties":{"awsAccessKeyId":{"type":"string","description":"The Databricks secret key reference for an AWS Access Key ID with permissions to interact with Bedrock services.\n"},"awsAccessKeyIdPlaintext":{"type":"string","description":"An AWS access key ID with permissions to interact with Bedrock services provided as a plaintext string.\n"},"awsRegion":{"type":"string","description":"The AWS region to use. 
Bedrock has to be enabled there.\n"},"awsSecretAccessKey":{"type":"string","description":"The Databricks secret key reference for an AWS Secret Access Key paired with the access key ID, with permissions to interact with Bedrock services.\n"},"awsSecretAccessKeyPlaintext":{"type":"string","description":"An AWS secret access key paired with the access key ID, with permissions to interact with Bedrock services provided as a plaintext string.\n"},"bedrockProvider":{"type":"string","description":"The underlying provider in Amazon Bedrock. Supported values (case insensitive) include: `Anthropic`, `Cohere`, `AI21Labs`, `Amazon`.\n"},"instanceProfileArn":{"type":"string"}},"type":"object","required":["awsRegion","bedrockProvider"]},"databricks:index/ModelServingConfigServedEntityExternalModelAnthropicConfig:ModelServingConfigServedEntityExternalModelAnthropicConfig":{"properties":{"anthropicApiKey":{"type":"string","description":"The Databricks secret key reference for an Anthropic API key.\n"},"anthropicApiKeyPlaintext":{"type":"string","description":"The Anthropic API key provided as a plaintext string.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedEntityExternalModelCohereConfig:ModelServingConfigServedEntityExternalModelCohereConfig":{"properties":{"cohereApiBase":{"type":"string"},"cohereApiKey":{"type":"string","description":"The Databricks secret key reference for a Cohere API key.\n"},"cohereApiKeyPlaintext":{"type":"string","description":"The Cohere API key provided as a plaintext string.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfig:ModelServingConfigServedEntityExternalModelCustomProviderConfig":{"properties":{"apiKeyAuth":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth:ModelServingConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth","description":"API key authentication for the custom provider API. Conflicts with \u003cspan pulumi-lang-nodejs=\"`bearerTokenAuth`\" pulumi-lang-dotnet=\"`BearerTokenAuth`\" pulumi-lang-go=\"`bearerTokenAuth`\" pulumi-lang-python=\"`bearer_token_auth`\" pulumi-lang-yaml=\"`bearerTokenAuth`\" pulumi-lang-java=\"`bearerTokenAuth`\"\u003e`bearer_token_auth`\u003c/span\u003e.\n"},"bearerTokenAuth":{"$ref":"#/types/databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth:ModelServingConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth","description":"bearer token authentication for the custom provider API.  
Conflicts with \u003cspan pulumi-lang-nodejs=\"`apiKeyAuth`\" pulumi-lang-dotnet=\"`ApiKeyAuth`\" pulumi-lang-go=\"`apiKeyAuth`\" pulumi-lang-python=\"`api_key_auth`\" pulumi-lang-yaml=\"`apiKeyAuth`\" pulumi-lang-java=\"`apiKeyAuth`\"\u003e`api_key_auth`\u003c/span\u003e.\n"},"customProviderUrl":{"type":"string","description":"URL of the custom provider API.\n"}},"type":"object","required":["customProviderUrl"]},"databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth:ModelServingConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth":{"properties":{"key":{"type":"string"},"value":{"type":"string"},"valuePlaintext":{"type":"string","description":"The API Key provided as a plaintext string.\n"}},"type":"object","required":["key"]},"databricks:index/ModelServingConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth:ModelServingConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth":{"properties":{"token":{"type":"string","description":"The Databricks secret key reference for a token.\n"},"tokenPlaintext":{"type":"string","description":"The token provided as a plaintext string.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedEntityExternalModelDatabricksModelServingConfig:ModelServingConfigServedEntityExternalModelDatabricksModelServingConfig":{"properties":{"databricksApiToken":{"type":"string","description":"The Databricks secret key reference for a Databricks API token that corresponds to a user or service principal with Can Query access to the model serving endpoint pointed to by this external model.\n"},"databricksApiTokenPlaintext":{"type":"string","description":"The Databricks API token that corresponds to a user or service principal with Can Query access to the model serving endpoint pointed to by this external model provided as a plaintext string.\n"},"databricksWorkspaceUrl":{"type":"string","description":"The URL of the Databricks workspace containing the model serving endpoint pointed to by this external model.\n"}},"type":"object","required":["databricksWorkspaceUrl"]},"databricks:index/ModelServingConfigServedEntityExternalModelGoogleCloudVertexAiConfig:ModelServingConfigServedEntityExternalModelGoogleCloudVertexAiConfig":{"properties":{"privateKey":{"type":"string","description":"The Databricks secret key reference for a private key for the service account that has access to the Google Cloud Vertex AI Service.\n"},"privateKeyPlaintext":{"type":"string","description":"The private key for the service account that has access to the Google Cloud Vertex AI Service is provided as a plaintext secret.\n"},"projectId":{"type":"string","description":"This is the Google Cloud project id that the service account is associated with.\n"},"region":{"type":"string","description":"This is the region for the Google Cloud Vertex AI Service.\n"}},"type":"object","required":["projectId","region"]},"databricks:index/ModelServingConfigServedEntityExternalModelOpenaiConfig:ModelServingConfigServedEntityExternalModelOpenaiConfig":{"properties":{"microsoftEntraClientId":{"type":"string","description":"This field is only required for Azure AD OpenAI and is the Microsoft Entra Client ID.\n"},"microsoftEntraClientSecret":{"type":"string","description":"The Databricks secret key reference for a client secret used for Microsoft Entra ID authentication.\n"},"microsoftEntraClientSecretPlaintext":{"type":"string","description":"The client secret used for Microsoft Entra ID authentication provided as a plaintext 
string.\n"},"microsoftEntraTenantId":{"type":"string","description":"This field is only required for Azure AD OpenAI and is the Microsoft Entra Tenant ID.\n"},"openaiApiBase":{"type":"string","description":"This is the base URL for the OpenAI API (default: \"\u003chttps://api.openai.com/v1\u003e\"). For Azure OpenAI, this field is required and is the base URL for the Azure OpenAI API service provided by Azure.\n"},"openaiApiKey":{"type":"string","description":"The Databricks secret key reference for an OpenAI or Azure OpenAI API key.\n"},"openaiApiKeyPlaintext":{"type":"string","description":"The OpenAI API key using the OpenAI or Azure service provided as a plaintext string.\n"},"openaiApiType":{"type":"string","description":"This is an optional field to specify the type of OpenAI API to use. For Azure OpenAI, this field is required, and this parameter represents the preferred security access validation protocol. For access token validation, use \u003cspan pulumi-lang-nodejs=\"`azure`\" pulumi-lang-dotnet=\"`Azure`\" pulumi-lang-go=\"`azure`\" pulumi-lang-python=\"`azure`\" pulumi-lang-yaml=\"`azure`\" pulumi-lang-java=\"`azure`\"\u003e`azure`\u003c/span\u003e. For authentication using Azure Active Directory (Azure AD) use, \u003cspan pulumi-lang-nodejs=\"`azuread`\" pulumi-lang-dotnet=\"`Azuread`\" pulumi-lang-go=\"`azuread`\" pulumi-lang-python=\"`azuread`\" pulumi-lang-yaml=\"`azuread`\" pulumi-lang-java=\"`azuread`\"\u003e`azuread`\u003c/span\u003e.\n"},"openaiApiVersion":{"type":"string","description":"This is an optional field to specify the OpenAI API version. For Azure OpenAI, this field is required and is the version of the Azure OpenAI service to utilize, specified by a date.\n"},"openaiDeploymentName":{"type":"string","description":"This field is only required for Azure OpenAI and is the name of the deployment resource for the Azure OpenAI service.\n"},"openaiOrganization":{"type":"string","description":"This is an optional field to specify the organization in OpenAI or Azure OpenAI.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedEntityExternalModelPalmConfig:ModelServingConfigServedEntityExternalModelPalmConfig":{"properties":{"palmApiKey":{"type":"string","description":"The Databricks secret key reference for a PaLM API key.\n"},"palmApiKeyPlaintext":{"type":"string","description":"The PaLM API key provided as a plaintext string.\n"}},"type":"object"},"databricks:index/ModelServingConfigServedModel:ModelServingConfigServedModel":{"properties":{"burstScalingEnabled":{"type":"boolean"},"environmentVars":{"type":"object","additionalProperties":{"type":"string"},"description":"a map of environment variable names/values that will be used for serving this model.  Environment variables may refer to Databricks secrets using the standard syntax: `{{secrets/secret_scope/secret_key}}`.\n"},"instanceProfileArn":{"type":"string","description":"ARN of the instance profile that the served model will use to access AWS resources.\n"},"maxProvisionedConcurrency":{"type":"integer","description":"The maximum provisioned concurrency that the endpoint can scale up to. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`workloadSize`\" pulumi-lang-dotnet=\"`WorkloadSize`\" pulumi-lang-go=\"`workloadSize`\" pulumi-lang-python=\"`workload_size`\" pulumi-lang-yaml=\"`workloadSize`\" pulumi-lang-java=\"`workloadSize`\"\u003e`workload_size`\u003c/span\u003e.\n"},"maxProvisionedThroughput":{"type":"integer","description":"The maximum tokens per second that the endpoint can scale up to.\n"},"minProvisionedConcurrency":{"type":"integer","description":"The minimum provisioned concurrency that the endpoint can scale down to. Conflicts with \u003cspan pulumi-lang-nodejs=\"`workloadSize`\" pulumi-lang-dotnet=\"`WorkloadSize`\" pulumi-lang-go=\"`workloadSize`\" pulumi-lang-python=\"`workload_size`\" pulumi-lang-yaml=\"`workloadSize`\" pulumi-lang-java=\"`workloadSize`\"\u003e`workload_size`\u003c/span\u003e.\n"},"minProvisionedThroughput":{"type":"integer","description":"The minimum tokens per second that the endpoint can scale down to.\n"},"modelName":{"type":"string","description":"The name of the model in Databricks Model Registry to be served.\n"},"modelVersion":{"type":"string","description":"The version of the model in Databricks Model Registry to be served.\n"},"name":{"type":"string","description":"The name of a served model. It must be unique across an endpoint. If not specified, this field will default to `modelname-modelversion`. A served model name can consist of alphanumeric characters, dashes, and underscores.\n"},"provisionedModelUnits":{"type":"integer"},"scaleToZeroEnabled":{"type":"boolean","description":"Whether the compute resources for the served model should scale down to zero. If `scale-to-zero` is enabled, the lower bound of the provisioned concurrency for each workload size will be 0. The default value is \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"workloadSize":{"type":"string","description":"The workload size of the served model. The workload size corresponds to a range of provisioned concurrency that the compute will autoscale between. A single unit of provisioned concurrency can process one request at a time. Valid workload sizes are `Small` (4 - 4 provisioned concurrency), `Medium` (8 - 16 provisioned concurrency), and `Large` (16 - 64 provisioned concurrency).\n"},"workloadType":{"type":"string","description":"The workload type of the served model. The workload type selects which type of compute to use in the endpoint. For deep learning workloads, GPU acceleration is available by selecting workload types like `GPU_SMALL` and others. See the documentation for all options. The default value is `CPU`.\n"}},"type":"object","required":["modelName","modelVersion"],"language":{"nodejs":{"requiredOutputs":["modelName","modelVersion","name","workloadType"]}}},"databricks:index/ModelServingConfigTrafficConfig:ModelServingConfigTrafficConfig":{"properties":{"routes":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingConfigTrafficConfigRoute:ModelServingConfigTrafficConfigRoute"},"description":"Each block represents a route that defines traffic to each served entity. 
Each \u003cspan pulumi-lang-nodejs=\"`servedEntity`\" pulumi-lang-dotnet=\"`ServedEntity`\" pulumi-lang-go=\"`servedEntity`\" pulumi-lang-python=\"`served_entity`\" pulumi-lang-yaml=\"`servedEntity`\" pulumi-lang-java=\"`servedEntity`\"\u003e`served_entity`\u003c/span\u003e block needs to have a corresponding \u003cspan pulumi-lang-nodejs=\"`routes`\" pulumi-lang-dotnet=\"`Routes`\" pulumi-lang-go=\"`routes`\" pulumi-lang-python=\"`routes`\" pulumi-lang-yaml=\"`routes`\" pulumi-lang-java=\"`routes`\"\u003e`routes`\u003c/span\u003e block.\n"}},"type":"object"},"databricks:index/ModelServingConfigTrafficConfigRoute:ModelServingConfigTrafficConfigRoute":{"properties":{"servedEntityName":{"type":"string","description":"The name of the served entity this route configures traffic for. This needs to match the name of a \u003cspan pulumi-lang-nodejs=\"`servedEntity`\" pulumi-lang-dotnet=\"`ServedEntity`\" pulumi-lang-go=\"`servedEntity`\" pulumi-lang-python=\"`served_entity`\" pulumi-lang-yaml=\"`servedEntity`\" pulumi-lang-java=\"`servedEntity`\"\u003e`served_entity`\u003c/span\u003e block.\n"},"servedModelName":{"type":"string"},"trafficPercentage":{"type":"integer","description":"The percentage of endpoint traffic to send to this route. It must be an integer between 0 and 100 inclusive.\n"}},"type":"object","required":["trafficPercentage"]},"databricks:index/ModelServingEmailNotifications:ModelServingEmailNotifications":{"properties":{"onUpdateFailures":{"type":"array","items":{"type":"string"},"description":"a list of email addresses to be notified when an endpoint fails to update its configuration or state.\n"},"onUpdateSuccesses":{"type":"array","items":{"type":"string"},"description":"a list of email addresses to be notified when an endpoint successfully updates its configuration or state.\n"}},"type":"object"},"databricks:index/ModelServingProviderConfig:ModelServingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ModelServingProvisionedThroughputAiGateway:ModelServingProvisionedThroughputAiGateway":{"properties":{"fallbackConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayFallbackConfig:ModelServingProvisionedThroughputAiGatewayFallbackConfig"},"guardrails":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrails:ModelServingProvisionedThroughputAiGatewayGuardrails","description":"Block with configuration for AI Guardrails to prevent unwanted data and unsafe data in requests and responses. Consists of the following attributes:\n"},"inferenceTableConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayInferenceTableConfig:ModelServingProvisionedThroughputAiGatewayInferenceTableConfig","description":"Block describing the configuration of usage tracking. Consists of the following attributes:\n"},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayRateLimit:ModelServingProvisionedThroughputAiGatewayRateLimit"},"description":"Block describing rate limits for AI gateway. 
For details see the description of \u003cspan pulumi-lang-nodejs=\"`rateLimits`\" pulumi-lang-dotnet=\"`RateLimits`\" pulumi-lang-go=\"`rateLimits`\" pulumi-lang-python=\"`rate_limits`\" pulumi-lang-yaml=\"`rateLimits`\" pulumi-lang-java=\"`rateLimits`\"\u003e`rate_limits`\u003c/span\u003e block above.\n"},"usageTrackingConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayUsageTrackingConfig:ModelServingProvisionedThroughputAiGatewayUsageTrackingConfig","description":"Block with configuration for payload logging using inference tables. For details see the description of \u003cspan pulumi-lang-nodejs=\"`autoCaptureConfig`\" pulumi-lang-dotnet=\"`AutoCaptureConfig`\" pulumi-lang-go=\"`autoCaptureConfig`\" pulumi-lang-python=\"`auto_capture_config`\" pulumi-lang-yaml=\"`autoCaptureConfig`\" pulumi-lang-java=\"`autoCaptureConfig`\"\u003e`auto_capture_config`\u003c/span\u003e block above.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["usageTrackingConfig"]}}},"databricks:index/ModelServingProvisionedThroughputAiGatewayFallbackConfig:ModelServingProvisionedThroughputAiGatewayFallbackConfig":{"properties":{"enabled":{"type":"boolean","description":"boolean flag specifying if usage tracking is enabled.\n"}},"type":"object","required":["enabled"]},"databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrails:ModelServingProvisionedThroughputAiGatewayGuardrails":{"properties":{"input":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsInput:ModelServingProvisionedThroughputAiGatewayGuardrailsInput","description":"A block with configuration for input guardrail filters:\n"},"output":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsOutput:ModelServingProvisionedThroughputAiGatewayGuardrailsOutput","description":"A block with configuration for output guardrail filters.  Has the same structure as \u003cspan pulumi-lang-nodejs=\"`input`\" pulumi-lang-dotnet=\"`Input`\" pulumi-lang-go=\"`input`\" pulumi-lang-python=\"`input`\" pulumi-lang-yaml=\"`input`\" pulumi-lang-java=\"`input`\"\u003e`input`\u003c/span\u003e block.\n"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsInput:ModelServingProvisionedThroughputAiGatewayGuardrailsInput":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"},"description":"List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.\n"},"pii":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsInputPii:ModelServingProvisionedThroughputAiGatewayGuardrailsInputPii","description":"Block with configuration for guardrail PII filter:\n"},"safety":{"type":"boolean","description":"the boolean flag that indicates whether the safety filter is enabled.\n"},"validTopics":{"type":"array","items":{"type":"string"},"description":"The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["pii"]}}},"databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsInputPii:ModelServingProvisionedThroughputAiGatewayGuardrailsInputPii":{"properties":{"behavior":{"type":"string","description":"a string that describes the behavior for PII filter. 
Currently only `BLOCK` value is supported.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["behavior"]}}},"databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsOutput:ModelServingProvisionedThroughputAiGatewayGuardrailsOutput":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"},"description":"List of invalid keywords. AI guardrail uses keyword or string matching to decide if the keyword exists in the request or response content.\n"},"pii":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsOutputPii:ModelServingProvisionedThroughputAiGatewayGuardrailsOutputPii","description":"Block with configuration for guardrail PII filter:\n"},"safety":{"type":"boolean","description":"the boolean flag that indicates whether the safety filter is enabled.\n"},"validTopics":{"type":"array","items":{"type":"string"},"description":"The list of allowed topics. Given a chat request, this guardrail flags the request if its topic is not in the allowed topics.\n"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputAiGatewayGuardrailsOutputPii:ModelServingProvisionedThroughputAiGatewayGuardrailsOutputPii":{"properties":{"behavior":{"type":"string","description":"a string that describes the behavior for PII filter. Currently only `BLOCK` value is supported.\n"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputAiGatewayInferenceTableConfig:ModelServingProvisionedThroughputAiGatewayInferenceTableConfig":{"properties":{"catalogName":{"type":"string"},"enabled":{"type":"boolean","description":"boolean flag specifying if usage tracking is enabled.\n"},"schemaName":{"type":"string"},"tableNamePrefix":{"type":"string"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputAiGatewayRateLimit:ModelServingProvisionedThroughputAiGatewayRateLimit":{"properties":{"calls":{"type":"integer"},"key":{"type":"string","description":"The key field for a tag.\n"},"principal":{"type":"string"},"renewalPeriod":{"type":"string"},"tokens":{"type":"integer"}},"type":"object","required":["renewalPeriod"]},"databricks:index/ModelServingProvisionedThroughputAiGatewayUsageTrackingConfig:ModelServingProvisionedThroughputAiGatewayUsageTrackingConfig":{"properties":{"enabled":{"type":"boolean","description":"boolean flag specifying if usage tracking is enabled.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["enabled"]}}},"databricks:index/ModelServingProvisionedThroughputConfig:ModelServingProvisionedThroughputConfig":{"properties":{"servedEntities":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfigServedEntity:ModelServingProvisionedThroughputConfigServedEntity"},"description":"A list of served entities for the endpoint to serve.\n"},"trafficConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfigTrafficConfig:ModelServingProvisionedThroughputConfigTrafficConfig","description":"A single block represents the traffic split configuration amongst the served models.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["trafficConfig"]}}},"databricks:index/ModelServingProvisionedThroughputConfigServedEntity:ModelServingProvisionedThroughputConfigServedEntity":{"properties":{"burstScalingEnabled":{"type":"boolean"},"entityName":{"type":"string","description":"The full path of the UC model to be served, given in the form of `catalog_name.schema_name.model_name`.\n"},"entityVersion":{"type":"string","description":"The 
version of the model in UC to be served.\n"},"name":{"type":"string","description":"The name of a served entity. It must be unique across an endpoint. A served entity name can consist of alphanumeric characters, dashes, and underscores. If not specified for an external model, this field will be created from the \u003cspan pulumi-lang-nodejs=\"`entityName`\" pulumi-lang-dotnet=\"`EntityName`\" pulumi-lang-go=\"`entityName`\" pulumi-lang-python=\"`entity_name`\" pulumi-lang-yaml=\"`entityName`\" pulumi-lang-java=\"`entityName`\"\u003e`entity_name`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`entityVersion`\" pulumi-lang-dotnet=\"`EntityVersion`\" pulumi-lang-go=\"`entityVersion`\" pulumi-lang-python=\"`entity_version`\" pulumi-lang-yaml=\"`entityVersion`\" pulumi-lang-java=\"`entityVersion`\"\u003e`entity_version`\u003c/span\u003e\n"},"provisionedModelUnits":{"type":"integer","description":"The number of model units to be provisioned.\n"}},"type":"object","required":["entityName","entityVersion","provisionedModelUnits"],"language":{"nodejs":{"requiredOutputs":["entityName","entityVersion","name","provisionedModelUnits"]}}},"databricks:index/ModelServingProvisionedThroughputConfigTrafficConfig:ModelServingProvisionedThroughputConfigTrafficConfig":{"properties":{"routes":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfigTrafficConfigRoute:ModelServingProvisionedThroughputConfigTrafficConfigRoute"},"description":"Each block represents a route that defines traffic to each served entity. Each \u003cspan pulumi-lang-nodejs=\"`servedEntity`\" pulumi-lang-dotnet=\"`ServedEntity`\" pulumi-lang-go=\"`servedEntity`\" pulumi-lang-python=\"`served_entity`\" pulumi-lang-yaml=\"`servedEntity`\" pulumi-lang-java=\"`servedEntity`\"\u003e`served_entity`\u003c/span\u003e block needs to have a corresponding \u003cspan pulumi-lang-nodejs=\"`routes`\" pulumi-lang-dotnet=\"`Routes`\" pulumi-lang-go=\"`routes`\" pulumi-lang-python=\"`routes`\" pulumi-lang-yaml=\"`routes`\" pulumi-lang-java=\"`routes`\"\u003e`routes`\u003c/span\u003e block.\n"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputConfigTrafficConfigRoute:ModelServingProvisionedThroughputConfigTrafficConfigRoute":{"properties":{"servedEntityName":{"type":"string","description":"The name of the served entity this route configures traffic for. This needs to match the name of a \u003cspan pulumi-lang-nodejs=\"`servedEntity`\" pulumi-lang-dotnet=\"`ServedEntity`\" pulumi-lang-go=\"`servedEntity`\" pulumi-lang-python=\"`served_entity`\" pulumi-lang-yaml=\"`servedEntity`\" pulumi-lang-java=\"`servedEntity`\"\u003e`served_entity`\u003c/span\u003e block.\n"},"servedModelName":{"type":"string"},"trafficPercentage":{"type":"integer","description":"The percentage of endpoint traffic to send to this route. 
It must be an integer between 0 and 100 inclusive.\n"}},"type":"object","required":["trafficPercentage"]},"databricks:index/ModelServingProvisionedThroughputEmailNotifications:ModelServingProvisionedThroughputEmailNotifications":{"properties":{"onUpdateFailures":{"type":"array","items":{"type":"string"},"description":"a list of email addresses to be notified when an endpoint fails to update its configuration or state.\n"},"onUpdateSuccesses":{"type":"array","items":{"type":"string"},"description":"a list of email addresses to be notified when an endpoint successfully updates its configuration or state.\n"}},"type":"object"},"databricks:index/ModelServingProvisionedThroughputProviderConfig:ModelServingProvisionedThroughputProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/ModelServingProvisionedThroughputTag:ModelServingProvisionedThroughputTag":{"properties":{"key":{"type":"string","description":"The key field for a tag.\n"},"value":{"type":"string","description":"The value field for a tag.\n"}},"type":"object","required":["key"]},"databricks:index/ModelServingRateLimit:ModelServingRateLimit":{"properties":{"calls":{"type":"integer","description":"Used to specify how many calls are allowed for a key within the renewal_period.\n"},"key":{"type":"string","description":"Key field for a serving endpoint rate limit. Currently, \u003cspan pulumi-lang-nodejs=\"`user`\" pulumi-lang-dotnet=\"`User`\" pulumi-lang-go=\"`user`\" pulumi-lang-python=\"`user`\" pulumi-lang-yaml=\"`user`\" pulumi-lang-java=\"`user`\"\u003e`user`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`userGroup`\" pulumi-lang-dotnet=\"`UserGroup`\" pulumi-lang-go=\"`userGroup`\" pulumi-lang-python=\"`user_group`\" pulumi-lang-yaml=\"`userGroup`\" pulumi-lang-java=\"`userGroup`\"\u003e`user_group`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`servicePrincipal`\" pulumi-lang-dotnet=\"`ServicePrincipal`\" pulumi-lang-go=\"`servicePrincipal`\" pulumi-lang-python=\"`service_principal`\" pulumi-lang-yaml=\"`servicePrincipal`\" pulumi-lang-java=\"`servicePrincipal`\"\u003e`service_principal`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e are supported, with \u003cspan pulumi-lang-nodejs=\"`endpoint`\" pulumi-lang-dotnet=\"`Endpoint`\" pulumi-lang-go=\"`endpoint`\" pulumi-lang-python=\"`endpoint`\" pulumi-lang-yaml=\"`endpoint`\" pulumi-lang-java=\"`endpoint`\"\u003e`endpoint`\u003c/span\u003e being the default if not specified.\n"},"renewalPeriod":{"type":"string","description":"Renewal period field for a serving endpoint rate limit. 
Currently, only \u003cspan pulumi-lang-nodejs=\"`minute`\" pulumi-lang-dotnet=\"`Minute`\" pulumi-lang-go=\"`minute`\" pulumi-lang-python=\"`minute`\" pulumi-lang-yaml=\"`minute`\" pulumi-lang-java=\"`minute`\"\u003e`minute`\u003c/span\u003e is supported.\n"}},"type":"object","required":["calls","renewalPeriod"]},"databricks:index/ModelServingTag:ModelServingTag":{"properties":{"key":{"type":"string","description":"The key field for a tag.\n"},"value":{"type":"string","description":"The value field for a tag.\n"}},"type":"object","required":["key"]},"databricks:index/MountAbfs:MountAbfs":{"properties":{"clientId":{"type":"string","willReplaceOnChanges":true},"clientSecretKey":{"type":"string","willReplaceOnChanges":true},"clientSecretScope":{"type":"string","willReplaceOnChanges":true},"containerName":{"type":"string","willReplaceOnChanges":true},"directory":{"type":"string","willReplaceOnChanges":true},"initializeFileSystem":{"type":"boolean","willReplaceOnChanges":true},"storageAccountName":{"type":"string","willReplaceOnChanges":true},"tenantId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["clientId","clientSecretKey","clientSecretScope","initializeFileSystem"],"language":{"nodejs":{"requiredOutputs":["clientId","clientSecretKey","clientSecretScope","containerName","initializeFileSystem","storageAccountName","tenantId"]}}},"databricks:index/MountAdl:MountAdl":{"properties":{"clientId":{"type":"string","willReplaceOnChanges":true},"clientSecretKey":{"type":"string","willReplaceOnChanges":true},"clientSecretScope":{"type":"string","willReplaceOnChanges":true},"directory":{"type":"string","willReplaceOnChanges":true},"sparkConfPrefix":{"type":"string","willReplaceOnChanges":true},"storageResourceName":{"type":"string","willReplaceOnChanges":true},"tenantId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["clientId","clientSecretKey","clientSecretScope"],"language":{"nodejs":{"requiredOutputs":["clientId","clientSecretKey","clientSecretScope","storageResourceName","tenantId"]}}},"databricks:index/MountGs:MountGs":{"properties":{"bucketName":{"type":"string","willReplaceOnChanges":true},"serviceAccount":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["bucketName"]},"databricks:index/MountProviderConfig:MountProviderConfig":{"properties":{"workspaceId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/MountS3:MountS3":{"properties":{"bucketName":{"type":"string","willReplaceOnChanges":true},"instanceProfile":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["bucketName"]},"databricks:index/MountWasb:MountWasb":{"properties":{"authType":{"type":"string","willReplaceOnChanges":true},"containerName":{"type":"string","willReplaceOnChanges":true},"directory":{"type":"string","willReplaceOnChanges":true},"storageAccountName":{"type":"string","willReplaceOnChanges":true},"tokenSecretKey":{"type":"string","willReplaceOnChanges":true},"tokenSecretScope":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["authType","tokenSecretKey","tokenSecretScope"],"language":{"nodejs":{"requiredOutputs":["authType","containerName","storageAccountName","tokenSecretKey","tokenSecretScope"]}}},"databricks:index/MwsCustomerManagedKeysAwsKeyInfo:MwsCustomerManagedKeysAwsKeyInfo":{"properties":{"keyAlias":{"type":"string","description":"The AWS KMS key 
alias.\n","willReplaceOnChanges":true},"keyArn":{"type":"string","description":"The AWS KMS key's Amazon Resource Name (ARN).\n","willReplaceOnChanges":true},"keyRegion":{"type":"string","description":"(Computed) The AWS region in which KMS key is deployed to. This is not required.\n"}},"type":"object","required":["keyArn"],"language":{"nodejs":{"requiredOutputs":["keyArn","keyRegion"]}}},"databricks:index/MwsCustomerManagedKeysGcpKeyInfo:MwsCustomerManagedKeysGcpKeyInfo":{"properties":{"kmsKeyId":{"type":"string","description":"The GCP KMS key's resource name.\n","willReplaceOnChanges":true}},"type":"object","required":["kmsKeyId"]},"databricks:index/MwsNetworkConnectivityConfigEgressConfig:MwsNetworkConnectivityConfigEgressConfig":{"properties":{"defaultRules":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRules:MwsNetworkConnectivityConfigEgressConfigDefaultRules","description":"block describing network connectivity rules that are applied by default without resource specific configurations.  Consists of the following fields:\n"},"targetRules":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRules:MwsNetworkConnectivityConfigEgressConfigTargetRules","description":"block describing network connectivity rules that configured for each destinations. These rules override default rules.  Consists of the following fields:\n"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRules:MwsNetworkConnectivityConfigEgressConfigDefaultRules":{"properties":{"awsStableIpRule":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule:MwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule","description":"(AWS only) - block with information about stable AWS IP CIDR blocks. You can use these to configure the firewall of your resources to allow traffic from your Databricks workspace.  Consists of the following fields:\n"},"azureServiceEndpointRule":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule:MwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule","description":"(Azure only) - block with information about stable Azure service endpoints. You can configure the firewall of your Azure resources to allow traffic from your Databricks serverless compute resources.  
Consists of the following fields:\n"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule:MwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule":{"properties":{"cidrBlocks":{"type":"array","items":{"type":"string"},"description":"list of IP CIDR blocks.\n"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule:MwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule":{"properties":{"subnets":{"type":"array","items":{"type":"string"},"description":"list of subnets from which Databricks network traffic originates when accessing your Azure resources.\n"},"targetRegion":{"type":"string","description":"the Azure region in which this service endpoint rule applies.\n"},"targetServices":{"type":"array","items":{"type":"string"},"description":"the Azure services to which this service endpoint rule applies.\n"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRules:MwsNetworkConnectivityConfigEgressConfigTargetRules":{"properties":{"awsPrivateEndpointRules":{"type":"array","items":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule:MwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule"},"description":"(AWS only) - list containing information about configured AWS Private Endpoints.\n"},"azurePrivateEndpointRules":{"type":"array","items":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule:MwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule"},"description":"(Azure only) - list containing information about configured Azure Private Endpoints.\n"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule:MwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule":{"properties":{"accountId":{"type":"string"},"connectionState":{"type":"string"},"creationTime":{"type":"integer","description":"time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean"},"deactivatedAt":{"type":"integer"},"domainNames":{"type":"array","items":{"type":"string"}},"enabled":{"type":"boolean"},"endpointService":{"type":"string"},"errorMessage":{"type":"string"},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account\n"},"resourceNames":{"type":"array","items":{"type":"string"}},"ruleId":{"type":"string"},"updatedTime":{"type":"integer","description":"time in epoch milliseconds when this object was updated.\n"},"vpcEndpointId":{"type":"string"}},"type":"object"},"databricks:index/MwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule:MwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule":{"properties":{"connectionState":{"type":"string"},"creationTime":{"type":"integer","description":"time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean"},"deactivatedAt":{"type":"integer"},"domainNames":{"type":"array","items":{"type":"string"}},"endpointName":{"type":"string"},"errorMessage":{"type":"string"},"groupId":{"type":"string"},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks 
Account\n"},"resourceId":{"type":"string"},"ruleId":{"type":"string"},"updatedTime":{"type":"integer","description":"time in epoch milliseconds when this object was updated.\n"}},"type":"object"},"databricks:index/MwsNetworksErrorMessage:MwsNetworksErrorMessage":{"properties":{"errorMessage":{"type":"string"},"errorType":{"type":"string"}},"type":"object"},"databricks:index/MwsNetworksGcpNetworkInfo:MwsNetworksGcpNetworkInfo":{"properties":{"networkProjectId":{"type":"string","description":"The Google Cloud project ID of the VPC network.\n","willReplaceOnChanges":true},"podIpRangeName":{"type":"string","deprecationMessage":"gcp_network_info.pod_ip_range_name is deprecated and will be removed in a future release. For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-vpc"},"serviceIpRangeName":{"type":"string","deprecationMessage":"gcp_network_info.service_ip_range_name is deprecated and will be removed in a future release. For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-vpc"},"subnetId":{"type":"string","description":"The ID of the subnet associated with this network.\n","willReplaceOnChanges":true},"subnetRegion":{"type":"string","description":"The Google Cloud region of the workspace data plane. For example, `us-east4`.\n","willReplaceOnChanges":true},"vpcId":{"type":"string","description":"The ID of the VPC associated with this network. VPC IDs can be used in multiple network configurations.\n","willReplaceOnChanges":true}},"type":"object","required":["networkProjectId","subnetId","subnetRegion","vpcId"]},"databricks:index/MwsNetworksVpcEndpoints:MwsNetworksVpcEndpoints":{"properties":{"dataplaneRelays":{"type":"array","items":{"type":"string"}},"restApis":{"type":"array","items":{"type":"string"}}},"type":"object","required":["dataplaneRelays","restApis"]},"databricks:index/MwsVpcEndpointGcpVpcEndpointInfo:MwsVpcEndpointGcpVpcEndpointInfo":{"properties":{"endpointRegion":{"type":"string","description":"Region of the PSC endpoint.\n","willReplaceOnChanges":true},"projectId":{"type":"string","description":"The Google Cloud project ID of the VPC network where the PSC connection resides.\n","willReplaceOnChanges":true},"pscConnectionId":{"type":"string","description":"The unique ID of this PSC connection.\n"},"pscEndpointName":{"type":"string","description":"The name of the PSC endpoint in the Google Cloud project.\n","willReplaceOnChanges":true},"serviceAttachmentId":{"type":"string","description":"The service attachment this PSC connection connects to.\n"}},"type":"object","required":["endpointRegion","projectId","pscEndpointName"],"language":{"nodejs":{"requiredOutputs":["endpointRegion","projectId","pscConnectionId","pscEndpointName","serviceAttachmentId"]}}},"databricks:index/MwsWorkspacesCloudResourceContainer:MwsWorkspacesCloudResourceContainer":{"properties":{"gcp":{"$ref":"#/types/databricks:index/MwsWorkspacesCloudResourceContainerGcp:MwsWorkspacesCloudResourceContainerGcp","description":"A block that consists of the following field:\n"}},"type":"object","required":["gcp"]},"databricks:index/MwsWorkspacesCloudResourceContainerGcp:MwsWorkspacesCloudResourceContainerGcp":{"properties":{"projectId":{"type":"string","description":"The Google Cloud project ID, which the workspace uses to instantiate cloud resources for your 
workspace.\n"}},"type":"object","required":["projectId"]},"databricks:index/MwsWorkspacesExternalCustomerInfo:MwsWorkspacesExternalCustomerInfo":{"properties":{"authoritativeUserEmail":{"type":"string"},"authoritativeUserFullName":{"type":"string"},"customerName":{"type":"string"}},"type":"object","required":["authoritativeUserEmail","authoritativeUserFullName","customerName"]},"databricks:index/MwsWorkspacesGcpManagedNetworkConfig:MwsWorkspacesGcpManagedNetworkConfig":{"properties":{"gkeClusterPodIpRange":{"type":"string","deprecationMessage":"gcp_managed_network_config.gke_cluster_pod_ip_range is deprecated and will be removed in a future release. For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-databricks-workspace"},"gkeClusterServiceIpRange":{"type":"string","deprecationMessage":"gcp_managed_network_config.gke_cluster_service_ip_range is deprecated and will be removed in a future release. For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-databricks-workspace"},"subnetCidr":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["subnetCidr"]},"databricks:index/MwsWorkspacesGkeConfig:MwsWorkspacesGkeConfig":{"properties":{"connectivityType":{"type":"string"},"masterIpRange":{"type":"string"}},"type":"object"},"databricks:index/MwsWorkspacesToken:MwsWorkspacesToken":{"properties":{"comment":{"type":"string","description":"Comment, that will appear in \"User Settings / Access Tokens\" page on Workspace UI. By default it's \"Pulumi PAT\".\n"},"lifetimeSeconds":{"type":"integer","description":"Token expiry lifetime. By default its 2592000 (30 days).\n"},"tokenId":{"type":"string"},"tokenValue":{"type":"string","secret":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["tokenId","tokenValue"]}}},"databricks:index/NotebookProviderConfig:NotebookProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/NotificationDestinationConfig:NotificationDestinationConfig":{"properties":{"email":{"$ref":"#/types/databricks:index/NotificationDestinationConfigEmail:NotificationDestinationConfigEmail","description":"The email configuration of the Notification Destination. It must contain the following:\n","willReplaceOnChanges":true},"genericWebhook":{"$ref":"#/types/databricks:index/NotificationDestinationConfigGenericWebhook:NotificationDestinationConfigGenericWebhook","description":"The Generic Webhook configuration of the Notification Destination. It must contain the following:\n","willReplaceOnChanges":true},"microsoftTeams":{"$ref":"#/types/databricks:index/NotificationDestinationConfigMicrosoftTeams:NotificationDestinationConfigMicrosoftTeams","description":"The Microsoft Teams configuration of the Notification Destination. It must contain the following:\n","willReplaceOnChanges":true},"pagerduty":{"$ref":"#/types/databricks:index/NotificationDestinationConfigPagerduty:NotificationDestinationConfigPagerduty","description":"The PagerDuty configuration of the Notification Destination. 
It must contain the following:\n","willReplaceOnChanges":true},"slack":{"$ref":"#/types/databricks:index/NotificationDestinationConfigSlack:NotificationDestinationConfigSlack","description":"The Slack configuration of the Notification Destination. It must contain the following:\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/NotificationDestinationConfigEmail:NotificationDestinationConfigEmail":{"properties":{"addresses":{"type":"array","items":{"type":"string"},"description":"The list of email addresses to send notifications to.\n"}},"type":"object"},"databricks:index/NotificationDestinationConfigGenericWebhook:NotificationDestinationConfigGenericWebhook":{"properties":{"password":{"type":"string","description":"The password for basic authentication.\n","secret":true},"passwordSet":{"type":"boolean"},"url":{"type":"string","description":"The Generic Webhook URL.\n","secret":true},"urlSet":{"type":"boolean"},"username":{"type":"string","description":"The username for basic authentication.\n","secret":true},"usernameSet":{"type":"boolean"}},"type":"object","language":{"nodejs":{"requiredOutputs":["passwordSet","urlSet","usernameSet"]}}},"databricks:index/NotificationDestinationConfigMicrosoftTeams:NotificationDestinationConfigMicrosoftTeams":{"properties":{"appId":{"type":"string","description":"App ID for Microsoft Teams App.\n","secret":true},"appIdSet":{"type":"boolean"},"authSecret":{"type":"string","description":"Secret for Microsoft Teams App authentication.\n","secret":true},"authSecretSet":{"type":"boolean"},"channelUrl":{"type":"string","description":"Channel URL for Microsoft Teams App.\n","secret":true},"channelUrlSet":{"type":"boolean"},"tenantId":{"type":"string","description":"Tenant ID for Microsoft Teams App.\n","secret":true},"tenantIdSet":{"type":"boolean"},"url":{"type":"string","description":"The Microsoft Teams webhook URL.\n","secret":true},"urlSet":{"type":"boolean"}},"type":"object","language":{"nodejs":{"requiredOutputs":["urlSet"]}}},"databricks:index/NotificationDestinationConfigPagerduty:NotificationDestinationConfigPagerduty":{"properties":{"integrationKey":{"type":"string","description":"The PagerDuty integration key.\n","secret":true},"integrationKeySet":{"type":"boolean"}},"type":"object","language":{"nodejs":{"requiredOutputs":["integrationKeySet"]}}},"databricks:index/NotificationDestinationConfigSlack:NotificationDestinationConfigSlack":{"properties":{"channelId":{"type":"string","description":"Slack channel ID for notifications.\n","secret":true},"channelIdSet":{"type":"boolean"},"oauthToken":{"type":"string","description":"OAuth token for Slack authentication.\n","secret":true},"oauthTokenSet":{"type":"boolean"},"url":{"type":"string","description":"The Slack webhook URL.\n","secret":true},"urlSet":{"type":"boolean"}},"type":"object","language":{"nodejs":{"requiredOutputs":["channelIdSet","oauthTokenSet","urlSet"]}}},"databricks:index/NotificationDestinationProviderConfig:NotificationDestinationProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n\n\u003e **NOTE** If the type of notification destination is changed, the existing notification destination will be deleted and a new notification destination will be created with the new type.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/OboTokenProviderConfig:OboTokenProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/OnlineStoreProviderConfig:OnlineStoreProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/OnlineTableProviderConfig:OnlineTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/OnlineTableSpec:OnlineTableSpec":{"properties":{"performFullCopy":{"type":"boolean","description":"Whether to create a full-copy pipeline -- a pipeline that stops after creating a full copy of the source table upon initialization and does not process any change data feeds (CDFs) afterwards. The pipeline can still be manually triggered afterwards, but it always performs a full copy of the source table and there are no incremental updates. This mode is useful for syncing views or tables without CDFs to online tables. Note that the full-copy pipeline only supports the \"triggered\" scheduling policy.\n","willReplaceOnChanges":true},"pipelineId":{"type":"string","description":"ID of the associated Delta Live Table pipeline.\n"},"primaryKeyColumns":{"type":"array","items":{"type":"string"},"description":"list of the columns comprising the primary key.\n","willReplaceOnChanges":true},"runContinuously":{"$ref":"#/types/databricks:index/OnlineTableSpecRunContinuously:OnlineTableSpecRunContinuously","description":"empty block that specifies that the pipeline runs continuously after generating the initial data.  
Conflicts with \u003cspan pulumi-lang-nodejs=\"`runTriggered`\" pulumi-lang-dotnet=\"`RunTriggered`\" pulumi-lang-go=\"`runTriggered`\" pulumi-lang-python=\"`run_triggered`\" pulumi-lang-yaml=\"`runTriggered`\" pulumi-lang-java=\"`runTriggered`\"\u003e`run_triggered`\u003c/span\u003e.\n","willReplaceOnChanges":true},"runTriggered":{"$ref":"#/types/databricks:index/OnlineTableSpecRunTriggered:OnlineTableSpecRunTriggered","description":"empty block that specifies that pipeline stops after generating the initial data and can be triggered later (manually, through a cron job or through data triggers).\n","willReplaceOnChanges":true},"sourceTableFullName":{"type":"string","description":"full name of the source table.\n","willReplaceOnChanges":true},"timeseriesKey":{"type":"string","description":"Time series key to deduplicate (tie-break) rows with the same primary key.\n","willReplaceOnChanges":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["pipelineId"]}}},"databricks:index/OnlineTableSpecRunContinuously:OnlineTableSpecRunContinuously":{"type":"object"},"databricks:index/OnlineTableSpecRunTriggered:OnlineTableSpecRunTriggered":{"type":"object"},"databricks:index/OnlineTableStatus:OnlineTableStatus":{"properties":{"continuousUpdateStatus":{"$ref":"#/types/databricks:index/OnlineTableStatusContinuousUpdateStatus:OnlineTableStatusContinuousUpdateStatus"},"detailedState":{"type":"string","description":"The state of the online table.\n"},"failedStatus":{"$ref":"#/types/databricks:index/OnlineTableStatusFailedStatus:OnlineTableStatusFailedStatus"},"message":{"type":"string","description":"A text description of the current state of the online table.\n"},"provisioningStatus":{"$ref":"#/types/databricks:index/OnlineTableStatusProvisioningStatus:OnlineTableStatusProvisioningStatus"},"triggeredUpdateStatus":{"$ref":"#/types/databricks:index/OnlineTableStatusTriggeredUpdateStatus:OnlineTableStatusTriggeredUpdateStatus"}},"type":"object"},"databricks:index/OnlineTableStatusContinuousUpdateStatus:OnlineTableStatusContinuousUpdateStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/OnlineTableStatusContinuousUpdateStatusInitialPipelineSyncProgress:OnlineTableStatusContinuousUpdateStatusInitialPipelineSyncProgress"},"lastProcessedCommitVersion":{"type":"integer"},"timestamp":{"type":"string"}},"type":"object"},"databricks:index/OnlineTableStatusContinuousUpdateStatusInitialPipelineSyncProgress:OnlineTableStatusContinuousUpdateStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number"},"latestVersionCurrentlyProcessing":{"type":"integer"},"syncProgressCompletion":{"type":"number"},"syncedRowCount":{"type":"integer"},"totalRowCount":{"type":"integer"}},"type":"object"},"databricks:index/OnlineTableStatusFailedStatus:OnlineTableStatusFailedStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer"},"timestamp":{"type":"string"}},"type":"object"},"databricks:index/OnlineTableStatusProvisioningStatus:OnlineTableStatusProvisioningStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/OnlineTableStatusProvisioningStatusInitialPipelineSyncProgress:OnlineTableStatusProvisioningStatusInitialPipelineSyncProgress"}},"type":"object"},"databricks:index/OnlineTableStatusProvisioningStatusInitialPipelineSyncProgress:OnlineTableStatusProvisioningStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number"},"latestVersionCurrentlyProcessing":{"type":"integer
"},"syncProgressCompletion":{"type":"number"},"syncedRowCount":{"type":"integer"},"totalRowCount":{"type":"integer"}},"type":"object"},"databricks:index/OnlineTableStatusTriggeredUpdateStatus:OnlineTableStatusTriggeredUpdateStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer"},"timestamp":{"type":"string"},"triggeredUpdateProgress":{"$ref":"#/types/databricks:index/OnlineTableStatusTriggeredUpdateStatusTriggeredUpdateProgress:OnlineTableStatusTriggeredUpdateStatusTriggeredUpdateProgress"}},"type":"object"},"databricks:index/OnlineTableStatusTriggeredUpdateStatusTriggeredUpdateProgress:OnlineTableStatusTriggeredUpdateStatusTriggeredUpdateProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number"},"latestVersionCurrentlyProcessing":{"type":"integer"},"syncProgressCompletion":{"type":"number"},"syncedRowCount":{"type":"integer"},"totalRowCount":{"type":"integer"}},"type":"object"},"databricks:index/PermissionAssignmentProviderConfig:PermissionAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PermissionsAccessControl:PermissionsAccessControl":{"properties":{"groupName":{"type":"string","description":"name of the group. We recommend setting permissions on groups.\n"},"permissionLevel":{"type":"string","description":"permission level according to specific resource. See examples above for the reference.\n\nExactly one of the below arguments is required:\n"},"servicePrincipalName":{"type":"string","description":"Application ID (**not service principal name!**) of the service_principal.\n"},"userName":{"type":"string","description":"name of the user.\n"}},"type":"object"},"databricks:index/PermissionsProviderConfig:PermissionsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PipelineCluster:PipelineCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/PipelineClusterAutoscale:PipelineClusterAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/PipelineClusterAwsAttributes:PipelineClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/PipelineClusterAzureAttributes:PipelineClusterAzureAttributes"},"clusterLogConf":{"$ref":"#/types/databricks:index/PipelineClusterClusterLogConf:PipelineClusterClusterLogConf"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"driverInstancePoolId":{"type":"string"},"driverNodeTypeId":{"type":"string"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/PipelineClusterGcpAttributes:PipelineClusterGcpAttributes"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineClusterInitScript:PipelineClusterInitScript"}},"instancePoolId":{"type":"string"},"label":{"type":"string"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sshPublicKeys":{"type":"array","items":{"type":"string"}}},"type":"object","language":{"nodejs":{"requiredOutputs":["driverNodeTypeId","enableLocalDiskEncryption","nodeTypeId"]}}},"databricks:index/PipelineClusterAutoscale:PipelineClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"},"mode":{"type":"string"}},"type":"object","required":["maxWorkers","minWorkers"]},"databricks:index/PipelineClusterAwsAttributes:PipelineClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/PipelineClusterAzureAttributes:PipelineClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/PipelineClusterAzureAttributesLogAnalyticsInfo:PipelineClusterAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/PipelineClusterAzureAttributesLogAnalyticsInfo:PipelineClusterAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/PipelineClusterClusterLogConf:PipelineClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/PipelineClusterClusterLogConfDbfs:PipelineClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/PipelineClusterClusterLogConfS3:PipelineClusterClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/PipelineClusterClusterLogConfVolumes:PipelineClusterClusterLogConfVolumes"}},"type":"object"},"databricks:index/PipelineClusterClusterLogConfDbfs:PipelineClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterCluste
rLogConfS3:PipelineClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterClusterLogConfVolumes:PipelineClusterClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterGcpAttributes:PipelineClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/PipelineClusterInitScript:PipelineClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptAbfss:PipelineClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptDbfs:PipelineClusterInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 'dbfs'."},"file":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptFile:PipelineClusterInitScriptFile","description":"specifies path to a file in Databricks Workspace to include as source. Actual path is specified as \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e attribute inside the block.\n"},"gcs":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptGcs:PipelineClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptS3:PipelineClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptVolumes:PipelineClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/PipelineClusterInitScriptWorkspace:PipelineClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/PipelineClusterInitScriptAbfss:PipelineClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptDbfs:PipelineClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptFile:PipelineClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptGcs:PipelineClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptS3:PipelineClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptVolumes:PipelineClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineClusterInitScriptWorkspace:PipelineClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/PipelineDeployment:PipelineDeployment":{"p
roperties":{"kind":{"type":"string","description":"The deployment method that manages the pipeline.\n"},"metadataFilePath":{"type":"string","description":"The path to the file containing metadata about the deployment.\n"}},"type":"object","required":["kind"]},"databricks:index/PipelineEnvironment:PipelineEnvironment":{"properties":{"dependencies":{"type":"array","items":{"type":"string"},"description":"a list of pip dependencies, as supported by the version of pip in this environment. Each dependency is a [pip requirement file line](https://pip.pypa.io/en/stable/reference/requirements-file-format/).  See [API docs](https://docs.databricks.com/api/azure/workspace/pipelines/create#environment-dependencies) for more information.\n\nExample:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Pipeline(\"this\", {\n    name: \"Serverless demo\",\n    serverless: true,\n    catalog: \"main\",\n    schema: \"ldp_demo\",\n    environment: {\n        dependencies: [\n            \"foo==0.0.1\",\n            \"-r /Workspace/Users/user.name/my-pipeline/requirements.txt\",\n            \"/Volumes/main/default/libs/my_lib.whl\",\n        ],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Pipeline(\"this\",\n    name=\"Serverless demo\",\n    serverless=True,\n    catalog=\"main\",\n    schema=\"ldp_demo\",\n    environment={\n        \"dependencies\": [\n            \"foo==0.0.1\",\n            \"-r /Workspace/Users/user.name/my-pipeline/requirements.txt\",\n            \"/Volumes/main/default/libs/my_lib.whl\",\n        ],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Pipeline(\"this\", new()\n    {\n        Name = \"Serverless demo\",\n        Serverless = true,\n        Catalog = \"main\",\n        Schema = \"ldp_demo\",\n        Environment = new Databricks.Inputs.PipelineEnvironmentArgs\n        {\n            Dependencies = new[]\n            {\n                \"foo==0.0.1\",\n                \"-r /Workspace/Users/user.name/my-pipeline/requirements.txt\",\n                \"/Volumes/main/default/libs/my_lib.whl\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPipeline(ctx, \"this\", \u0026databricks.PipelineArgs{\n\t\t\tName:       pulumi.String(\"Serverless demo\"),\n\t\t\tServerless: pulumi.Bool(true),\n\t\t\tCatalog:    pulumi.String(\"main\"),\n\t\t\tSchema:     pulumi.String(\"ldp_demo\"),\n\t\t\tEnvironment: \u0026databricks.PipelineEnvironmentArgs{\n\t\t\t\tDependencies: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"foo==0.0.1\"),\n\t\t\t\t\tpulumi.String(\"-r /Workspace/Users/user.name/my-pipeline/requirements.txt\"),\n\t\t\t\t\tpulumi.String(\"/Volumes/main/default/libs/my_lib.whl\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Pipeline;\nimport 
com.pulumi.databricks.PipelineArgs;\nimport com.pulumi.databricks.inputs.PipelineEnvironmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Pipeline(\"this\", PipelineArgs.builder()\n            .name(\"Serverless demo\")\n            .serverless(true)\n            .catalog(\"main\")\n            .schema(\"ldp_demo\")\n            .environment(PipelineEnvironmentArgs.builder()\n                .dependencies(                \n                    \"foo==0.0.1\",\n                    \"-r /Workspace/Users/user.name/my-pipeline/requirements.txt\",\n                    \"/Volumes/main/default/libs/my_lib.whl\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Pipeline\n    properties:\n      name: Serverless demo\n      serverless: true\n      catalog: main\n      schema: ldp_demo\n      environment:\n        dependencies:\n          - foo==0.0.1\n          - -r /Workspace/Users/user.name/my-pipeline/requirements.txt\n          - /Volumes/main/default/libs/my_lib.whl\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"}},"type":"object"},"databricks:index/PipelineEventLog:PipelineEventLog":{"properties":{"catalog":{"type":"string","description":"The UC catalog the event log is published under.\n"},"name":{"type":"string","description":"The table name the event log is published to in UC.\n"},"schema":{"type":"string","description":"The UC schema the event log is published under.\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredOutputs":["catalog","name","schema"]}}},"databricks:index/PipelineFilters:PipelineFilters":{"properties":{"excludes":{"type":"array","items":{"type":"string"},"description":"Paths to exclude.\n"},"includes":{"type":"array","items":{"type":"string"},"description":"Paths to include.\n"}},"type":"object"},"databricks:index/PipelineGatewayDefinition:PipelineGatewayDefinition":{"properties":{"connectionId":{"type":"string","description":"Deprecated, Immutable. The Unity Catalog connection this gateway pipeline uses to communicate with the source. *Use \u003cspan pulumi-lang-nodejs=\"`connectionName`\" pulumi-lang-dotnet=\"`ConnectionName`\" pulumi-lang-go=\"`connectionName`\" pulumi-lang-python=\"`connection_name`\" pulumi-lang-yaml=\"`connectionName`\" pulumi-lang-java=\"`connectionName`\"\u003e`connection_name`\u003c/span\u003e instead!*\n","willReplaceOnChanges":true},"connectionName":{"type":"string","description":"Immutable. The Unity Catalog connection that this gateway pipeline uses to communicate with the source.\n","willReplaceOnChanges":true},"connectionParameters":{"$ref":"#/types/databricks:index/PipelineGatewayDefinitionConnectionParameters:PipelineGatewayDefinitionConnectionParameters"},"gatewayStorageCatalog":{"type":"string","description":"Required, Immutable. The name of the catalog for the gateway pipeline's storage location.\n","willReplaceOnChanges":true},"gatewayStorageName":{"type":"string","description":"Required. The Unity Catalog-compatible naming for the gateway storage location. This is the destination to use for the data that is extracted by the gateway. 
Lakeflow Declarative Pipelines system will automatically create the storage location under the catalog and schema.\n"},"gatewayStorageSchema":{"type":"string","description":"Required, Immutable. The name of the schema for the gateway pipelines's storage location.\n","willReplaceOnChanges":true}},"type":"object","required":["connectionName","gatewayStorageCatalog","gatewayStorageSchema"]},"databricks:index/PipelineGatewayDefinitionConnectionParameters:PipelineGatewayDefinitionConnectionParameters":{"properties":{"sourceCatalog":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinition:PipelineIngestionDefinition":{"properties":{"connectionName":{"type":"string","willReplaceOnChanges":true},"fullRefreshWindow":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionFullRefreshWindow:PipelineIngestionDefinitionFullRefreshWindow"},"ingestFromUcForeignCatalog":{"type":"boolean","willReplaceOnChanges":true},"ingestionGatewayId":{"type":"string","willReplaceOnChanges":true},"netsuiteJarPath":{"type":"string"},"objects":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObject:PipelineIngestionDefinitionObject"}},"sourceConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionSourceConfiguration:PipelineIngestionDefinitionSourceConfiguration"}},"sourceType":{"type":"string"},"tableConfiguration":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionTableConfiguration:PipelineIngestionDefinitionTableConfiguration"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionFullRefreshWindow:PipelineIngestionDefinitionFullRefreshWindow":{"properties":{"daysOfWeeks":{"type":"array","items":{"type":"string"}},"startHour":{"type":"integer"},"timeZoneId":{"type":"string"}},"type":"object","required":["startHour"]},"databricks:index/PipelineIngestionDefinitionObject:PipelineIngestionDefinitionObject":{"properties":{"report":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReport:PipelineIngestionDefinitionObjectReport"},"schema":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchema:PipelineIngestionDefinitionObjectSchema","description":"The default schema (database) where tables are read from or published to. 
The presence of this attribute implies that the pipeline is in direct publishing mode.\n"},"table":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectTable:PipelineIngestionDefinitionObjectTable"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectReport:PipelineIngestionDefinitionObjectReport":{"properties":{"destinationCatalog":{"type":"string"},"destinationSchema":{"type":"string"},"destinationTable":{"type":"string"},"sourceUrl":{"type":"string"},"tableConfiguration":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReportTableConfiguration:PipelineIngestionDefinitionObjectReportTableConfiguration"}},"type":"object","required":["destinationCatalog","destinationSchema","sourceUrl"]},"databricks:index/PipelineIngestionDefinitionObjectReportTableConfiguration:PipelineIngestionDefinitionObjectReportTableConfiguration":{"properties":{"autoFullRefreshPolicy":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectReportTableConfigurationAutoFullRefreshPolicy"},"excludeColumns":{"type":"array","items":{"type":"string"}},"includeColumns":{"type":"array","items":{"type":"string"}},"primaryKeys":{"type":"array","items":{"type":"string"}},"queryBasedConnectorConfig":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectReportTableConfigurationQueryBasedConnectorConfig"},"rowFilter":{"type":"string"},"salesforceIncludeFormulaFields":{"type":"boolean"},"scdType":{"type":"string"},"sequenceBies":{"type":"array","items":{"type":"string"}},"workdayReportParameters":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParameters"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectReportTableConfigurationAutoFullRefreshPolicy":{"properties":{"enabled":{"type":"boolean"},"minIntervalHours":{"type":"integer"}},"type":"object","required":["enabled"]},"databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectReportTableConfigurationQueryBasedConnectorConfig":{"properties":{"cursorColumns":{"type":"array","items":{"type":"string"}},"deletionCondition":{"type":"string"},"hardDeletionSyncMinIntervalInSeconds":{"type":"integer"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParameters":{"properties":{"incremental":{"type":"boolean"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"reportParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParametersReportParameter"}}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectReportTableConfigurationWorkdayReportParametersReportParameter":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectSchema:Pipeline
IngestionDefinitionObjectSchema":{"properties":{"destinationCatalog":{"type":"string"},"destinationSchema":{"type":"string"},"sourceCatalog":{"type":"string"},"sourceSchema":{"type":"string"},"tableConfiguration":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfiguration:PipelineIngestionDefinitionObjectSchemaTableConfiguration"}},"type":"object","required":["destinationCatalog","destinationSchema","sourceSchema"]},"databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfiguration:PipelineIngestionDefinitionObjectSchemaTableConfiguration":{"properties":{"autoFullRefreshPolicy":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectSchemaTableConfigurationAutoFullRefreshPolicy"},"excludeColumns":{"type":"array","items":{"type":"string"}},"includeColumns":{"type":"array","items":{"type":"string"}},"primaryKeys":{"type":"array","items":{"type":"string"}},"queryBasedConnectorConfig":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectSchemaTableConfigurationQueryBasedConnectorConfig"},"rowFilter":{"type":"string"},"salesforceIncludeFormulaFields":{"type":"boolean"},"scdType":{"type":"string"},"sequenceBies":{"type":"array","items":{"type":"string"}},"workdayReportParameters":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParameters"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectSchemaTableConfigurationAutoFullRefreshPolicy":{"properties":{"enabled":{"type":"boolean"},"minIntervalHours":{"type":"integer"}},"type":"object","required":["enabled"]},"databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectSchemaTableConfigurationQueryBasedConnectorConfig":{"properties":{"cursorColumns":{"type":"array","items":{"type":"string"}},"deletionCondition":{"type":"string"},"hardDeletionSyncMinIntervalInSeconds":{"type":"integer"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParameters":{"properties":{"incremental":{"type":"boolean"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"reportParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParametersReportParameter"}}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectSchemaTableConfigurationWorkdayReportParametersReportParameter":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectTable:PipelineIngestionDefinitionObjectTable":{"properties":{"destinationCatalog":{"type":"string"},"destinationSchema":{"type":"string"},"destinationTable":{"type":"string"},"sourceCatalog":{"type":"string"},"sourceSchema":{"type":"string"},"sourceTable":{"type":"string"},"tableConfiguration":{"$ref":"#/typ
es/databricks:index/PipelineIngestionDefinitionObjectTableTableConfiguration:PipelineIngestionDefinitionObjectTableTableConfiguration"}},"type":"object","required":["destinationCatalog","destinationSchema","sourceTable"]},"databricks:index/PipelineIngestionDefinitionObjectTableTableConfiguration:PipelineIngestionDefinitionObjectTableTableConfiguration":{"properties":{"autoFullRefreshPolicy":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectTableTableConfigurationAutoFullRefreshPolicy"},"excludeColumns":{"type":"array","items":{"type":"string"}},"includeColumns":{"type":"array","items":{"type":"string"}},"primaryKeys":{"type":"array","items":{"type":"string"}},"queryBasedConnectorConfig":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectTableTableConfigurationQueryBasedConnectorConfig"},"rowFilter":{"type":"string"},"salesforceIncludeFormulaFields":{"type":"boolean"},"scdType":{"type":"string"},"sequenceBies":{"type":"array","items":{"type":"string"}},"workdayReportParameters":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParameters"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionObjectTableTableConfigurationAutoFullRefreshPolicy":{"properties":{"enabled":{"type":"boolean"},"minIntervalHours":{"type":"integer"}},"type":"object","required":["enabled"]},"databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionObjectTableTableConfigurationQueryBasedConnectorConfig":{"properties":{"cursorColumns":{"type":"array","items":{"type":"string"}},"deletionCondition":{"type":"string"},"hardDeletionSyncMinIntervalInSeconds":{"type":"integer"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParameters":{"properties":{"incremental":{"type":"boolean"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"reportParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParametersReportParameter"}}},"type":"object"},"databricks:index/PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionObjectTableTableConfigurationWorkdayReportParametersReportParameter":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionSourceConfiguration:PipelineIngestionDefinitionSourceConfiguration":{"properties":{"catalog":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalog:PipelineIngestionDefinitionSourceConfigurationCatalog","description":"The name of default catalog in Unity Catalog. 
*Change of this parameter forces recreation of the pipeline if you switch from \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e or vice versa.  If pipeline was already created with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e set, the value could be changed.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e).\n"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalog:PipelineIngestionDefinitionSourceConfigurationCatalog":{"properties":{"postgres":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalogPostgres:PipelineIngestionDefinitionSourceConfigurationCatalogPostgres"},"sourceCatalog":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalogPostgres:PipelineIngestionDefinitionSourceConfigurationCatalogPostgres":{"properties":{"slotConfig":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalogPostgresSlotConfig:PipelineIngestionDefinitionSourceConfigurationCatalogPostgresSlotConfig"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionSourceConfigurationCatalogPostgresSlotConfig:PipelineIngestionDefinitionSourceConfigurationCatalogPostgresSlotConfig":{"properties":{"publicationName":{"type":"string"},"slotName":{"type":"string"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionTableConfiguration:PipelineIngestionDefinitionTableConfiguration":{"properties":{"autoFullRefreshPolicy":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionTableConfigurationAutoFullRefreshPolicy"},"excludeColumns":{"type":"array","items":{"type":"string"}},"includeColumns":{"type":"array","items":{"type":"string"}},"primaryKeys":{"type":"array","items":{"type":"string"}},"queryBasedConnectorConfig":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionTableConfigurationQueryBasedConnectorConfig:PipelineIngestionDefinitionTableConfigurationQueryBasedConnectorConfig"},"rowFilter":{"type":"string"},"salesforceIncludeFormulaFields":{"type":"boolean"},"scdType":{"type":"string"},"sequenceBies":{"type":"array","items":{"type":"string"}},"workdayReportParameters":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionTableConfigurationWorkdayReportParameters"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionTableConfigurationAutoFullRefreshPolicy:PipelineIngestionDefinitionTableConfigurationAutoFullRefreshPolicy":{"properties":{"enabled":{"type":"boolean"},"minIntervalHours":{"type":"integer"}},"type":"object","required":["enabled"]},"databricks:index/PipelineIngestionDefinitionTableConf
igurationQueryBasedConnectorConfig:PipelineIngestionDefinitionTableConfigurationQueryBasedConnectorConfig":{"properties":{"cursorColumns":{"type":"array","items":{"type":"string"}},"deletionCondition":{"type":"string"},"hardDeletionSyncMinIntervalInSeconds":{"type":"integer"}},"type":"object"},"databricks:index/PipelineIngestionDefinitionTableConfigurationWorkdayReportParameters:PipelineIngestionDefinitionTableConfigurationWorkdayReportParameters":{"properties":{"incremental":{"type":"boolean"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"reportParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineIngestionDefinitionTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionTableConfigurationWorkdayReportParametersReportParameter"}}},"type":"object"},"databricks:index/PipelineIngestionDefinitionTableConfigurationWorkdayReportParametersReportParameter:PipelineIngestionDefinitionTableConfigurationWorkdayReportParametersReportParameter":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/PipelineLatestUpdate:PipelineLatestUpdate":{"properties":{"creationTime":{"type":"string"},"state":{"type":"string"},"updateId":{"type":"string"}},"type":"object"},"databricks:index/PipelineLibrary:PipelineLibrary":{"properties":{"file":{"$ref":"#/types/databricks:index/PipelineLibraryFile:PipelineLibraryFile","description":"specifies path to a file in Databricks Workspace to include as source. Actual path is specified as \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e attribute inside the block.\n"},"glob":{"$ref":"#/types/databricks:index/PipelineLibraryGlob:PipelineLibraryGlob","description":"The unified field to include source code. Each entry should have the \u003cspan pulumi-lang-nodejs=\"`include`\" pulumi-lang-dotnet=\"`Include`\" pulumi-lang-go=\"`include`\" pulumi-lang-python=\"`include`\" pulumi-lang-yaml=\"`include`\" pulumi-lang-java=\"`include`\"\u003e`include`\u003c/span\u003e attribute that can specify a notebook path, a file path, or a folder path that ends `/**` (to include everything from that folder). This field cannot be used together with \u003cspan pulumi-lang-nodejs=\"`notebook`\" pulumi-lang-dotnet=\"`Notebook`\" pulumi-lang-go=\"`notebook`\" pulumi-lang-python=\"`notebook`\" pulumi-lang-yaml=\"`notebook`\" pulumi-lang-java=\"`notebook`\"\u003e`notebook`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`file`\" pulumi-lang-dotnet=\"`File`\" pulumi-lang-go=\"`file`\" pulumi-lang-python=\"`file`\" pulumi-lang-yaml=\"`file`\" pulumi-lang-java=\"`file`\"\u003e`file`\u003c/span\u003e.\n"},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/PipelineLibraryMaven:PipelineLibraryMaven"},"notebook":{"$ref":"#/types/databricks:index/PipelineLibraryNotebook:PipelineLibraryNotebook","description":"specifies path to a Databricks Notebook to include as source. 
Actual path is specified as \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e attribute inside the block.\n"},"whl":{"type":"string","deprecationMessage":"The 'whl' field is deprecated"}},"type":"object"},"databricks:index/PipelineLibraryFile:PipelineLibraryFile":{"properties":{"path":{"type":"string"}},"type":"object","required":["path"]},"databricks:index/PipelineLibraryGlob:PipelineLibraryGlob":{"properties":{"include":{"type":"string","description":"Paths to include.\n"}},"type":"object","required":["include"]},"databricks:index/PipelineLibraryMaven:PipelineLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/PipelineLibraryNotebook:PipelineLibraryNotebook":{"properties":{"path":{"type":"string"}},"type":"object","required":["path"]},"databricks:index/PipelineNotification:PipelineNotification":{"properties":{"alerts":{"type":"array","items":{"type":"string"},"description":"non-empty list of alert types. Right now following alert types are supported, consult documentation for actual list\n* `on-update-success` - a pipeline update completes successfully.\n* `on-update-failure` - a pipeline update fails with a retryable error.\n* `on-update-fatal-failure` - a pipeline update fails with a non-retryable (fatal) error.\n* `on-flow-failure` - a single data flow fails.\n"},"emailRecipients":{"type":"array","items":{"type":"string"},"description":"non-empty list of emails to notify.\n"}},"type":"object"},"databricks:index/PipelineProviderConfig:PipelineProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/PipelineRestartWindow:PipelineRestartWindow":{"properties":{"daysOfWeeks":{"type":"array","items":{"type":"string"}},"startHour":{"type":"integer"},"timeZoneId":{"type":"string"}},"type":"object","required":["startHour"]},"databricks:index/PipelineRunAs:PipelineRunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"The application ID of an active service principal. 
Setting this field requires the `servicePrincipal/user` role.\n\nExample:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Pipeline(\"this\", {runAs: {\n    servicePrincipalName: \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Pipeline(\"this\", run_as={\n    \"service_principal_name\": \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Pipeline(\"this\", new()\n    {\n        RunAs = new Databricks.Inputs.PipelineRunAsArgs\n        {\n            ServicePrincipalName = \"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPipeline(ctx, \"this\", \u0026databricks.PipelineArgs{\n\t\t\tRunAs: \u0026databricks.PipelineRunAsArgs{\n\t\t\t\tServicePrincipalName: pulumi.String(\"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Pipeline;\nimport com.pulumi.databricks.PipelineArgs;\nimport com.pulumi.databricks.inputs.PipelineRunAsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Pipeline(\"this\", PipelineArgs.builder()\n            .runAs(PipelineRunAsArgs.builder()\n                .servicePrincipalName(\"8d23ae77-912e-4a19-81e4-b9c3f5cc9349\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Pipeline\n    properties:\n      runAs:\n        servicePrincipalName: 8d23ae77-912e-4a19-81e4-b9c3f5cc9349\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"userName":{"type":"string","description":"The email of an active workspace user. Non-admin users can only set this field to their own email.\n"}},"type":"object"},"databricks:index/PipelineTrigger:PipelineTrigger":{"properties":{"cron":{"$ref":"#/types/databricks:index/PipelineTriggerCron:PipelineTriggerCron"},"manual":{"$ref":"#/types/databricks:index/PipelineTriggerManual:PipelineTriggerManual"}},"type":"object"},"databricks:index/PipelineTriggerCron:PipelineTriggerCron":{"properties":{"quartzCronSchedule":{"type":"string"},"timezoneId":{"type":"string"}},"type":"object"},"databricks:index/PipelineTriggerManual:PipelineTriggerManual":{"type":"object"},"databricks:index/PolicyInfoColumnMask:PolicyInfoColumnMask":{"properties":{"functionName":{"type":"string"},"onColumn":{"type":"string","description":"The alias of the column to be masked. 
The alias must refer to one of matched columns.\nThe values of the column is passed to the column mask function as the first argument.\nRequired on create and update\n"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/PolicyInfoColumnMaskUsing:PolicyInfoColumnMaskUsing"}}},"type":"object","required":["functionName","onColumn"]},"databricks:index/PolicyInfoColumnMaskUsing:PolicyInfoColumnMaskUsing":{"properties":{"alias":{"type":"string"},"constant":{"type":"string","description":"A constant literal\n"}},"type":"object"},"databricks:index/PolicyInfoMatchColumn:PolicyInfoMatchColumn":{"properties":{"alias":{"type":"string"},"condition":{"type":"string","description":"The condition expression used to match a table column\n"}},"type":"object"},"databricks:index/PolicyInfoProviderConfig:PolicyInfoProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PolicyInfoRowFilter:PolicyInfoRowFilter":{"properties":{"functionName":{"type":"string"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/PolicyInfoRowFilterUsing:PolicyInfoRowFilterUsing"}}},"type":"object","required":["functionName"]},"databricks:index/PolicyInfoRowFilterUsing:PolicyInfoRowFilterUsing":{"properties":{"alias":{"type":"string"},"constant":{"type":"string","description":"A constant literal\n"}},"type":"object"},"databricks:index/PostgresBranchProviderConfig:PostgresBranchProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PostgresBranchSpec:PostgresBranchSpec":{"properties":{"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"noExpiry":{"type":"boolean","description":"Explicitly disable expiration. When set to true, the branch will not expire.\nIf set to false, the request is invalid; provide either ttl or\u003cspan pulumi-lang-nodejs=\" expireTime \" pulumi-lang-dotnet=\" ExpireTime \" pulumi-lang-go=\" expireTime \" pulumi-lang-python=\" expire_time \" pulumi-lang-yaml=\" expireTime \" pulumi-lang-java=\" expireTime \"\u003e expire_time \u003c/span\u003einstead\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"ttl":{"type":"string","description":"Relative time-to-live duration. 
When set, the branch will expire at\u003cspan pulumi-lang-nodejs=\" creationTime \" pulumi-lang-dotnet=\" CreationTime \" pulumi-lang-go=\" creationTime \" pulumi-lang-python=\" creation_time \" pulumi-lang-yaml=\" creationTime \" pulumi-lang-java=\" creationTime \"\u003e creation_time \u003c/span\u003e+ ttl\n"}},"type":"object"},"databricks:index/PostgresBranchStatus:PostgresBranchStatus":{"properties":{"currentState":{"type":"string","description":"(string) - The branch's state, indicating if it is initializing, ready for use, or archived. Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"default":{"type":"boolean","description":"(boolean) - Whether the branch is the project's default branch\n"},"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"logicalSizeBytes":{"type":"integer","description":"(integer) - The logical size of the branch\n"},"pendingState":{"type":"string","description":"(string) - The pending state of the branch, if a state transition is in progress. Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"stateChangeTime":{"type":"string","description":"(string) - A timestamp indicating when the \u003cspan pulumi-lang-nodejs=\"`currentState`\" pulumi-lang-dotnet=\"`CurrentState`\" pulumi-lang-go=\"`currentState`\" pulumi-lang-python=\"`current_state`\" pulumi-lang-yaml=\"`currentState`\" pulumi-lang-java=\"`currentState`\"\u003e`current_state`\u003c/span\u003e began\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["currentState","default","expireTime","isProtected","logicalSizeBytes","pendingState","sourceBranch","sourceBranchLsn","sourceBranchTime","stateChangeTime"]}}},"databricks:index/PostgresEndpointProviderConfig:PostgresEndpointProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PostgresEndpointSpec:PostgresEndpointSpec":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. 
Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"noSuspension":{"type":"boolean","description":"When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"settings":{"$ref":"#/types/databricks:index/PostgresEndpointSpecSettings:PostgresEndpointSpecSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","required":["endpointType"]},"databricks:index/PostgresEndpointSpecSettings:PostgresEndpointSpecSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/PostgresEndpointStatus:PostgresEndpointStatus":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"currentState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"hosts":{"$ref":"#/types/databricks:index/PostgresEndpointStatusHosts:PostgresEndpointStatusHosts","description":"(EndpointHosts) - Contains host information for connecting to the endpoint\n"},"pendingState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"settings":{"$ref":"#/types/databricks:index/PostgresEndpointStatusSettings:PostgresEndpointStatusSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["autoscalingLimitMaxCu","autoscalingLimitMinCu","currentState","disabled","endpointType","hosts","pendingState","settings","suspendTimeoutDuration"]}}},"databricks:index/PostgresEndpointStatusHosts:PostgresEndpointStatusHosts":{"properties":{"host":{"type":"string","description":"(string) - The hostname to connect to this endpoint. For read-write endpoints, this is a read-write hostname which connects\nto the primary compute. For read-only endpoints, this is a read-only hostname which allows read-only operations\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["host"]}}},"databricks:index/PostgresEndpointStatusSettings:PostgresEndpointStatusSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/PostgresProjectProviderConfig:PostgresProjectProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/PostgresProjectSpec:PostgresProjectSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/PostgresProjectSpecCustomTag:PostgresProjectSpecCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/PostgresProjectSpecDefaultEndpointSettings:PostgresProjectSpecDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version number\n"}},"type":"object"},"databricks:index/PostgresProjectSpecCustomTag:PostgresProjectSpecCustomTag":{"properties":{"key":{"type":"string","description":"The key of the custom tag\n"},"value":{"type":"string","description":"The value of the custom tag\n"}},"type":"object"},"databricks:index/PostgresProjectSpecDefaultEndpointSettings:PostgresProjectSpecDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"The minimum number of Compute Units. Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/PostgresProjectStatus:PostgresProjectStatus":{"properties":{"branchLogicalSizeLimitBytes":{"type":"integer","description":"(integer) - The logical size limit for a branch\n"},"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/PostgresProjectStatusCustomTag:PostgresProjectStatusCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/PostgresProjectStatusDefaultEndpointSettings:PostgresProjectStatusDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"owner":{"type":"string","description":"(string) - The email of the project owner\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version 
number\n"},"syntheticStorageSizeBytes":{"type":"integer","description":"(integer) - The current space occupied by the project in storage\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["branchLogicalSizeLimitBytes","budgetPolicyId","customTags","defaultEndpointSettings","displayName","historyRetentionDuration","owner","pgVersion","syntheticStorageSizeBytes"]}}},"databricks:index/PostgresProjectStatusCustomTag:PostgresProjectStatusCustomTag":{"properties":{"key":{"type":"string","description":"The key of the custom tag\n"},"value":{"type":"string","description":"The value of the custom tag\n"}},"type":"object"},"databricks:index/PostgresProjectStatusDefaultEndpointSettings:PostgresProjectStatusDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"The minimum number of Compute Units. Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/QualityMonitorCustomMetric:QualityMonitorCustomMetric":{"properties":{"definition":{"type":"string","description":"[create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition)\n"},"inputColumns":{"type":"array","items":{"type":"string"},"description":"Columns on the monitored table to apply the custom metrics to.\n"},"name":{"type":"string","description":"Name of the custom metric.\n"},"outputDataType":{"type":"string","description":"The output type of the custom metric.\n"},"type":{"type":"string","description":"The type of the custom metric.\n"}},"type":"object","required":["definition","inputColumns","name","outputDataType","type"]},"databricks:index/QualityMonitorDataClassificationConfig:QualityMonitorDataClassificationConfig":{"properties":{"enabled":{"type":"boolean","description":"Whether to enable data classification\n"}},"type":"object"},"databricks:index/QualityMonitorInferenceLog:QualityMonitorInferenceLog":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"List of granularities to use when aggregating data into time windows based on their timestamp.\n"},"labelCol":{"type":"string","description":"Column of the model label\n"},"modelIdCol":{"type":"string","description":"Column of the model id or version\n"},"predictionCol":{"type":"string","description":"Column of the model prediction\n"},"predictionProbaCol":{"type":"string","description":"Column of the model prediction probabilities\n"},"problemType":{"type":"string","description":"Problem type the model aims to solve. 
Either `PROBLEM_TYPE_CLASSIFICATION` or `PROBLEM_TYPE_REGRESSION`\n"},"timestampCol":{"type":"string","description":"Column of the timestamp of predictions\n"}},"type":"object","required":["granularities","modelIdCol","predictionCol","problemType","timestampCol"]},"databricks:index/QualityMonitorNotifications:QualityMonitorNotifications":{"properties":{"onFailure":{"$ref":"#/types/databricks:index/QualityMonitorNotificationsOnFailure:QualityMonitorNotificationsOnFailure","description":"who to send notifications to on monitor failure.\n"},"onNewClassificationTagDetected":{"$ref":"#/types/databricks:index/QualityMonitorNotificationsOnNewClassificationTagDetected:QualityMonitorNotificationsOnNewClassificationTagDetected","description":"Who to send notifications to when new data classification tags are detected.\n"}},"type":"object"},"databricks:index/QualityMonitorNotificationsOnFailure:QualityMonitorNotificationsOnFailure":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/QualityMonitorNotificationsOnNewClassificationTagDetected:QualityMonitorNotificationsOnNewClassificationTagDetected":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/QualityMonitorProviderConfig:QualityMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/QualityMonitorSchedule:QualityMonitorSchedule":{"properties":{"pauseStatus":{"type":"string"},"quartzCronExpression":{"type":"string","description":"string expression that determines when to run the monitor. See [Quartz documentation](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html) for examples.\n"},"timezoneId":{"type":"string","description":"string with timezone id (e.g., `PST`) in which to evaluate the Quartz expression.\n"}},"type":"object","required":["quartzCronExpression","timezoneId"],"language":{"nodejs":{"requiredOutputs":["pauseStatus","quartzCronExpression","timezoneId"]}}},"databricks:index/QualityMonitorSnapshot:QualityMonitorSnapshot":{"type":"object"},"databricks:index/QualityMonitorTimeSeries:QualityMonitorTimeSeries":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"List of granularities to use when aggregating data into time windows based on their timestamp.\n"},"timestampCol":{"type":"string","description":"Column of the timestamp of predictions\n"}},"type":"object","required":["granularities","timestampCol"]},"databricks:index/QualityMonitorV2AnomalyDetectionConfig:QualityMonitorV2AnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"List of fully qualified table names to exclude from anomaly detection\n"},"lastRunId":{"type":"string","description":"(string) - Run id of the last run of the workflow\n"},"latestRunStatus":{"type":"string","description":"(string) - The status of the last run of the workflow. 
Possible values are: `ANOMALY_DETECTION_RUN_STATUS_CANCELED`, `ANOMALY_DETECTION_RUN_STATUS_FAILED`, `ANOMALY_DETECTION_RUN_STATUS_JOB_DELETED`, `ANOMALY_DETECTION_RUN_STATUS_PENDING`, `ANOMALY_DETECTION_RUN_STATUS_RUNNING`, `ANOMALY_DETECTION_RUN_STATUS_SUCCESS`, `ANOMALY_DETECTION_RUN_STATUS_UNKNOWN`, `ANOMALY_DETECTION_RUN_STATUS_WORKSPACE_MISMATCH_ERROR`\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["lastRunId","latestRunStatus"]}}},"databricks:index/QualityMonitorV2ProviderConfig:QualityMonitorV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/QualityMonitorV2ValidityCheckConfiguration:QualityMonitorV2ValidityCheckConfiguration":{"properties":{"name":{"type":"string","description":"Can be set by system. Does not need to be user facing\n"},"percentNullValidityCheck":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck:QualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck"},"rangeValidityCheck":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfigurationRangeValidityCheck:QualityMonitorV2ValidityCheckConfigurationRangeValidityCheck"},"uniquenessValidityCheck":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck:QualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck"}},"type":"object"},"databricks:index/QualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck:QualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"}},"upperBound":{"type":"number"}},"type":"object"},"databricks:index/QualityMonitorV2ValidityCheckConfigurationRangeValidityCheck:QualityMonitorV2ValidityCheckConfigurationRangeValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"}},"lowerBound":{"type":"number","description":"Lower bound for the range\n"},"upperBound":{"type":"number"}},"type":"object"},"databricks:index/QualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck:QualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/QueryParameter:QueryParameter":{"properties":{"dateRangeValue":{"$ref":"#/types/databricks:index/QueryParameterDateRangeValue:QueryParameterDateRangeValue","description":"Date-range query parameter value. Consists of following attributes (Can only specify one of \u003cspan pulumi-lang-nodejs=\"`dynamicDateRangeValue`\" pulumi-lang-dotnet=\"`DynamicDateRangeValue`\" pulumi-lang-go=\"`dynamicDateRangeValue`\" pulumi-lang-python=\"`dynamic_date_range_value`\" pulumi-lang-yaml=\"`dynamicDateRangeValue`\" pulumi-lang-java=\"`dynamicDateRangeValue`\"\u003e`dynamic_date_range_value`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`dateRangeValue`\" pulumi-lang-dotnet=\"`DateRangeValue`\" pulumi-lang-go=\"`dateRangeValue`\" pulumi-lang-python=\"`date_range_value`\" pulumi-lang-yaml=\"`dateRangeValue`\" pulumi-lang-java=\"`dateRangeValue`\"\u003e`date_range_value`\u003c/span\u003e):\n"},"dateValue":{"$ref":"#/types/databricks:index/QueryParameterDateValue:QueryParameterDateValue","description":"Date query parameter value. 
Consists of following attributes (Can only specify one of \u003cspan pulumi-lang-nodejs=\"`dynamicDateValue`\" pulumi-lang-dotnet=\"`DynamicDateValue`\" pulumi-lang-go=\"`dynamicDateValue`\" pulumi-lang-python=\"`dynamic_date_value`\" pulumi-lang-yaml=\"`dynamicDateValue`\" pulumi-lang-java=\"`dynamicDateValue`\"\u003e`dynamic_date_value`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`dateValue`\" pulumi-lang-dotnet=\"`DateValue`\" pulumi-lang-go=\"`dateValue`\" pulumi-lang-python=\"`date_value`\" pulumi-lang-yaml=\"`dateValue`\" pulumi-lang-java=\"`dateValue`\"\u003e`date_value`\u003c/span\u003e):\n"},"enumValue":{"$ref":"#/types/databricks:index/QueryParameterEnumValue:QueryParameterEnumValue","description":"Dropdown parameter value. Consists of following attributes:\n"},"name":{"type":"string","description":"Literal parameter marker that appears between double curly braces in the query text.\n"},"numericValue":{"$ref":"#/types/databricks:index/QueryParameterNumericValue:QueryParameterNumericValue","description":"Numeric parameter value. Consists of following attributes:\n"},"queryBackedValue":{"$ref":"#/types/databricks:index/QueryParameterQueryBackedValue:QueryParameterQueryBackedValue","description":"Query-based dropdown parameter value. Consists of following attributes:\n"},"textValue":{"$ref":"#/types/databricks:index/QueryParameterTextValue:QueryParameterTextValue","description":"Text parameter value. Consists of following attributes:\n"},"title":{"type":"string","description":"Text displayed in the user-facing parameter widget in the UI.\n"}},"type":"object","required":["name"]},"databricks:index/QueryParameterDateRangeValue:QueryParameterDateRangeValue":{"properties":{"dateRangeValue":{"$ref":"#/types/databricks:index/QueryParameterDateRangeValueDateRangeValue:QueryParameterDateRangeValueDateRangeValue","description":"Manually specified date-time range value.  Consists of the following attributes:\n"},"dynamicDateRangeValue":{"type":"string","description":"Dynamic date-time range value based on current date-time.  Possible values are `TODAY`, `YESTERDAY`, `THIS_WEEK`, `THIS_MONTH`, `THIS_YEAR`, `LAST_WEEK`, `LAST_MONTH`, `LAST_YEAR`, `LAST_HOUR`, `LAST_8_HOURS`, `LAST_24_HOURS`, `LAST_7_DAYS`, `LAST_14_DAYS`, `LAST_30_DAYS`, `LAST_60_DAYS`, `LAST_90_DAYS`, `LAST_12_MONTHS`.\n"},"precision":{"type":"string","description":"Date-time precision to format the value into when the query is run.  Possible values are `DAY_PRECISION`, `MINUTE_PRECISION`, `SECOND_PRECISION`.  Defaults to `DAY_PRECISION` (`YYYY-MM-DD`).\n"},"startDayOfWeek":{"type":"integer","description":"Specify what day that starts the week.\n"}},"type":"object"},"databricks:index/QueryParameterDateRangeValueDateRangeValue:QueryParameterDateRangeValueDateRangeValue":{"properties":{"end":{"type":"string","description":"end of the date range.\n"},"start":{"type":"string","description":"begin of the date range.\n"}},"type":"object","required":["end","start"]},"databricks:index/QueryParameterDateValue:QueryParameterDateValue":{"properties":{"dateValue":{"type":"string","description":"Manually specified date-time value\n"},"dynamicDateValue":{"type":"string","description":"Dynamic date-time value based on current date-time.  Possible values are `NOW`, `YESTERDAY`.\n"},"precision":{"type":"string","description":"Date-time precision to format the value into when the query is run.  Possible values are `DAY_PRECISION`, `MINUTE_PRECISION`, `SECOND_PRECISION`.  
Defaults to `DAY_PRECISION` (`YYYY-MM-DD`).\n"}},"type":"object"},"databricks:index/QueryParameterEnumValue:QueryParameterEnumValue":{"properties":{"enumOptions":{"type":"string","description":"List of valid query parameter values, newline delimited.\n"},"multiValuesOptions":{"$ref":"#/types/databricks:index/QueryParameterEnumValueMultiValuesOptions:QueryParameterEnumValueMultiValuesOptions","description":"If specified, allows multiple values to be selected for this parameter. Consists of following attributes:\n"},"values":{"type":"array","items":{"type":"string"},"description":"List of selected query parameter values.\n"}},"type":"object"},"databricks:index/QueryParameterEnumValueMultiValuesOptions:QueryParameterEnumValueMultiValuesOptions":{"properties":{"prefix":{"type":"string","description":"Character that prefixes each selected parameter value.\n"},"separator":{"type":"string","description":"Character that separates each selected parameter value. Defaults to a comma.\n"},"suffix":{"type":"string","description":"Character that suffixes each selected parameter value.\n"}},"type":"object"},"databricks:index/QueryParameterNumericValue:QueryParameterNumericValue":{"properties":{"value":{"type":"number","description":"actual numeric value.\n"}},"type":"object","required":["value"]},"databricks:index/QueryParameterQueryBackedValue:QueryParameterQueryBackedValue":{"properties":{"multiValuesOptions":{"$ref":"#/types/databricks:index/QueryParameterQueryBackedValueMultiValuesOptions:QueryParameterQueryBackedValueMultiValuesOptions","description":"If specified, allows multiple values to be selected for this parameter. Consists of following attributes:\n"},"queryId":{"type":"string","description":"ID of the query that provides the parameter values.\n"},"values":{"type":"array","items":{"type":"string"},"description":"List of selected query parameter values.\n"}},"type":"object","required":["queryId"]},"databricks:index/QueryParameterQueryBackedValueMultiValuesOptions:QueryParameterQueryBackedValueMultiValuesOptions":{"properties":{"prefix":{"type":"string","description":"Character that prefixes each selected parameter value.\n"},"separator":{"type":"string","description":"Character that separates each selected parameter value. Defaults to a comma.\n"},"suffix":{"type":"string","description":"Character that suffixes each selected parameter value.\n"}},"type":"object"},"databricks:index/QueryParameterTextValue:QueryParameterTextValue":{"properties":{"value":{"type":"string","description":"actual text value.\n"}},"type":"object","required":["value"]},"databricks:index/QueryProviderConfig:QueryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RecipientIpAccessList:RecipientIpAccessList":{"properties":{"allowedIpAddresses":{"type":"array","items":{"type":"string"},"description":"Allowed IP Addresses in CIDR notation. Limit of 100.\n"}},"type":"object"},"databricks:index/RecipientPropertiesKvpairs:RecipientPropertiesKvpairs":{"properties":{"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"a map of string key-value pairs with recipient's properties.  
Properties with name starting with `databricks.` are reserved.\n"}},"type":"object","required":["properties"]},"databricks:index/RecipientProviderConfig:RecipientProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RecipientToken:RecipientToken":{"properties":{"activationUrl":{"type":"string","description":"Full activation URL to retrieve the access token. It will be empty if the token is already retrieved.\n"},"createdAt":{"type":"integer","description":"Time at which this recipient was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of recipient creator.\n"},"expirationTime":{"type":"integer","description":"Expiration timestamp of the token in epoch milliseconds.\n"},"id":{"type":"string","description":"Unique ID of the recipient token.\n"},"updatedAt":{"type":"integer","description":"Time at which this recipient was updated, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of recipient Token updater.\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["activationUrl","createdAt","createdBy","expirationTime","id","updatedAt","updatedBy"]}}},"databricks:index/RegisteredModelAlias:RegisteredModelAlias":{"properties":{"aliasName":{"type":"string"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside. *Change of this parameter forces recreation of the resource.*\n"},"id":{"type":"string","description":"Equal to the full name of the model (`catalog_name.schema_name.name`) and used to identify the model uniquely across the metastore.\n"},"modelName":{"type":"string"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides. *Change of this parameter forces recreation of the resource.*\n"},"versionNum":{"type":"integer"}},"type":"object"},"databricks:index/RegisteredModelProviderConfig:RegisteredModelProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RepoProviderConfig:RepoProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RepoSparseCheckout:RepoSparseCheckout":{"properties":{"patterns":{"type":"array","items":{"type":"string"},"description":"array of paths (directories) that will be used for sparse checkout.  
List of patterns could be updated in-place.\n\nAddition or removal of the \u003cspan pulumi-lang-nodejs=\"`sparseCheckout`\" pulumi-lang-dotnet=\"`SparseCheckout`\" pulumi-lang-go=\"`sparseCheckout`\" pulumi-lang-python=\"`sparse_checkout`\" pulumi-lang-yaml=\"`sparseCheckout`\" pulumi-lang-java=\"`sparseCheckout`\"\u003e`sparse_checkout`\u003c/span\u003e configuration block will lead to recreation of the Git folder.\n"}},"type":"object","required":["patterns"]},"databricks:index/RestrictWorkspaceAdminsSettingProviderConfig:RestrictWorkspaceAdminsSettingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins:RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"The restrict workspace admins status for the workspace.\n"}},"type":"object","required":["status"]},"databricks:index/RfaAccessRequestDestinationsDestination:RfaAccessRequestDestinationsDestination":{"properties":{"destinationId":{"type":"string","description":"The identifier for the destination. This is the email address for EMAIL destinations, the URL for URL destinations,\nor the unique Databricks notification destination ID for all other external destinations\n"},"destinationType":{"type":"string","description":"The type of the destination. Possible values are: `EMAIL`, `GENERIC_WEBHOOK`, `MICROSOFT_TEAMS`, `SLACK`, `URL`\n"},"specialDestination":{"type":"string","description":"This field is used to denote whether the destination is the email of the owner of the securable object.\nThe special destination cannot be assigned to a securable and only represents the default destination of the securable.\nThe securable types that support default special destinations are: \"catalog\", \u003cspan pulumi-lang-nodejs=\"\"externalLocation\"\" pulumi-lang-dotnet=\"\"ExternalLocation\"\" pulumi-lang-go=\"\"externalLocation\"\" pulumi-lang-python=\"\"external_location\"\" pulumi-lang-yaml=\"\"externalLocation\"\" pulumi-lang-java=\"\"externalLocation\"\"\u003e\"external_location\"\u003c/span\u003e, \"connection\", \"credential\", and \"metastore\".\nThe **destination_type** of a **special_destination** is always EMAIL. Possible values are: `SPECIAL_DESTINATION_CATALOG_OWNER`, `SPECIAL_DESTINATION_CONNECTION_OWNER`, `SPECIAL_DESTINATION_CREDENTIAL_OWNER`, `SPECIAL_DESTINATION_EXTERNAL_LOCATION_OWNER`, `SPECIAL_DESTINATION_METASTORE_OWNER`\n"}},"type":"object"},"databricks:index/RfaAccessRequestDestinationsDestinationSourceSecurable:RfaAccessRequestDestinationsDestinationSourceSecurable":{"properties":{"fullName":{"type":"string","description":"(string) - The full name of the securable. Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerShare":{"type":"string","description":"Optional. The name of the Share object that contains the securable when the securable is\ngetting shared in D2D Delta Sharing\n"},"type":{"type":"string","description":"Required. 
The type of securable (catalog/schema/table).\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"}},"type":"object"},"databricks:index/RfaAccessRequestDestinationsProviderConfig:RfaAccessRequestDestinationsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/RfaAccessRequestDestinationsSecurable:RfaAccessRequestDestinationsSecurable":{"properties":{"fullName":{"type":"string","description":"Required. The full name of the catalog/schema/table.\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present\n"},"providerShare":{"type":"string","description":"Optional. The name of the Share object that contains the securable when the securable is\ngetting shared in D2D Delta Sharing\n"},"type":{"type":"string","description":"Required. The type of securable (catalog/schema/table).\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"}},"type":"object"},"databricks:index/SchemaProviderConfig:SchemaProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SecretAclProviderConfig:SecretAclProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/SecretProviderConfig:SecretProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/SecretScopeKeyvaultMetadata:SecretScopeKeyvaultMetadata":{"properties":{"dnsName":{"type":"string","willReplaceOnChanges":true},"resourceId":{"type":"string","willReplaceOnChanges":true}},"type":"object","required":["dnsName","resourceId"]},"databricks:index/SecretScopeProviderConfig:SecretScopeProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/ServicePrincipalFederationPolicyOidcPolicy:ServicePrincipalFederationPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. 
Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"The claim that contains the subject of the token. If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/ServicePrincipalSecretProviderConfig:ServicePrincipalSecretProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/ShareObject:ShareObject":{"properties":{"addedAt":{"type":"integer"},"addedBy":{"type":"string"},"cdfEnabled":{"type":"boolean","description":"Whether to enable Change Data Feed (cdf) on the shared object. When this field is set, field \u003cspan pulumi-lang-nodejs=\"`historyDataSharingStatus`\" pulumi-lang-dotnet=\"`HistoryDataSharingStatus`\" pulumi-lang-go=\"`historyDataSharingStatus`\" pulumi-lang-python=\"`history_data_sharing_status`\" pulumi-lang-yaml=\"`historyDataSharingStatus`\" pulumi-lang-java=\"`historyDataSharingStatus`\"\u003e`history_data_sharing_status`\u003c/span\u003e can not be set.\n"},"comment":{"type":"string","description":"Description about the object.\n"},"content":{"type":"string","description":"The content of the notebook file when the data object type is NOTEBOOK_FILE. This should be base64 encoded. Required for adding a NOTEBOOK_FILE, optional for updating, ignored for other types.\n"},"dataObjectType":{"type":"string","description":"Type of the data object. Supported types: `TABLE`, `FOREIGN_TABLE`, `SCHEMA`, `VIEW`, `MATERIALIZED_VIEW`, `STREAMING_TABLE`, `MODEL`, `NOTEBOOK_FILE`, `FUNCTION`, `FEATURE_SPEC`, and `VOLUME`.\n"},"effectiveCdfEnabled":{"type":"boolean"},"effectiveHistoryDataSharingStatus":{"type":"string"},"effectiveSharedAs":{"type":"string"},"effectiveStartVersion":{"type":"integer"},"effectiveStringSharedAs":{"type":"string"},"historyDataSharingStatus":{"type":"string","description":"Whether to enable history sharing, one of: `ENABLED`, `DISABLED`. When a table has history sharing enabled, recipients can query table data by version, starting from the current table version. If not specified, clients can only query starting from the version of the object at the time it was added to the share. *NOTE*: The\u003cspan pulumi-lang-nodejs=\" startVersion \" pulumi-lang-dotnet=\" StartVersion \" pulumi-lang-go=\" startVersion \" pulumi-lang-python=\" start_version \" pulumi-lang-yaml=\" startVersion \" pulumi-lang-java=\" startVersion \"\u003e start_version \u003c/span\u003eshould be less than or equal the current version of the object. When this field is set, field \u003cspan pulumi-lang-nodejs=\"`cdfEnabled`\" pulumi-lang-dotnet=\"`CdfEnabled`\" pulumi-lang-go=\"`cdfEnabled`\" pulumi-lang-python=\"`cdf_enabled`\" pulumi-lang-yaml=\"`cdfEnabled`\" pulumi-lang-java=\"`cdfEnabled`\"\u003e`cdf_enabled`\u003c/span\u003e can not be set.\n\nTo share only part of a table when you add the table to a share, you can provide partition specifications. This is specified by a number of \u003cspan pulumi-lang-nodejs=\"`partition`\" pulumi-lang-dotnet=\"`Partition`\" pulumi-lang-go=\"`partition`\" pulumi-lang-python=\"`partition`\" pulumi-lang-yaml=\"`partition`\" pulumi-lang-java=\"`partition`\"\u003e`partition`\u003c/span\u003e blocks. 
Each entry in \u003cspan pulumi-lang-nodejs=\"`partition`\" pulumi-lang-dotnet=\"`Partition`\" pulumi-lang-go=\"`partition`\" pulumi-lang-python=\"`partition`\" pulumi-lang-yaml=\"`partition`\" pulumi-lang-java=\"`partition`\"\u003e`partition`\u003c/span\u003e block takes a list of \u003cspan pulumi-lang-nodejs=\"`value`\" pulumi-lang-dotnet=\"`Value`\" pulumi-lang-go=\"`value`\" pulumi-lang-python=\"`value`\" pulumi-lang-yaml=\"`value`\" pulumi-lang-java=\"`value`\"\u003e`value`\u003c/span\u003e blocks. The field is documented below.\n"},"name":{"type":"string","description":"Full name of the object, e.g. `catalog.schema.name` for a tables, views, volumes and models, or `catalog.schema` for schemas.\n"},"partitions":{"type":"array","items":{"$ref":"#/types/databricks:index/ShareObjectPartition:ShareObjectPartition"},"description":"Array of partitions for the shared data.\n"},"sharedAs":{"type":"string","description":"A user-provided alias name for **table-like data objects** within the share. Use this field for: `TABLE`, `VIEW`, `MATERIALIZED_VIEW`, `STREAMING_TABLE`, `FOREIGN_TABLE`. **Do not use this field for volumes, models, notebooks, or functions** (use \u003cspan pulumi-lang-nodejs=\"`stringSharedAs`\" pulumi-lang-dotnet=\"`StringSharedAs`\" pulumi-lang-go=\"`stringSharedAs`\" pulumi-lang-python=\"`string_shared_as`\" pulumi-lang-yaml=\"`stringSharedAs`\" pulumi-lang-java=\"`stringSharedAs`\"\u003e`string_shared_as`\u003c/span\u003e instead). If not provided, the object's original name will be used. Must be a 2-part name `\u003cschema\u003e.\u003ctable\u003e` containing only alphanumeric characters and underscores. The \u003cspan pulumi-lang-nodejs=\"`sharedAs`\" pulumi-lang-dotnet=\"`SharedAs`\" pulumi-lang-go=\"`sharedAs`\" pulumi-lang-python=\"`shared_as`\" pulumi-lang-yaml=\"`sharedAs`\" pulumi-lang-java=\"`sharedAs`\"\u003e`shared_as`\u003c/span\u003e name must be unique within a share. Change forces creation of a new resource.\n"},"startVersion":{"type":"integer","description":"The start version associated with the object for cdf. This allows data providers to control the lowest object version that is accessible by clients.\n"},"status":{"type":"string","description":"Status of the object, one of: `ACTIVE`, `PERMISSION_DENIED`.\n"},"stringSharedAs":{"type":"string","description":"A user-provided alias name for **non-table data objects** within the share. Use this field for: `VOLUME`, `MODEL`, `NOTEBOOK_FILE`, `FUNCTION`. **Do not use this field for tables, views, or streaming tables** (use \u003cspan pulumi-lang-nodejs=\"`sharedAs`\" pulumi-lang-dotnet=\"`SharedAs`\" pulumi-lang-go=\"`sharedAs`\" pulumi-lang-python=\"`shared_as`\" pulumi-lang-yaml=\"`sharedAs`\" pulumi-lang-java=\"`sharedAs`\"\u003e`shared_as`\u003c/span\u003e instead). Format varies by type: For volumes, models, and functions use `\u003cschema\u003e.\u003cname\u003e` (2-part name); for notebooks use the file name. Names must contain only alphanumeric characters and underscores. The \u003cspan pulumi-lang-nodejs=\"`stringSharedAs`\" pulumi-lang-dotnet=\"`StringSharedAs`\" pulumi-lang-go=\"`stringSharedAs`\" pulumi-lang-python=\"`string_shared_as`\" pulumi-lang-yaml=\"`stringSharedAs`\" pulumi-lang-java=\"`stringSharedAs`\"\u003e`string_shared_as`\u003c/span\u003e name must be unique for objects of the same type within a share. 
Change forces creation of a new resource.\n"}},"type":"object","required":["dataObjectType","name"],"language":{"nodejs":{"requiredOutputs":["addedAt","addedBy","dataObjectType","effectiveCdfEnabled","effectiveHistoryDataSharingStatus","effectiveSharedAs","effectiveStartVersion","effectiveStringSharedAs","name","status"]}}},"databricks:index/ShareObjectPartition:ShareObjectPartition":{"properties":{"values":{"type":"array","items":{"$ref":"#/types/databricks:index/ShareObjectPartitionValue:ShareObjectPartitionValue"},"description":"The value of the partition column. When this value is not set, it means null value. When this field is set, field \u003cspan pulumi-lang-nodejs=\"`recipientPropertyKey`\" pulumi-lang-dotnet=\"`RecipientPropertyKey`\" pulumi-lang-go=\"`recipientPropertyKey`\" pulumi-lang-python=\"`recipient_property_key`\" pulumi-lang-yaml=\"`recipientPropertyKey`\" pulumi-lang-java=\"`recipientPropertyKey`\"\u003e`recipient_property_key`\u003c/span\u003e can not be set.\n"}},"type":"object"},"databricks:index/ShareObjectPartitionValue:ShareObjectPartitionValue":{"properties":{"name":{"type":"string","description":"The name of the partition column.\n"},"op":{"type":"string","description":"The operator to apply for the value, one of: `EQUAL`, `LIKE`\n"},"recipientPropertyKey":{"type":"string","description":"The key of a Delta Sharing recipient's property. For example `databricks-account-id`. When this field is set, field \u003cspan pulumi-lang-nodejs=\"`value`\" pulumi-lang-dotnet=\"`Value`\" pulumi-lang-go=\"`value`\" pulumi-lang-python=\"`value`\" pulumi-lang-yaml=\"`value`\" pulumi-lang-java=\"`value`\"\u003e`value`\u003c/span\u003e can not be set.\n"},"value":{"type":"string","description":"The value of the partition column. When this value is not set, it means null value. When this field is set, field \u003cspan pulumi-lang-nodejs=\"`recipientPropertyKey`\" pulumi-lang-dotnet=\"`RecipientPropertyKey`\" pulumi-lang-go=\"`recipientPropertyKey`\" pulumi-lang-python=\"`recipient_property_key`\" pulumi-lang-yaml=\"`recipientPropertyKey`\" pulumi-lang-java=\"`recipientPropertyKey`\"\u003e`recipient_property_key`\u003c/span\u003e can not be set.\n"}},"type":"object","required":["name","op"]},"databricks:index/ShareProviderConfig:ShareProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlAlertOptions:SqlAlertOptions":{"properties":{"column":{"type":"string","description":"Name of column in the query result to compare in alert evaluation.\n"},"customBody":{"type":"string","description":"Custom body of alert notification, if it exists. See [Alerts API reference](https://docs.databricks.com/sql/user/alerts/index.html) for custom templating instructions.\n"},"customSubject":{"type":"string","description":"Custom subject of alert notification, if it exists. This includes email subject, Slack notification header, etc. See [Alerts API reference](https://docs.databricks.com/sql/user/alerts/index.html) for custom templating instructions.\n"},"emptyResultState":{"type":"string","description":"State that alert evaluates to when query result is empty.  
Currently supported values are \u003cspan pulumi-lang-nodejs=\"`unknown`\" pulumi-lang-dotnet=\"`Unknown`\" pulumi-lang-go=\"`unknown`\" pulumi-lang-python=\"`unknown`\" pulumi-lang-yaml=\"`unknown`\" pulumi-lang-java=\"`unknown`\"\u003e`unknown`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`triggered`\" pulumi-lang-dotnet=\"`Triggered`\" pulumi-lang-go=\"`triggered`\" pulumi-lang-python=\"`triggered`\" pulumi-lang-yaml=\"`triggered`\" pulumi-lang-java=\"`triggered`\"\u003e`triggered`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`ok`\" pulumi-lang-dotnet=\"`Ok`\" pulumi-lang-go=\"`ok`\" pulumi-lang-python=\"`ok`\" pulumi-lang-yaml=\"`ok`\" pulumi-lang-java=\"`ok`\"\u003e`ok`\u003c/span\u003e - check [API documentation](https://docs.databricks.com/api/workspace/alerts/create) for full list of supported values.\n"},"muted":{"type":"boolean","description":"Whether or not the alert is muted. If an alert is muted, it will not notify users and alert destinations when triggered.\n"},"op":{"type":"string","description":"Operator used to compare in alert evaluation. (Enum: `\u003e`, `\u003e=`, `\u003c`, `\u003c=`, `==`, `!=`)\n"},"value":{"type":"string","description":"Value used to compare in alert evaluation.\n"}},"type":"object","required":["column","op","value"]},"databricks:index/SqlAlertProviderConfig:SqlAlertProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlDashboardProviderConfig:SqlDashboardProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlEndpointChannel:SqlEndpointChannel":{"properties":{"dbsqlVersion":{"type":"string"},"name":{"type":"string","description":"Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.\n"}},"type":"object"},"databricks:index/SqlEndpointHealth:SqlEndpointHealth":{"properties":{"details":{"type":"string"},"failureReason":{"$ref":"#/types/databricks:index/SqlEndpointHealthFailureReason:SqlEndpointHealthFailureReason"},"message":{"type":"string"},"status":{"type":"string"},"summary":{"type":"string"}},"type":"object"},"databricks:index/SqlEndpointHealthFailureReason:SqlEndpointHealthFailureReason":{"properties":{"code":{"type":"string"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"type":{"type":"string"}},"type":"object"},"databricks:index/SqlEndpointOdbcParams:SqlEndpointOdbcParams":{"properties":{"hostname":{"type":"string"},"path":{"type":"string"},"port":{"type":"integer"},"protocol":{"type":"string"}},"type":"object"},"databricks:index/SqlEndpointProviderConfig:SqlEndpointProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlEndpointTags:SqlEndpointTags":{"properties":{"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlEndpointTagsCustomTag:SqlEndpointTagsCustomTag"}}},"type":"object"},"databricks:index/SqlEndpointTagsCustomTag:SqlEndpointTagsCustomTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["key","value"]},"databricks:index/SqlGlobalConfigProviderConfig:SqlGlobalConfigProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlPermissionsPrivilegeAssignment:SqlPermissionsPrivilegeAssignment":{"properties":{"principal":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`displayName`\" pulumi-lang-dotnet=\"`DisplayName`\" pulumi-lang-go=\"`displayName`\" pulumi-lang-python=\"`display_name`\" pulumi-lang-yaml=\"`displayName`\" pulumi-lang-java=\"`displayName`\"\u003e`display_name`\u003c/span\u003e for a\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user, \u003cspan pulumi-lang-nodejs=\"`applicationId`\" pulumi-lang-dotnet=\"`ApplicationId`\" pulumi-lang-go=\"`applicationId`\" pulumi-lang-python=\"`application_id`\" pulumi-lang-yaml=\"`applicationId`\" pulumi-lang-java=\"`applicationId`\"\u003e`application_id`\u003c/span\u003e for a databricks_service_principal.\n"},"privileges":{"type":"array","items":{"type":"string"},"description":"set of available privilege names in upper case.\n\n[Available](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html) privilege names are:\n\n* `SELECT` - gives read access to an object.\n* `CREATE` - gives the ability to create an object (for example, a table in a database).\n* `MODIFY` - gives the ability to add, delete, and modify data to or from an object.\n* `USAGE` - does not give any abilities, but is an additional requirement to perform any action on a database object.\n* `READ_METADATA` - gives the ability to view an object and its metadata.\n* `CREATE_NAMED_FUNCTION` - gives the ability to create a named UDF in an existing catalog or database.\n* `MODIFY_CLASSPATH` - gives the ability to add files to the Spark classpath.\n\n\u003e Even though the value `ALL PRIVILEGES` is mentioned in Table ACL documentation, it's not recommended to use it from 
Pulumi, as it may result in unnecessary state updates.\n\u003e Even though the value `ALL PRIVILEGES` is mentioned in Table ACL documentation, it's not recommended to use it from Pulumi, as it may result in unnecessary state updates.\n"}},"type":"object","required":["principal","privileges"]},"databricks:index/SqlQueryParameter:SqlQueryParameter":{"properties":{"date":{"$ref":"#/types/databricks:index/SqlQueryParameterDate:SqlQueryParameterDate"},"dateRange":{"$ref":"#/types/databricks:index/SqlQueryParameterDateRange:SqlQueryParameterDateRange"},"datetime":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetime:SqlQueryParameterDatetime"},"datetimeRange":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetimeRange:SqlQueryParameterDatetimeRange"},"datetimesec":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetimesec:SqlQueryParameterDatetimesec"},"datetimesecRange":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetimesecRange:SqlQueryParameterDatetimesecRange"},"enum":{"$ref":"#/types/databricks:index/SqlQueryParameterEnum:SqlQueryParameterEnum"},"name":{"type":"string","description":"The literal parameter marker that appears between double curly braces in the query text.\nParameters can have several different types. Type is specified using one of the following configuration blocks: \u003cspan pulumi-lang-nodejs=\"`text`\" pulumi-lang-dotnet=\"`Text`\" pulumi-lang-go=\"`text`\" pulumi-lang-python=\"`text`\" pulumi-lang-yaml=\"`text`\" pulumi-lang-java=\"`text`\"\u003e`text`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`number`\" pulumi-lang-dotnet=\"`Number`\" pulumi-lang-go=\"`number`\" pulumi-lang-python=\"`number`\" pulumi-lang-yaml=\"`number`\" pulumi-lang-java=\"`number`\"\u003e`number`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`enum`\" pulumi-lang-dotnet=\"`Enum`\" pulumi-lang-go=\"`enum`\" pulumi-lang-python=\"`enum`\" pulumi-lang-yaml=\"`enum`\" pulumi-lang-java=\"`enum`\"\u003e`enum`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`query`\" pulumi-lang-dotnet=\"`Query`\" pulumi-lang-go=\"`query`\" pulumi-lang-python=\"`query`\" pulumi-lang-yaml=\"`query`\" pulumi-lang-java=\"`query`\"\u003e`query`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`date`\" pulumi-lang-dotnet=\"`Date`\" pulumi-lang-go=\"`date`\" pulumi-lang-python=\"`date`\" pulumi-lang-yaml=\"`date`\" pulumi-lang-java=\"`date`\"\u003e`date`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetime`\" pulumi-lang-dotnet=\"`Datetime`\" pulumi-lang-go=\"`datetime`\" pulumi-lang-python=\"`datetime`\" pulumi-lang-yaml=\"`datetime`\" pulumi-lang-java=\"`datetime`\"\u003e`datetime`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetimesec`\" pulumi-lang-dotnet=\"`Datetimesec`\" pulumi-lang-go=\"`datetimesec`\" pulumi-lang-python=\"`datetimesec`\" pulumi-lang-yaml=\"`datetimesec`\" pulumi-lang-java=\"`datetimesec`\"\u003e`datetimesec`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`dateRange`\" pulumi-lang-dotnet=\"`DateRange`\" pulumi-lang-go=\"`dateRange`\" pulumi-lang-python=\"`date_range`\" pulumi-lang-yaml=\"`dateRange`\" pulumi-lang-java=\"`dateRange`\"\u003e`date_range`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetimeRange`\" pulumi-lang-dotnet=\"`DatetimeRange`\" pulumi-lang-go=\"`datetimeRange`\" pulumi-lang-python=\"`datetime_range`\" pulumi-lang-yaml=\"`datetimeRange`\" pulumi-lang-java=\"`datetimeRange`\"\u003e`datetime_range`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetimesecRange`\" pulumi-lang-dotnet=\"`DatetimesecRange`\" 
pulumi-lang-go=\"`datetimesecRange`\" pulumi-lang-python=\"`datetimesec_range`\" pulumi-lang-yaml=\"`datetimesecRange`\" pulumi-lang-java=\"`datetimesecRange`\"\u003e`datetimesec_range`\u003c/span\u003e.\n\nFor \u003cspan pulumi-lang-nodejs=\"`text`\" pulumi-lang-dotnet=\"`Text`\" pulumi-lang-go=\"`text`\" pulumi-lang-python=\"`text`\" pulumi-lang-yaml=\"`text`\" pulumi-lang-java=\"`text`\"\u003e`text`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`number`\" pulumi-lang-dotnet=\"`Number`\" pulumi-lang-go=\"`number`\" pulumi-lang-python=\"`number`\" pulumi-lang-yaml=\"`number`\" pulumi-lang-java=\"`number`\"\u003e`number`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`date`\" pulumi-lang-dotnet=\"`Date`\" pulumi-lang-go=\"`date`\" pulumi-lang-python=\"`date`\" pulumi-lang-yaml=\"`date`\" pulumi-lang-java=\"`date`\"\u003e`date`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetime`\" pulumi-lang-dotnet=\"`Datetime`\" pulumi-lang-go=\"`datetime`\" pulumi-lang-python=\"`datetime`\" pulumi-lang-yaml=\"`datetime`\" pulumi-lang-java=\"`datetime`\"\u003e`datetime`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`datetimesec`\" pulumi-lang-dotnet=\"`Datetimesec`\" pulumi-lang-go=\"`datetimesec`\" pulumi-lang-python=\"`datetimesec`\" pulumi-lang-yaml=\"`datetimesec`\" pulumi-lang-java=\"`datetimesec`\"\u003e`datetimesec`\u003c/span\u003e block\n"},"number":{"$ref":"#/types/databricks:index/SqlQueryParameterNumber:SqlQueryParameterNumber"},"query":{"$ref":"#/types/databricks:index/SqlQueryParameterQuery:SqlQueryParameterQuery","description":"The text of the query to be run.\n"},"text":{"$ref":"#/types/databricks:index/SqlQueryParameterText:SqlQueryParameterText"},"title":{"type":"string","description":"The text displayed in a parameter picking widget.\n"}},"type":"object","required":["name"]},"databricks:index/SqlQueryParameterDate:SqlQueryParameterDate":{"properties":{"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object","required":["value"]},"databricks:index/SqlQueryParameterDateRange:SqlQueryParameterDateRange":{"properties":{"range":{"$ref":"#/types/databricks:index/SqlQueryParameterDateRangeRange:SqlQueryParameterDateRangeRange"},"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object"},"databricks:index/SqlQueryParameterDateRangeRange:SqlQueryParameterDateRangeRange":{"properties":{"end":{"type":"string"},"start":{"type":"string"}},"type":"object","required":["end","start"]},"databricks:index/SqlQueryParameterDatetime:SqlQueryParameterDatetime":{"properties":{"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object","required":["value"]},"databricks:index/SqlQueryParameterDatetimeRange:SqlQueryParameterDatetimeRange":{"properties":{"range":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetimeRangeRange:SqlQueryParameterDatetimeRangeRange"},"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object"},"databricks:index/SqlQueryParameterDatetimeRangeRange:SqlQueryParameterDatetimeRangeRange":{"properties":{"end":{"type":"string"},"start":{"type":"string"}},"type":"object","required":["end","start"]},"databricks:index/SqlQueryParameterDatetimesec:SqlQueryParameterDatetimesec":{"properties":{"value":{"type":"string","description":"The default value for this 
parameter.\n"}},"type":"object","required":["value"]},"databricks:index/SqlQueryParameterDatetimesecRange:SqlQueryParameterDatetimesecRange":{"properties":{"range":{"$ref":"#/types/databricks:index/SqlQueryParameterDatetimesecRangeRange:SqlQueryParameterDatetimesecRangeRange"},"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object"},"databricks:index/SqlQueryParameterDatetimesecRangeRange:SqlQueryParameterDatetimesecRangeRange":{"properties":{"end":{"type":"string"},"start":{"type":"string"}},"type":"object","required":["end","start"]},"databricks:index/SqlQueryParameterEnum:SqlQueryParameterEnum":{"properties":{"multiple":{"$ref":"#/types/databricks:index/SqlQueryParameterEnumMultiple:SqlQueryParameterEnumMultiple"},"options":{"type":"array","items":{"type":"string"}},"value":{"type":"string","description":"The default value for this parameter.\n"},"values":{"type":"array","items":{"type":"string"}}},"type":"object","required":["options"]},"databricks:index/SqlQueryParameterEnumMultiple:SqlQueryParameterEnumMultiple":{"properties":{"prefix":{"type":"string"},"separator":{"type":"string"},"suffix":{"type":"string"}},"type":"object","required":["separator"]},"databricks:index/SqlQueryParameterNumber:SqlQueryParameterNumber":{"properties":{"value":{"type":"number","description":"The default value for this parameter.\n"}},"type":"object","required":["value"]},"databricks:index/SqlQueryParameterQuery:SqlQueryParameterQuery":{"properties":{"multiple":{"$ref":"#/types/databricks:index/SqlQueryParameterQueryMultiple:SqlQueryParameterQueryMultiple"},"queryId":{"type":"string"},"value":{"type":"string","description":"The default value for this parameter.\n"},"values":{"type":"array","items":{"type":"string"}}},"type":"object","required":["queryId"]},"databricks:index/SqlQueryParameterQueryMultiple:SqlQueryParameterQueryMultiple":{"properties":{"prefix":{"type":"string"},"separator":{"type":"string"},"suffix":{"type":"string"}},"type":"object","required":["separator"]},"databricks:index/SqlQueryParameterText:SqlQueryParameterText":{"properties":{"value":{"type":"string","description":"The default value for this parameter.\n"}},"type":"object","required":["value"]},"databricks:index/SqlQueryProviderConfig:SqlQueryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlQuerySchedule:SqlQuerySchedule":{"properties":{"continuous":{"$ref":"#/types/databricks:index/SqlQueryScheduleContinuous:SqlQueryScheduleContinuous"},"daily":{"$ref":"#/types/databricks:index/SqlQueryScheduleDaily:SqlQueryScheduleDaily"},"weekly":{"$ref":"#/types/databricks:index/SqlQueryScheduleWeekly:SqlQueryScheduleWeekly"}},"type":"object"},"databricks:index/SqlQueryScheduleContinuous:SqlQueryScheduleContinuous":{"properties":{"intervalSeconds":{"type":"integer"},"untilDate":{"type":"string"}},"type":"object","required":["intervalSeconds"]},"databricks:index/SqlQueryScheduleDaily:SqlQueryScheduleDaily":{"properties":{"intervalDays":{"type":"integer"},"timeOfDay":{"type":"string"},"untilDate":{"type":"string"}},"type":"object","required":["intervalDays","timeOfDay"]},"databricks:index/SqlQueryScheduleWeekly:SqlQueryScheduleWeekly":{"properties":{"dayOfWeek":{"type":"string"},"intervalWeeks":{"type":"integer"},"timeOfDay":{"type":"string"},"untilDate":{"type":"string"}},"type":"object","required":["dayOfWeek","intervalWeeks","timeOfDay"]},"databricks:index/SqlTableColumn:SqlTableColumn":{"properties":{"comment":{"type":"string","description":"User-supplied free-form text.\n"},"identity":{"type":"string","description":"Whether the field is an identity column. Can be \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`always`\" pulumi-lang-dotnet=\"`Always`\" pulumi-lang-go=\"`always`\" pulumi-lang-python=\"`always`\" pulumi-lang-yaml=\"`always`\" pulumi-lang-java=\"`always`\"\u003e`always`\u003c/span\u003e, or unset. It is unset by default.\n"},"name":{"type":"string","description":"User-visible name of column\n"},"nullable":{"type":"boolean","description":"Whether field is nullable (Default: \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e)\n"},"type":{"type":"string","description":"Column type spec (with metadata) as SQL text. Not supported for `VIEW` table_type.\n"},"typeJson":{"type":"string"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredOutputs":["name","type","typeJson"]}}},"databricks:index/SqlTableProviderConfig:SqlTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlVisualizationProviderConfig:SqlVisualizationProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/SqlWidgetParameter:SqlWidgetParameter":{"properties":{"mapTo":{"type":"string"},"name":{"type":"string"},"title":{"type":"string"},"type":{"type":"string"},"value":{"type":"string"},"values":{"type":"array","items":{"type":"string"}}},"type":"object","required":["name","type"]},"databricks:index/SqlWidgetPosition:SqlWidgetPosition":{"properties":{"autoHeight":{"type":"boolean"},"posX":{"type":"integer"},"posY":{"type":"integer"},"sizeX":{"type":"integer"},"sizeY":{"type":"integer"}},"type":"object","required":["sizeX","sizeY"]},"databricks:index/SqlWidgetProviderConfig:SqlWidgetProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/StorageCredentialAwsIamRole:StorageCredentialAwsIamRole":{"properties":{"externalId":{"type":"string","description":"The external ID used in role assumption to prevent the confused deputy problem.\n"},"roleArn":{"type":"string","description":"The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access, of the form `arn:aws:iam::1234567890:role/MyRole-AJJHDSKSDF`.\n\n\u003cspan pulumi-lang-nodejs=\"`azureManagedIdentity`\" pulumi-lang-dotnet=\"`AzureManagedIdentity`\" pulumi-lang-go=\"`azureManagedIdentity`\" pulumi-lang-python=\"`azure_managed_identity`\" pulumi-lang-yaml=\"`azureManagedIdentity`\" pulumi-lang-java=\"`azureManagedIdentity`\"\u003e`azure_managed_identity`\u003c/span\u003e optional configuration block for using managed identity as credential details for Azure (recommended over service principal):\n"},"unityCatalogIamArn":{"type":"string","description":"The Amazon Resource Name (ARN) of the AWS IAM user managed by Databricks. 
This is the identity that is going to assume the AWS IAM role.\n"}},"type":"object","required":["roleArn"],"language":{"nodejs":{"requiredOutputs":["externalId","roleArn","unityCatalogIamArn"]}}},"databricks:index/StorageCredentialAzureManagedIdentity:StorageCredentialAzureManagedIdentity":{"properties":{"accessConnectorId":{"type":"string","description":"The Resource ID of the Azure Databricks Access Connector resource, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.Databricks/accessConnectors/connector-name`.\n"},"credentialId":{"type":"string"},"managedIdentityId":{"type":"string","description":"The Resource ID of the Azure User Assigned Managed Identity associated with Azure Databricks Access Connector, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/user-managed-identity-name`.\n\n\u003cspan pulumi-lang-nodejs=\"`databricksGcpServiceAccount`\" pulumi-lang-dotnet=\"`DatabricksGcpServiceAccount`\" pulumi-lang-go=\"`databricksGcpServiceAccount`\" pulumi-lang-python=\"`databricks_gcp_service_account`\" pulumi-lang-yaml=\"`databricksGcpServiceAccount`\" pulumi-lang-java=\"`databricksGcpServiceAccount`\"\u003e`databricks_gcp_service_account`\u003c/span\u003e optional configuration block for creating a Databricks-managed GCP Service Account:\n"}},"type":"object","required":["accessConnectorId"],"language":{"nodejs":{"requiredOutputs":["accessConnectorId","credentialId"]}}},"databricks:index/StorageCredentialAzureServicePrincipal:StorageCredentialAzureServicePrincipal":{"properties":{"applicationId":{"type":"string","description":"The application ID of the application registration within the referenced AAD tenant\n"},"clientSecret":{"type":"string","description":"The client secret generated for the above app ID in AAD. 
**This field is redacted on output**\n","secret":true},"directoryId":{"type":"string","description":"The directory ID corresponding to the Azure Active Directory (AAD) tenant of the application\n"}},"type":"object","required":["applicationId","clientSecret","directoryId"]},"databricks:index/StorageCredentialCloudflareApiToken:StorageCredentialCloudflareApiToken":{"properties":{"accessKeyId":{"type":"string","description":"R2 API token access key ID\n"},"accountId":{"type":"string","description":"R2 account ID\n"},"secretAccessKey":{"type":"string","description":"R2 API token secret access key\n\n\u003cspan pulumi-lang-nodejs=\"`azureServicePrincipal`\" pulumi-lang-dotnet=\"`AzureServicePrincipal`\" pulumi-lang-go=\"`azureServicePrincipal`\" pulumi-lang-python=\"`azure_service_principal`\" pulumi-lang-yaml=\"`azureServicePrincipal`\" pulumi-lang-java=\"`azureServicePrincipal`\"\u003e`azure_service_principal`\u003c/span\u003e optional configuration block to use service principal as credential details for Azure (Legacy):\n","secret":true}},"type":"object","required":["accessKeyId","accountId","secretAccessKey"]},"databricks:index/StorageCredentialDatabricksGcpServiceAccount:StorageCredentialDatabricksGcpServiceAccount":{"properties":{"credentialId":{"type":"string"},"email":{"type":"string","description":"The email of the GCP service account created, to be granted access to relevant buckets.\n\n\u003cspan pulumi-lang-nodejs=\"`cloudflareApiToken`\" pulumi-lang-dotnet=\"`CloudflareApiToken`\" pulumi-lang-go=\"`cloudflareApiToken`\" pulumi-lang-python=\"`cloudflare_api_token`\" pulumi-lang-yaml=\"`cloudflareApiToken`\" pulumi-lang-java=\"`cloudflareApiToken`\"\u003e`cloudflare_api_token`\u003c/span\u003e optional configuration block for using a Cloudflare API Token as credential details. This requires account admin access:\n"}},"type":"object","language":{"nodejs":{"requiredOutputs":["credentialId","email"]}}},"databricks:index/StorageCredentialGcpServiceAccountKey:StorageCredentialGcpServiceAccountKey":{"properties":{"email":{"type":"string","description":"The email of the GCP service account created, to be granted access to relevant buckets.\n\n\u003cspan pulumi-lang-nodejs=\"`cloudflareApiToken`\" pulumi-lang-dotnet=\"`CloudflareApiToken`\" pulumi-lang-go=\"`cloudflareApiToken`\" pulumi-lang-python=\"`cloudflare_api_token`\" pulumi-lang-yaml=\"`cloudflareApiToken`\" pulumi-lang-java=\"`cloudflareApiToken`\"\u003e`cloudflare_api_token`\u003c/span\u003e optional configuration block for using a Cloudflare API Token as credential details. This requires account admin access:\n"},"privateKey":{"type":"string","secret":true},"privateKeyId":{"type":"string"}},"type":"object","required":["email","privateKey","privateKeyId"]},"databricks:index/SystemSchemaProviderConfig:SystemSchemaProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/TableColumn:TableColumn":{"properties":{"comment":{"type":"string"},"name":{"type":"string"},"nullable":{"type":"boolean"},"partitionIndex":{"type":"integer"},"position":{"type":"integer"},"typeIntervalType":{"type":"string"},"typeJson":{"type":"string"},"typeName":{"type":"string"},"typePrecision":{"type":"integer"},"typeScale":{"type":"integer"},"typeText":{"type":"string"}},"type":"object","required":["name","position","typeName","typeText"]},"databricks:index/TableProviderConfig:TableProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/TagPolicyProviderConfig:TagPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/TagPolicyValue:TagPolicyValue":{"properties":{"name":{"type":"string"}},"type":"object","required":["name"]},"databricks:index/TokenProviderConfig:TokenProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/VectorSearchEndpointEndpointStatus:VectorSearchEndpointEndpointStatus":{"properties":{"message":{"type":"string","description":"Additional status message.\n"},"state":{"type":"string","description":"Current state of the endpoint. Currently following values are supported: `PROVISIONING`, `ONLINE`, and `OFFLINE`.\n"}},"type":"object"},"databricks:index/VectorSearchEndpointProviderConfig:VectorSearchEndpointProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/VectorSearchIndexDeltaSyncIndexSpec:VectorSearchIndexDeltaSyncIndexSpec":{"properties":{"embeddingSourceColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumn:VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumn"},"description":"array of objects representing columns that contain the embedding source.  Each entry consists of:\n","willReplaceOnChanges":true},"embeddingVectorColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexDeltaSyncIndexSpecEmbeddingVectorColumn:VectorSearchIndexDeltaSyncIndexSpecEmbeddingVectorColumn"},"description":"array of objects representing columns that contain the embedding vectors. Each entry consists of:\n","willReplaceOnChanges":true},"embeddingWritebackTable":{"type":"string","description":"Automatically sync the vector index contents and computed embeddings to the specified Delta table. The only supported table name is the index name with the suffix `_writeback_table`.\n","willReplaceOnChanges":true},"pipelineId":{"type":"string","description":"ID of the associated Delta Live Table pipeline.\n"},"pipelineType":{"type":"string","description":"Pipeline execution mode. 
Possible values are:\n* `TRIGGERED`: If the pipeline uses the triggered execution mode, the system stops processing after successfully refreshing the source table in the pipeline once, ensuring the table is updated based on the data available when the update started.\n* `CONTINUOUS`: If the pipeline uses continuous execution, the pipeline processes new data as it arrives in the source table to keep the vector index fresh.\n","willReplaceOnChanges":true},"sourceTable":{"type":"string","description":"The name of the source table.\n","willReplaceOnChanges":true}},"type":"object","language":{"nodejs":{"requiredOutputs":["pipelineId"]}}},"databricks:index/VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumn:VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumn":{"properties":{"embeddingModelEndpointName":{"type":"string","description":"The name of the embedding model endpoint, used by default for both ingestion and querying.\n","willReplaceOnChanges":true},"modelEndpointNameForQuery":{"type":"string","description":"The name of the embedding model endpoint which, if specified, is used for querying (not ingestion).\n","willReplaceOnChanges":true},"name":{"type":"string","description":"The name of the column\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/VectorSearchIndexDeltaSyncIndexSpecEmbeddingVectorColumn:VectorSearchIndexDeltaSyncIndexSpecEmbeddingVectorColumn":{"properties":{"embeddingDimension":{"type":"integer","description":"Dimension of the embedding vector.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"The name of the column.\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/VectorSearchIndexDirectAccessIndexSpec:VectorSearchIndexDirectAccessIndexSpec":{"properties":{"embeddingSourceColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexDirectAccessIndexSpecEmbeddingSourceColumn:VectorSearchIndexDirectAccessIndexSpecEmbeddingSourceColumn"},"description":"array of objects representing columns that contain the embedding source.  Each entry consists of:\n","willReplaceOnChanges":true},"embeddingVectorColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexDirectAccessIndexSpecEmbeddingVectorColumn:VectorSearchIndexDirectAccessIndexSpecEmbeddingVectorColumn"},"description":"array of objects representing columns that contain the embedding vectors. Each entry consists of:\n","willReplaceOnChanges":true},"schemaJson":{"type":"string","description":"The schema of the index in JSON format.  
Check the [API documentation](https://docs.databricks.com/api/workspace/vectorsearchindexes/createindex#direct_access_index_spec-schema_json) for a list of supported data types.\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/VectorSearchIndexDirectAccessIndexSpecEmbeddingSourceColumn:VectorSearchIndexDirectAccessIndexSpecEmbeddingSourceColumn":{"properties":{"embeddingModelEndpointName":{"type":"string","description":"The name of the embedding model endpoint\n","willReplaceOnChanges":true},"modelEndpointNameForQuery":{"type":"string","description":"The name of the embedding model endpoint which, if specified, is used for querying (not ingestion).\n","willReplaceOnChanges":true},"name":{"type":"string","description":"The name of the column\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/VectorSearchIndexDirectAccessIndexSpecEmbeddingVectorColumn:VectorSearchIndexDirectAccessIndexSpecEmbeddingVectorColumn":{"properties":{"embeddingDimension":{"type":"integer","description":"Dimension of the embedding vector.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"The name of the column.\n","willReplaceOnChanges":true}},"type":"object"},"databricks:index/VectorSearchIndexProviderConfig:VectorSearchIndexProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/VectorSearchIndexStatus:VectorSearchIndexStatus":{"properties":{"indexUrl":{"type":"string","description":"Index API Url to be used to perform operations on the index\n"},"indexedRowCount":{"type":"integer","description":"Number of rows indexed\n"},"message":{"type":"string","description":"Message associated with the index status\n"},"ready":{"type":"boolean","description":"Whether the index is ready for search\n"}},"type":"object"},"databricks:index/VolumeProviderConfig:VolumeProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WarehousesDefaultWarehouseOverrideProviderConfig:WarehousesDefaultWarehouseOverrideProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceBindingProviderConfig:WorkspaceBindingProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n","willReplaceOnChanges":true}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceConfProviderConfig:WorkspaceConfProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceEntityTagAssignmentProviderConfig:WorkspaceEntityTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceFileProviderConfig:WorkspaceFileProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"]},"databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspace:WorkspaceSettingV2AutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean"},"enabled":{"type":"boolean"},"enablementDetails":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails"},"maintenanceWindow":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean"}},"type":"object"},"databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"The feature is unavailable if the customer doesn't have enterprise tier\n"}},"type":"object"},"databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule"}},"type":"object"},"databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, 
`THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime"}},"type":"object"},"databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:WorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer"},"minutes":{"type":"integer"}},"type":"object"},"databricks:index/WorkspaceSettingV2BooleanVal:WorkspaceSettingV2BooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"]},"databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean"},"enabled":{"type":"boolean"},"enablementDetails":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails"},"maintenanceWindow":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"The feature is unavailable if the customer doesn't have enterprise tier\n"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, 
`WEDNESDAY`\n"},"frequency":{"type":"string","description":"Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, `THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer"},"minutes":{"type":"integer"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveBooleanVal:WorkspaceSettingV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveIntegerVal:WorkspaceSettingV2EffectiveIntegerVal":{"properties":{"value":{"type":"integer"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectivePersonalCompute:WorkspaceSettingV2EffectivePersonalCompute":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"]},"databricks:index/WorkspaceSettingV2EffectiveStringVal:WorkspaceSettingV2EffectiveStringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/WorkspaceSettingV2IntegerVal:WorkspaceSettingV2IntegerVal":{"properties":{"value":{"type":"integer"}},"type":"object"},"databricks:index/WorkspaceSettingV2PersonalCompute:WorkspaceSettingV2PersonalCompute":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/WorkspaceSettingV2ProviderConfig:WorkspaceSettingV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/WorkspaceSettingV2RestrictWorkspaceAdmins:WorkspaceSettingV2RestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"]},"databricks:index/WorkspaceSettingV2StringVal:WorkspaceSettingV2StringVal":{"properties":{"value":{"type":"string"}},"type":"object"},"databricks:index/getAccountFederationPoliciesPolicy:getAccountFederationPoliciesPolicy":{"properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"(string) - Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. 
Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/getAccountFederationPoliciesPolicyOidcPolicy:getAccountFederationPoliciesPolicyOidcPolicy","description":"(OidcFederationPolicy)\n"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"type":"object","required":["createTime","description","name","oidcPolicy","policyId","servicePrincipalId","uid","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountFederationPoliciesPolicyOidcPolicy:getAccountFederationPoliciesPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"(list of string) - The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"(string) - The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"(string) - The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"(string) - URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. 
Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"(string) - The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"(string) - The claim that contains the subject of the token. If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/getAccountFederationPolicyOidcPolicy:getAccountFederationPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"(list of string) - The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"(string) - The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"(string) - The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"(string) - URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"(string) - The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"(string) - The claim that contains the subject of the token. 
If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/getAccountNetworkPoliciesItem:getAccountNetworkPoliciesItem":{"properties":{"accountId":{"type":"string","description":"(string) - The associated account ID for this Network Policy object\n"},"egress":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItemEgress:getAccountNetworkPoliciesItemEgress","description":"(NetworkPolicyEgress) - The network policies applying for egress traffic\n"},"networkPolicyId":{"type":"string","description":"(string) - The unique identifier for the network policy\n"}},"type":"object","required":["accountId","egress","networkPolicyId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountNetworkPoliciesItemEgress:getAccountNetworkPoliciesItemEgress":{"properties":{"networkAccess":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccess:getAccountNetworkPoliciesItemEgressNetworkAccess","description":"(EgressNetworkPolicyNetworkAccessPolicy) - The access policy enforced for egress traffic to the internet\n"}},"type":"object"},"databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccess:getAccountNetworkPoliciesItemEgressNetworkAccess":{"properties":{"allowedInternetDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessAllowedInternetDestination:getAccountNetworkPoliciesItemEgressNetworkAccessAllowedInternetDestination"},"description":"(list of EgressNetworkPolicyNetworkAccessPolicyInternetDestination) - List of internet destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"allowedStorageDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessAllowedStorageDestination:getAccountNetworkPoliciesItemEgressNetworkAccessAllowedStorageDestination"},"description":"(list of EgressNetworkPolicyNetworkAccessPolicyStorageDestination) - List of storage destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"policyEnforcement":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessPolicyEnforcement:getAccountNetworkPoliciesItemEgressNetworkAccessPolicyEnforcement","description":"(EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement) - Optional. When\u003cspan pulumi-lang-nodejs=\" policyEnforcement \" pulumi-lang-dotnet=\" PolicyEnforcement \" pulumi-lang-go=\" policyEnforcement \" pulumi-lang-python=\" policy_enforcement \" pulumi-lang-yaml=\" policyEnforcement \" pulumi-lang-java=\" policyEnforcement \"\u003e policy_enforcement \u003c/span\u003eis not provided, we default to ENFORCE_MODE_ALL_SERVICES\n"},"restrictionMode":{"type":"string","description":"(string) - The restriction mode that controls how serverless workloads can access the internet. Possible values are: `FULL_ACCESS`, `RESTRICTED_ACCESS`\n"}},"type":"object","required":["restrictionMode"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessAllowedInternetDestination:getAccountNetworkPoliciesItemEgressNetworkAccessAllowedInternetDestination":{"properties":{"destination":{"type":"string","description":"(string) - The internet destination to which access will be allowed. Format dependent on the destination type\n"},"internetDestinationType":{"type":"string","description":"(string) - The type of internet destination. Currently only DNS_NAME is supported. 
Possible values are: `DNS_NAME`\n"}},"type":"object"},"databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessAllowedStorageDestination:getAccountNetworkPoliciesItemEgressNetworkAccessAllowedStorageDestination":{"properties":{"azureStorageAccount":{"type":"string","description":"(string) - The Azure storage account name\n"},"azureStorageService":{"type":"string","description":"(string) - The Azure storage service type (blob, dfs, etc.)\n"},"bucketName":{"type":"string","description":"(string)\n"},"region":{"type":"string","description":"(string)\n"},"storageDestinationType":{"type":"string","description":"(string) - The type of storage destination. Possible values are: `AWS_S3`, `AZURE_STORAGE`, `GOOGLE_CLOUD_STORAGE`\n"}},"type":"object"},"databricks:index/getAccountNetworkPoliciesItemEgressNetworkAccessPolicyEnforcement:getAccountNetworkPoliciesItemEgressNetworkAccessPolicyEnforcement":{"properties":{"dryRunModeProductFilters":{"type":"array","items":{"type":"string"},"description":"(list of string) - When empty, it means dry run for all products.\nWhen non-empty, it means dry run for specific products and for the other products, they will run in enforced mode\n"},"enforcementMode":{"type":"string","description":"(string) - The mode of policy enforcement. ENFORCED blocks traffic that violates policy,\nwhile DRY_RUN only logs violations without blocking. When not specified,\ndefaults to ENFORCED. Possible values are: `DRY_RUN`, `ENFORCED`\n"}},"type":"object"},"databricks:index/getAccountNetworkPolicyEgress:getAccountNetworkPolicyEgress":{"properties":{"networkAccess":{"$ref":"#/types/databricks:index/getAccountNetworkPolicyEgressNetworkAccess:getAccountNetworkPolicyEgressNetworkAccess","description":"(EgressNetworkPolicyNetworkAccessPolicy) - The access policy enforced for egress traffic to the internet\n"}},"type":"object"},"databricks:index/getAccountNetworkPolicyEgressNetworkAccess:getAccountNetworkPolicyEgressNetworkAccess":{"properties":{"allowedInternetDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/getAccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination:getAccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination"},"description":"(list of EgressNetworkPolicyNetworkAccessPolicyInternetDestination) - List of internet destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"allowedStorageDestinations":{"type":"array","items":{"$ref":"#/types/databricks:index/getAccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination:getAccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination"},"description":"(list of EgressNetworkPolicyNetworkAccessPolicyStorageDestination) - List of storage destinations that serverless workloads are allowed to access when in RESTRICTED_ACCESS mode\n"},"policyEnforcement":{"$ref":"#/types/databricks:index/getAccountNetworkPolicyEgressNetworkAccessPolicyEnforcement:getAccountNetworkPolicyEgressNetworkAccessPolicyEnforcement","description":"(EgressNetworkPolicyNetworkAccessPolicyPolicyEnforcement) - Optional. 
When\u003cspan pulumi-lang-nodejs=\" policyEnforcement \" pulumi-lang-dotnet=\" PolicyEnforcement \" pulumi-lang-go=\" policyEnforcement \" pulumi-lang-python=\" policy_enforcement \" pulumi-lang-yaml=\" policyEnforcement \" pulumi-lang-java=\" policyEnforcement \"\u003e policy_enforcement \u003c/span\u003eis not provided, we default to ENFORCE_MODE_ALL_SERVICES\n"},"restrictionMode":{"type":"string","description":"(string) - The restriction mode that controls how serverless workloads can access the internet. Possible values are: `FULL_ACCESS`, `RESTRICTED_ACCESS`\n"}},"type":"object","required":["restrictionMode"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination:getAccountNetworkPolicyEgressNetworkAccessAllowedInternetDestination":{"properties":{"destination":{"type":"string","description":"(string) - The internet destination to which access will be allowed. Format dependent on the destination type\n"},"internetDestinationType":{"type":"string","description":"(string) - The type of internet destination. Currently only DNS_NAME is supported. Possible values are: `DNS_NAME`\n"}},"type":"object"},"databricks:index/getAccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination:getAccountNetworkPolicyEgressNetworkAccessAllowedStorageDestination":{"properties":{"azureStorageAccount":{"type":"string","description":"(string) - The Azure storage account name\n"},"azureStorageService":{"type":"string","description":"(string) - The Azure storage service type (blob, dfs, etc.)\n"},"bucketName":{"type":"string","description":"(string)\n"},"region":{"type":"string","description":"(string)\n"},"storageDestinationType":{"type":"string","description":"(string) - The type of storage destination. Possible values are: `AWS_S3`, `AZURE_STORAGE`, `GOOGLE_CLOUD_STORAGE`\n"}},"type":"object"},"databricks:index/getAccountNetworkPolicyEgressNetworkAccessPolicyEnforcement:getAccountNetworkPolicyEgressNetworkAccessPolicyEnforcement":{"properties":{"dryRunModeProductFilters":{"type":"array","items":{"type":"string"},"description":"(list of string) - When empty, it means dry run for all products.\nWhen non-empty, it means dry run for specific products and for the other products, they will run in enforced mode\n"},"enforcementMode":{"type":"string","description":"(string) - The mode of policy enforcement. ENFORCED blocks traffic that violates policy,\nwhile DRY_RUN only logs violations without blocking. When not specified,\ndefaults to ENFORCED. 
Possible values are: `DRY_RUN`, `ENFORCED`\n"}},"type":"object"},"databricks:index/getAccountSettingUserPreferenceV2BooleanVal:getAccountSettingUserPreferenceV2BooleanVal":{"properties":{"value":{"type":"boolean","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingUserPreferenceV2EffectiveBooleanVal:getAccountSettingUserPreferenceV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingUserPreferenceV2EffectiveStringVal:getAccountSettingUserPreferenceV2EffectiveStringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingUserPreferenceV2StringVal:getAccountSettingUserPreferenceV2StringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2AibiDashboardEmbeddingAccessPolicy:getAccountSettingV2AibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountSettingV2AibiDashboardEmbeddingApprovedDomains:getAccountSettingV2AibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"},"description":"(list of string)\n"}},"type":"object"},"databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspace:getAccountSettingV2AutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean","description":"(boolean)\n"},"enabled":{"type":"boolean","description":"(boolean)\n"},"enablementDetails":{"$ref":"#/types/databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:getAccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails","description":"(ClusterAutoRestartMessageEnablementDetails)\n"},"maintenanceWindow":{"$ref":"#/types/databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow","description":"(ClusterAutoRestartMessageMaintenanceWindow)\n"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean","description":"(boolean)\n"}},"type":"object"},"databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:getAccountSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"(boolean) - The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"(boolean) - The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"(boolean) - The feature is unavailable if the customer doesn't have enterprise 
tier\n"}},"type":"object"},"databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule","description":"(ClusterAutoRestartMessageMaintenanceWindowWeekDayBasedSchedule)\n"}},"type":"object"},"databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"(string) - Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"(string) - Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, `THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime","description":"(ClusterAutoRestartMessageMaintenanceWindowWindowStartTime)\n"}},"type":"object"},"databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getAccountSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer","description":"(integer)\n"},"minutes":{"type":"integer","description":"(integer)\n"}},"type":"object"},"databricks:index/getAccountSettingV2BooleanVal:getAccountSettingV2BooleanVal":{"properties":{"value":{"type":"boolean","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:getAccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:getAccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"},"description":"(list of 
string)\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean","description":"(boolean)\n"},"enabled":{"type":"boolean","description":"(boolean)\n"},"enablementDetails":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails","description":"(ClusterAutoRestartMessageEnablementDetails)\n"},"maintenanceWindow":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow","description":"(ClusterAutoRestartMessageMaintenanceWindow)\n"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean","description":"(boolean)\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"(boolean) - The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"(boolean) - The feature is unavailable if the corresponding entitlement disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"(boolean) - The feature is unavailable if the customer doesn't have enterprise tier\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule","description":"(ClusterAutoRestartMessageMaintenanceWindowWeekDayBasedSchedule)\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"(string) - Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"(string) - Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, 
`THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime","description":"(ClusterAutoRestartMessageMaintenanceWindowWindowStartTime)\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer","description":"(integer)\n"},"minutes":{"type":"integer","description":"(integer)\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveBooleanVal:getAccountSettingV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveIntegerVal:getAccountSettingV2EffectiveIntegerVal":{"properties":{"value":{"type":"integer","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectivePersonalCompute:getAccountSettingV2EffectivePersonalCompute":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2EffectiveRestrictWorkspaceAdmins:getAccountSettingV2EffectiveRestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountSettingV2EffectiveStringVal:getAccountSettingV2EffectiveStringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2IntegerVal:getAccountSettingV2IntegerVal":{"properties":{"value":{"type":"integer","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2PersonalCompute:getAccountSettingV2PersonalCompute":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAccountSettingV2RestrictWorkspaceAdmins:getAccountSettingV2RestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAccountSettingV2StringVal:getAccountSettingV2StringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getAlertV2EffectiveRunAs:getAlertV2EffectiveRunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"(string) - Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"(string) - The email of an active workspace user. 
Can only set this field to their own email\n"}},"type":"object"},"databricks:index/getAlertV2Evaluation:getAlertV2Evaluation":{"properties":{"comparisonOperator":{"type":"string","description":"(string) - Operator used for comparison in alert evaluation. Possible values are: `EQUAL`, `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `IS_NOT_NULL`, `IS_NULL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `NOT_EQUAL`\n"},"emptyResultState":{"type":"string","description":"(string) - Alert state if result is empty. Please avoid setting this field to be `UNKNOWN` because `UNKNOWN` state is planned to be deprecated. Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"lastEvaluatedAt":{"type":"string","description":"(string) - Timestamp of the last evaluation\n"},"notification":{"$ref":"#/types/databricks:index/getAlertV2EvaluationNotification:getAlertV2EvaluationNotification","description":"(AlertV2Notification) - User or Notification Destination to notify when alert is triggered\n"},"source":{"$ref":"#/types/databricks:index/getAlertV2EvaluationSource:getAlertV2EvaluationSource","description":"(AlertV2OperandColumn) - Source column from result to use to evaluate alert\n"},"state":{"type":"string","description":"(string) - Latest state of alert evaluation. Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"threshold":{"$ref":"#/types/databricks:index/getAlertV2EvaluationThreshold:getAlertV2EvaluationThreshold","description":"(AlertV2Operand) - Threshold to user for alert evaluation, can be a column or a value\n"}},"type":"object","required":["comparisonOperator","lastEvaluatedAt","source","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertV2EvaluationNotification:getAlertV2EvaluationNotification":{"properties":{"effectiveNotifyOnOk":{"type":"boolean"},"effectiveRetriggerSeconds":{"type":"integer"},"notifyOnOk":{"type":"boolean","description":"(boolean) - Whether to notify alert subscribers when alert returns back to normal\n"},"retriggerSeconds":{"type":"integer","description":"(integer) - Number of seconds an alert waits after being triggered before it is allowed to send another notification.\nIf set to 0 or omitted, the alert will not send any further notifications after the first trigger\nSetting this value to 1 allows the alert to send a notification on every evaluation where the condition is met, effectively making it always retrigger for notification purposes\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getAlertV2EvaluationNotificationSubscription:getAlertV2EvaluationNotificationSubscription"},"description":"(list of AlertV2Subscription)\n"}},"type":"object","required":["effectiveNotifyOnOk","effectiveRetriggerSeconds"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertV2EvaluationNotificationSubscription:getAlertV2EvaluationNotificationSubscription":{"properties":{"destinationId":{"type":"string","description":"(string)\n"},"userEmail":{"type":"string","description":"(string)\n"}},"type":"object"},"databricks:index/getAlertV2EvaluationSource:getAlertV2EvaluationSource":{"properties":{"aggregation":{"type":"string","description":"(string) - If not set, the behavior is equivalent to using `First row` in the UI. 
Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string","description":"(string)\n"},"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertV2EvaluationThreshold:getAlertV2EvaluationThreshold":{"properties":{"column":{"$ref":"#/types/databricks:index/getAlertV2EvaluationThresholdColumn:getAlertV2EvaluationThresholdColumn","description":"(AlertV2OperandColumn)\n"},"value":{"$ref":"#/types/databricks:index/getAlertV2EvaluationThresholdValue:getAlertV2EvaluationThresholdValue","description":"(AlertV2OperandValue)\n"}},"type":"object"},"databricks:index/getAlertV2EvaluationThresholdColumn:getAlertV2EvaluationThresholdColumn":{"properties":{"aggregation":{"type":"string","description":"(string) - If not set, the behavior is equivalent to using `First row` in the UI. Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string","description":"(string)\n"},"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertV2EvaluationThresholdValue:getAlertV2EvaluationThresholdValue":{"properties":{"boolValue":{"type":"boolean","description":"(boolean)\n"},"doubleValue":{"type":"number","description":"(number)\n"},"stringValue":{"type":"string","description":"(string)\n"}},"type":"object"},"databricks:index/getAlertV2ProviderConfig:getAlertV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAlertV2RunAs:getAlertV2RunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"(string) - Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"(string) - The email of an active workspace user. Can only set this field to their own email\n"}},"type":"object"},"databricks:index/getAlertV2Schedule:getAlertV2Schedule":{"properties":{"pauseStatus":{"type":"string","description":"(string) - Indicate whether this schedule is paused or not. Possible values are: `PAUSED`, `UNPAUSED`\n"},"quartzCronSchedule":{"type":"string","description":"(string) - A cron expression using quartz syntax that specifies the schedule for this pipeline.\nShould use the quartz format described here: http://www.quartz-scheduler.org/documentation/quartz-2.1.7/tutorials/tutorial-lesson-06.html\n"},"timezoneId":{"type":"string","description":"(string) - A Java timezone id. 
The schedule will be resolved using this timezone.\nThis will be combined with the\u003cspan pulumi-lang-nodejs=\" quartzCronSchedule \" pulumi-lang-dotnet=\" QuartzCronSchedule \" pulumi-lang-go=\" quartzCronSchedule \" pulumi-lang-python=\" quartz_cron_schedule \" pulumi-lang-yaml=\" quartzCronSchedule \" pulumi-lang-java=\" quartzCronSchedule \"\u003e quartz_cron_schedule \u003c/span\u003eto determine the schedule.\nSee https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-conf-mgmt-set-timezone.html for details\n"}},"type":"object","required":["quartzCronSchedule","timezoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2Alert:getAlertsV2Alert":{"properties":{"createTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was created\n"},"customDescription":{"type":"string","description":"(string) - Custom description for the alert. support mustache template\n"},"customSummary":{"type":"string","description":"(string) - Custom summary for the alert. support mustache template\n"},"displayName":{"type":"string","description":"(string) - The display name of the alert\n"},"effectiveRunAs":{"$ref":"#/types/databricks:index/getAlertsV2AlertEffectiveRunAs:getAlertsV2AlertEffectiveRunAs","description":"(AlertV2RunAs) - The actual identity that will be used to execute the alert.\nThis is an output-only field that shows the resolved run-as identity after applying\npermissions and defaults\n"},"evaluation":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluation:getAlertsV2AlertEvaluation","description":"(AlertV2Evaluation)\n"},"id":{"type":"string","description":"(string) - UUID identifying the alert\n"},"lifecycleState":{"type":"string","description":"(string) - Indicates whether the query is trashed. Possible values are: `ACTIVE`, `DELETED`\n"},"ownerUserName":{"type":"string","description":"(string) - The owner's username. This field is set to \"Unavailable\" if the user has been deleted\n"},"parentPath":{"type":"string","description":"(string) - The workspace path of the folder containing the alert. Can only be set on create, and cannot be updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAlertsV2AlertProviderConfig:getAlertsV2AlertProviderConfig","description":"Configure the provider for management through account provider.\n"},"queryText":{"type":"string","description":"(string) - Text of the query to be run\n"},"runAs":{"$ref":"#/types/databricks:index/getAlertsV2AlertRunAs:getAlertsV2AlertRunAs","description":"(AlertV2RunAs) - Specifies the identity that will be used to run the alert.\nThis field allows you to configure alerts to run as a specific user or service principal.\n- For user identity: Set \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e to the email of an active workspace user. Users can only set this to their own email.\n- For service principal: Set \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e to the application ID. 
Requires the `servicePrincipal/user` role.\nIf not specified, the alert will run as the request user\n"},"runAsUserName":{"type":"string","description":"(string, deprecated) - The run as username or application ID of service principal.\nOn Create and Update, this field can be set to application ID of an active service principal. Setting this field requires the servicePrincipal/user role.\nDeprecated: Use \u003cspan pulumi-lang-nodejs=\"`runAs`\" pulumi-lang-dotnet=\"`RunAs`\" pulumi-lang-go=\"`runAs`\" pulumi-lang-python=\"`run_as`\" pulumi-lang-yaml=\"`runAs`\" pulumi-lang-java=\"`runAs`\"\u003e`run_as`\u003c/span\u003e field instead. This field will be removed in a future release\n"},"schedule":{"$ref":"#/types/databricks:index/getAlertsV2AlertSchedule:getAlertsV2AlertSchedule","description":"(CronSchedule)\n"},"updateTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was updated\n"},"warehouseId":{"type":"string","description":"(string) - ID of the SQL warehouse attached to the alert\n"}},"type":"object","required":["createTime","customDescription","customSummary","displayName","effectiveRunAs","evaluation","id","lifecycleState","ownerUserName","parentPath","queryText","runAs","runAsUserName","schedule","updateTime","warehouseId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertEffectiveRunAs:getAlertsV2AlertEffectiveRunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"(string) - Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"(string) - The email of an active workspace user. Can only set this field to their own email\n"}},"type":"object"},"databricks:index/getAlertsV2AlertEvaluation:getAlertsV2AlertEvaluation":{"properties":{"comparisonOperator":{"type":"string","description":"(string) - Operator used for comparison in alert evaluation. Possible values are: `EQUAL`, `GREATER_THAN`, `GREATER_THAN_OR_EQUAL`, `IS_NOT_NULL`, `IS_NULL`, `LESS_THAN`, `LESS_THAN_OR_EQUAL`, `NOT_EQUAL`\n"},"emptyResultState":{"type":"string","description":"(string) - Alert state if result is empty. Please avoid setting this field to be `UNKNOWN` because `UNKNOWN` state is planned to be deprecated. Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"lastEvaluatedAt":{"type":"string","description":"(string) - Timestamp of the last evaluation\n"},"notification":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationNotification:getAlertsV2AlertEvaluationNotification","description":"(AlertV2Notification) - User or Notification Destination to notify when alert is triggered\n"},"source":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationSource:getAlertsV2AlertEvaluationSource","description":"(AlertV2OperandColumn) - Source column from result to use to evaluate alert\n"},"state":{"type":"string","description":"(string) - Latest state of alert evaluation. 
Possible values are: `ERROR`, `OK`, `TRIGGERED`, `UNKNOWN`\n"},"threshold":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationThreshold:getAlertsV2AlertEvaluationThreshold","description":"(AlertV2Operand) - Threshold to user for alert evaluation, can be a column or a value\n"}},"type":"object","required":["comparisonOperator","lastEvaluatedAt","source","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertEvaluationNotification:getAlertsV2AlertEvaluationNotification":{"properties":{"effectiveNotifyOnOk":{"type":"boolean"},"effectiveRetriggerSeconds":{"type":"integer"},"notifyOnOk":{"type":"boolean","description":"(boolean) - Whether to notify alert subscribers when alert returns back to normal\n"},"retriggerSeconds":{"type":"integer","description":"(integer) - Number of seconds an alert waits after being triggered before it is allowed to send another notification.\nIf set to 0 or omitted, the alert will not send any further notifications after the first trigger\nSetting this value to 1 allows the alert to send a notification on every evaluation where the condition is met, effectively making it always retrigger for notification purposes\n"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationNotificationSubscription:getAlertsV2AlertEvaluationNotificationSubscription"},"description":"(list of AlertV2Subscription)\n"}},"type":"object","required":["effectiveNotifyOnOk","effectiveRetriggerSeconds"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertEvaluationNotificationSubscription:getAlertsV2AlertEvaluationNotificationSubscription":{"properties":{"destinationId":{"type":"string","description":"(string)\n"},"userEmail":{"type":"string","description":"(string)\n"}},"type":"object"},"databricks:index/getAlertsV2AlertEvaluationSource:getAlertsV2AlertEvaluationSource":{"properties":{"aggregation":{"type":"string","description":"(string) - If not set, the behavior is equivalent to using `First row` in the UI. Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string","description":"(string)\n"},"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertEvaluationThreshold:getAlertsV2AlertEvaluationThreshold":{"properties":{"column":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationThresholdColumn:getAlertsV2AlertEvaluationThresholdColumn","description":"(AlertV2OperandColumn)\n"},"value":{"$ref":"#/types/databricks:index/getAlertsV2AlertEvaluationThresholdValue:getAlertsV2AlertEvaluationThresholdValue","description":"(AlertV2OperandValue)\n"}},"type":"object"},"databricks:index/getAlertsV2AlertEvaluationThresholdColumn:getAlertsV2AlertEvaluationThresholdColumn":{"properties":{"aggregation":{"type":"string","description":"(string) - If not set, the behavior is equivalent to using `First row` in the UI. 
Possible values are: `AVG`, `COUNT`, `COUNT_DISTINCT`, `MAX`, `MEDIAN`, `MIN`, `STDDEV`, `SUM`\n"},"display":{"type":"string","description":"(string)\n"},"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertEvaluationThresholdValue:getAlertsV2AlertEvaluationThresholdValue":{"properties":{"boolValue":{"type":"boolean","description":"(boolean)\n"},"doubleValue":{"type":"number","description":"(number)\n"},"stringValue":{"type":"string","description":"(string)\n"}},"type":"object"},"databricks:index/getAlertsV2AlertProviderConfig:getAlertsV2AlertProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2AlertRunAs:getAlertsV2AlertRunAs":{"properties":{"servicePrincipalName":{"type":"string","description":"(string) - Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role\n"},"userName":{"type":"string","description":"(string) - The email of an active workspace user. Can only set this field to their own email\n"}},"type":"object"},"databricks:index/getAlertsV2AlertSchedule:getAlertsV2AlertSchedule":{"properties":{"pauseStatus":{"type":"string","description":"(string) - Indicate whether this schedule is paused or not. Possible values are: `PAUSED`, `UNPAUSED`\n"},"quartzCronSchedule":{"type":"string","description":"(string) - A cron expression using quartz syntax that specifies the schedule for this pipeline.\nShould use the quartz format described here: http://www.quartz-scheduler.org/documentation/quartz-2.1.7/tutorials/tutorial-lesson-06.html\n"},"timezoneId":{"type":"string","description":"(string) - A Java timezone id. The schedule will be resolved using this timezone.\nThis will be combined with the\u003cspan pulumi-lang-nodejs=\" quartzCronSchedule \" pulumi-lang-dotnet=\" QuartzCronSchedule \" pulumi-lang-go=\" quartzCronSchedule \" pulumi-lang-python=\" quartz_cron_schedule \" pulumi-lang-yaml=\" quartzCronSchedule \" pulumi-lang-java=\" quartzCronSchedule \"\u003e quartz_cron_schedule \u003c/span\u003eto determine the schedule.\nSee https://docs.databricks.com/sql/language-manual/sql-ref-syntax-aux-conf-mgmt-set-timezone.html for details\n"}},"type":"object","required":["quartzCronSchedule","timezoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAlertsV2ProviderConfig:getAlertsV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppApp:getAppApp":{"properties":{"activeDeployment":{"$ref":"#/types/databricks:index/getAppAppActiveDeployment:getAppAppActiveDeployment"},"appStatus":{"$ref":"#/types/databricks:index/getAppAppAppStatus:getAppAppAppStatus","description":"attribute\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"computeSize":{"type":"string","description":"(Optional) A string specifying compute size for the App.\n"},"computeStatus":{"$ref":"#/types/databricks:index/getAppAppComputeStatus:getAppAppComputeStatus","description":"attribute\n"},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"defaultSourceCodePath":{"type":"string","description":"The default workspace file system path of the source code from which app deployment are created. This field tracks the workspace source code path of the last active deployment.\n"},"description":{"type":"string","description":"The description of the resource.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"effectiveUsagePolicyId":{"type":"string"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of effective api scopes granted to the user access token.\n"},"gitRepository":{"$ref":"#/types/databricks:index/getAppAppGitRepository:getAppAppGitRepository"},"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"name":{"type":"string","description":"The name of the app.\n"},"oauth2AppClientId":{"type":"string"},"oauth2AppIntegrationId":{"type":"string"},"pendingDeployment":{"$ref":"#/types/databricks:index/getAppAppPendingDeployment:getAppAppPendingDeployment"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppAppResource:getAppAppResource"},"description":"A list of resources that the app have access to.\n"},"servicePrincipalClientId":{"type":"string","description":"client_id (application_id) of the app service principal\n"},"servicePrincipalId":{"type":"integer","description":"id of the app service principal\n"},"servicePrincipalName":{"type":"string","description":"name of the app service principal\n"},"space":{"type":"string"},"updateTime":{"type":"string","description":"The update time of the app.\n"},"updater":{"type":"string","description":"The email of the user that last updated the app.\n"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"},"usagePolicyId":{"type":"string"},"userApiScopes":{"type":"array","items":{"type":"string"}}},"type":"object","required":["activeDeployment","appStatus","computeStatus","createTime","creator","defaultSourceCodePath","effectiveBudgetPolicyId","effectiveUsagePolicyId","effectiveUserApiScopes","id","name","oauth2AppClientId","oauth2AppIntegrationId","pendingDeployment","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","updateTime","updater","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppActiveDeployment:getAppAppActiveDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the 
app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/getAppAppActiveDeploymentDeploymentArtifacts:getAppAppActiveDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppAppActiveDeploymentEnvVar:getAppAppActiveDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/getAppAppActiveDeploymentGitSource:getAppAppActiveDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/getAppAppActiveDeploymentStatus:getAppAppActiveDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the app.\n"}},"type":"object","required":["createTime","creator","deploymentArtifacts","status","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppActiveDeploymentDeploymentArtifacts:getAppAppActiveDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/getAppAppActiveDeploymentEnvVar:getAppAppActiveDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of the app.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/getAppAppActiveDeploymentGitSource:getAppAppActiveDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/getAppAppActiveDeploymentGitSourceGitRepository:getAppAppActiveDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","required":["gitRepository","resolvedCommit"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppActiveDeploymentGitSourceGitRepository:getAppAppActiveDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppActiveDeploymentStatus:getAppAppActiveDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppAppStatus:getAppAppAppStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppComputeStatus:getAppAppComputeStatus":{"properties":{"activeInstances":{"type":"integer"},"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["activeInstances","message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppGitRepository:getAppAppGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is 
deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppPendingDeployment:getAppAppPendingDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/getAppAppPendingDeploymentDeploymentArtifacts:getAppAppPendingDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppAppPendingDeploymentEnvVar:getAppAppPendingDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/getAppAppPendingDeploymentGitSource:getAppAppPendingDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/getAppAppPendingDeploymentStatus:getAppAppPendingDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the app.\n"}},"type":"object","required":["createTime","creator","deploymentArtifacts","status","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppPendingDeploymentDeploymentArtifacts:getAppAppPendingDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/getAppAppPendingDeploymentEnvVar:getAppAppPendingDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of the app.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/getAppAppPendingDeploymentGitSource:getAppAppPendingDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/getAppAppPendingDeploymentGitSourceGitRepository:getAppAppPendingDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","required":["gitRepository","resolvedCommit"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppPendingDeploymentGitSourceGitRepository:getAppAppPendingDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppPendingDeploymentStatus:getAppAppPendingDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResource:getAppAppResource":{"properties":{"database":{"$ref":"#/types/databricks:index/getAppAppResourceDatabase:getAppAppResourceDatabase","description":"attribute\n"},"description":{"type":"string","description":"The description of the resource.\n"},"experiment":{"$ref":"#/types/databricks:index/getAppAppResourceExperiment:getAppAppResourceExperiment"},"genieSpace":{"$ref":"#/types/databricks:index/getAppAppResourceGenieSpace:getAppAppResourceGenieSpace","description":"attribute\n"},"job":{"$ref":"#/types/databricks:index/getAppAppResourceJob:getAppAppResourceJob","description":"attribute\n"},"name":{"type":"string","description":"The name of the 
app.\n"},"secret":{"$ref":"#/types/databricks:index/getAppAppResourceSecret:getAppAppResourceSecret","description":"attribute\n"},"servingEndpoint":{"$ref":"#/types/databricks:index/getAppAppResourceServingEndpoint:getAppAppResourceServingEndpoint","description":"attribute\n"},"sqlWarehouse":{"$ref":"#/types/databricks:index/getAppAppResourceSqlWarehouse:getAppAppResourceSqlWarehouse","description":"attribute\n"},"ucSecurable":{"$ref":"#/types/databricks:index/getAppAppResourceUcSecurable:getAppAppResourceUcSecurable","description":"attribute\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceDatabase:getAppAppResourceDatabase":{"properties":{"databaseName":{"type":"string","description":"The name of database.\n"},"instanceName":{"type":"string","description":"The name of database instance.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["databaseName","instanceName","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceExperiment:getAppAppResourceExperiment":{"properties":{"experimentId":{"type":"string"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["experimentId","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceGenieSpace:getAppAppResourceGenieSpace":{"properties":{"name":{"type":"string","description":"The name of the app.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"spaceId":{"type":"string","description":"The unique ID of Genie Space.\n"}},"type":"object","required":["name","permission","spaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceJob:getAppAppResourceJob":{"properties":{"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceSecret:getAppAppResourceSecret":{"properties":{"key":{"type":"string","description":"Key of the secret to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"scope":{"type":"string","description":"Scope of the secret to grant permission on.\n"}},"type":"object","required":["key","permission","scope"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceServingEndpoint:getAppAppResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"The name of the app.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["name","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceSqlWarehouse:getAppAppResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. 
Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppAppResourceUcSecurable:getAppAppResourceUcSecurable":{"properties":{"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"securableFullName":{"type":"string","description":"the full name of UC securable, i.e. `my-catalog.my-schema.my-volume`.\n"},"securableType":{"type":"string","description":"the type of UC securable, i.e. `VOLUME`.\n"}},"type":"object","required":["permission","securableFullName","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppProviderConfig:getAppProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsApp:getAppsApp":{"properties":{"activeDeployment":{"$ref":"#/types/databricks:index/getAppsAppActiveDeployment:getAppsAppActiveDeployment"},"appStatus":{"$ref":"#/types/databricks:index/getAppsAppAppStatus:getAppsAppAppStatus","description":"attribute\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"computeSize":{"type":"string","description":"(Optional) A string specifying compute size for the App.\n"},"computeStatus":{"$ref":"#/types/databricks:index/getAppsAppComputeStatus:getAppsAppComputeStatus","description":"attribute\n"},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"defaultSourceCodePath":{"type":"string","description":"The default workspace file system path of the source code from which app deployment are created. 
This field tracks the workspace source code path of the last active deployment.\n"},"description":{"type":"string","description":"The description of the resource.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"effectiveUsagePolicyId":{"type":"string"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of effective api scopes granted to the user access token.\n"},"gitRepository":{"$ref":"#/types/databricks:index/getAppsAppGitRepository:getAppsAppGitRepository"},"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"name":{"type":"string","description":"The name of Genie Space.\n"},"oauth2AppClientId":{"type":"string"},"oauth2AppIntegrationId":{"type":"string"},"pendingDeployment":{"$ref":"#/types/databricks:index/getAppsAppPendingDeployment:getAppsAppPendingDeployment"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsAppResource:getAppsAppResource"},"description":"A list of resources that the app have access to.\n"},"servicePrincipalClientId":{"type":"string","description":"client_id (application_id) of the app service principal\n"},"servicePrincipalId":{"type":"integer","description":"id of the app service principal\n"},"servicePrincipalName":{"type":"string","description":"name of the app service principal\n"},"space":{"type":"string"},"updateTime":{"type":"string","description":"The update time of the app.\n"},"updater":{"type":"string","description":"The email of the user that last updated the app.\n"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"},"usagePolicyId":{"type":"string"},"userApiScopes":{"type":"array","items":{"type":"string"}}},"type":"object","required":["activeDeployment","appStatus","computeStatus","createTime","creator","defaultSourceCodePath","effectiveBudgetPolicyId","effectiveUsagePolicyId","effectiveUserApiScopes","id","name","oauth2AppClientId","oauth2AppIntegrationId","pendingDeployment","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","updateTime","updater","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppActiveDeployment:getAppsAppActiveDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/getAppsAppActiveDeploymentDeploymentArtifacts:getAppsAppActiveDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsAppActiveDeploymentEnvVar:getAppsAppActiveDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/getAppsAppActiveDeploymentGitSource:getAppsAppActiveDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/getAppsAppActiveDeploymentStatus:getAppsAppActiveDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the 
app.\n"}},"type":"object","required":["createTime","creator","deploymentArtifacts","status","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppActiveDeploymentDeploymentArtifacts:getAppsAppActiveDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/getAppsAppActiveDeploymentEnvVar:getAppsAppActiveDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of Genie Space.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/getAppsAppActiveDeploymentGitSource:getAppsAppActiveDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/getAppsAppActiveDeploymentGitSourceGitRepository:getAppsAppActiveDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","required":["gitRepository","resolvedCommit"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppActiveDeploymentGitSourceGitRepository:getAppsAppActiveDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppActiveDeploymentStatus:getAppsAppActiveDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppAppStatus:getAppsAppAppStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppComputeStatus:getAppsAppComputeStatus":{"properties":{"activeInstances":{"type":"integer"},"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["activeInstances","message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppGitRepository:getAppsAppGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppPendingDeployment:getAppsAppPendingDeployment":{"properties":{"commands":{"type":"array","items":{"type":"string"}},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the 
app.\n"},"deploymentArtifacts":{"$ref":"#/types/databricks:index/getAppsAppPendingDeploymentDeploymentArtifacts:getAppsAppPendingDeploymentDeploymentArtifacts"},"deploymentId":{"type":"string"},"envVars":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsAppPendingDeploymentEnvVar:getAppsAppPendingDeploymentEnvVar"}},"gitSource":{"$ref":"#/types/databricks:index/getAppsAppPendingDeploymentGitSource:getAppsAppPendingDeploymentGitSource"},"mode":{"type":"string"},"sourceCodePath":{"type":"string"},"status":{"$ref":"#/types/databricks:index/getAppsAppPendingDeploymentStatus:getAppsAppPendingDeploymentStatus"},"updateTime":{"type":"string","description":"The update time of the app.\n"}},"type":"object","required":["createTime","creator","deploymentArtifacts","status","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppPendingDeploymentDeploymentArtifacts:getAppsAppPendingDeploymentDeploymentArtifacts":{"properties":{"sourceCodePath":{"type":"string"}},"type":"object"},"databricks:index/getAppsAppPendingDeploymentEnvVar:getAppsAppPendingDeploymentEnvVar":{"properties":{"name":{"type":"string","description":"The name of Genie Space.\n"},"value":{"type":"string"},"valueFrom":{"type":"string"}},"type":"object"},"databricks:index/getAppsAppPendingDeploymentGitSource:getAppsAppPendingDeploymentGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"gitRepository":{"$ref":"#/types/databricks:index/getAppsAppPendingDeploymentGitSourceGitRepository:getAppsAppPendingDeploymentGitSourceGitRepository"},"resolvedCommit":{"type":"string"},"sourceCodePath":{"type":"string"},"tag":{"type":"string"}},"type":"object","required":["gitRepository","resolvedCommit"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppPendingDeploymentGitSourceGitRepository:getAppsAppPendingDeploymentGitSourceGitRepository":{"properties":{"provider":{"type":"string"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"}},"type":"object","required":["provider","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppPendingDeploymentStatus:getAppsAppPendingDeploymentStatus":{"properties":{"message":{"type":"string","description":"Application status message\n"},"state":{"type":"string","description":"State of the application.\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResource:getAppsAppResource":{"properties":{"database":{"$ref":"#/types/databricks:index/getAppsAppResourceDatabase:getAppsAppResourceDatabase","description":"attribute\n"},"description":{"type":"string","description":"The description of the resource.\n"},"experiment":{"$ref":"#/types/databricks:index/getAppsAppResourceExperiment:getAppsAppResourceExperiment"},"genieSpace":{"$ref":"#/types/databricks:index/getAppsAppResourceGenieSpace:getAppsAppResourceGenieSpace","description":"attribute\n"},"job":{"$ref":"#/types/databricks:index/getAppsAppResourceJob:getAppsAppResourceJob","description":"attribute\n"},"name":{"type":"string","description":"The name of Genie 
Space.\n"},"secret":{"$ref":"#/types/databricks:index/getAppsAppResourceSecret:getAppsAppResourceSecret","description":"attribute\n"},"servingEndpoint":{"$ref":"#/types/databricks:index/getAppsAppResourceServingEndpoint:getAppsAppResourceServingEndpoint","description":"attribute\n"},"sqlWarehouse":{"$ref":"#/types/databricks:index/getAppsAppResourceSqlWarehouse:getAppsAppResourceSqlWarehouse","description":"attribute\n"},"ucSecurable":{"$ref":"#/types/databricks:index/getAppsAppResourceUcSecurable:getAppsAppResourceUcSecurable","description":"attribute\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceDatabase:getAppsAppResourceDatabase":{"properties":{"databaseName":{"type":"string","description":"The name of database.\n"},"instanceName":{"type":"string","description":"The name of database instance.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["databaseName","instanceName","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceExperiment:getAppsAppResourceExperiment":{"properties":{"experimentId":{"type":"string"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["experimentId","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceGenieSpace:getAppsAppResourceGenieSpace":{"properties":{"name":{"type":"string","description":"The name of Genie Space.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"spaceId":{"type":"string","description":"The unique ID of Genie Space.\n"}},"type":"object","required":["name","permission","spaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceJob:getAppsAppResourceJob":{"properties":{"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceSecret:getAppsAppResourceSecret":{"properties":{"key":{"type":"string","description":"Key of the secret to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"scope":{"type":"string","description":"Scope of the secret to grant permission on.\n"}},"type":"object","required":["key","permission","scope"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceServingEndpoint:getAppsAppResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"The name of Genie Space.\n"},"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["name","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceSqlWarehouse:getAppsAppResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"Id of the job to grant permission on.\n"},"permission":{"type":"string","description":"Permission to grant on database. 
Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsAppResourceUcSecurable:getAppsAppResourceUcSecurable":{"properties":{"permission":{"type":"string","description":"Permission to grant on database. Supported permissions are: `CAN_CONNECT_AND_CREATE`.\n"},"securableFullName":{"type":"string","description":"the full name of UC securable, i.e. `my-catalog.my-schema.my-volume`.\n"},"securableType":{"type":"string","description":"the type of UC securable, i.e. `VOLUME`.\n"}},"type":"object","required":["permission","securableFullName","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsProviderConfig:getAppsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsSettingsCustomTemplateManifest:getAppsSettingsCustomTemplateManifest":{"properties":{"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"name":{"type":"string","description":"The name of the template. It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"resourceSpecs":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpec:getAppsSettingsCustomTemplateManifestResourceSpec"},"description":"(list of AppManifestAppResourceSpec)\n"},"version":{"type":"integer","description":"(integer) - The manifest schema version, for now only 1 is allowed\n"}},"type":"object","required":["name","version"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpec:getAppsSettingsCustomTemplateManifestResourceSpec":{"properties":{"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"experimentSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecExperimentSpec:getAppsSettingsCustomTemplateManifestResourceSpecExperimentSpec","description":"(AppManifestAppResourceExperimentSpec)\n"},"jobSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecJobSpec:getAppsSettingsCustomTemplateManifestResourceSpecJobSpec","description":"(AppManifestAppResourceJobSpec)\n"},"name":{"type":"string","description":"The name of the template. 
It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"secretSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecSecretSpec:getAppsSettingsCustomTemplateManifestResourceSpecSecretSpec","description":"(AppManifestAppResourceSecretSpec)\n"},"servingEndpointSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec:getAppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec","description":"(AppManifestAppResourceServingEndpointSpec)\n"},"sqlWarehouseSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec:getAppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec","description":"(AppManifestAppResourceSqlWarehouseSpec)\n"},"ucSecurableSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec:getAppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec","description":"(AppManifestAppResourceUcSecurableSpec)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecExperimentSpec:getAppsSettingsCustomTemplateManifestResourceSpecExperimentSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecJobSpec:getAppsSettingsCustomTemplateManifestResourceSpecJobSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecSecretSpec:getAppsSettingsCustomTemplateManifestResourceSpecSecretSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec:getAppsSettingsCustomTemplateManifestResourceSpecServingEndpointSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec:getAppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec:getAppsSettingsCustomTemplateManifestResourceSpecUcSecurableSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, 
`WRITE_VOLUME`\n"},"securableType":{"type":"string","description":"(string) - Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, `VOLUME`\n"}},"type":"object","required":["permission","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplateProviderConfig:getAppsSettingsCustomTemplateProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsSettingsCustomTemplatesProviderConfig:getAppsSettingsCustomTemplatesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsSettingsCustomTemplatesTemplate:getAppsSettingsCustomTemplatesTemplate":{"properties":{"creator":{"type":"string","description":"(string)\n"},"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"gitProvider":{"type":"string","description":"(string) - The Git provider of the template\n"},"gitRepo":{"type":"string","description":"(string) - The Git repository URL that the template resides in\n"},"manifest":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifest:getAppsSettingsCustomTemplatesTemplateManifest","description":"(AppManifest) - The manifest of the template. It defines fields and default values when installing the template\n"},"name":{"type":"string","description":"(string) - Name of the App Resource\n"},"path":{"type":"string","description":"(string) - The path to the template within the Git repository\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateProviderConfig:getAppsSettingsCustomTemplatesTemplateProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["creator","description","gitProvider","gitRepo","manifest","name","path"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifest:getAppsSettingsCustomTemplatesTemplateManifest":{"properties":{"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"name":{"type":"string","description":"(string) - Name of the App Resource\n"},"resourceSpecs":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpec"},"description":"(list of AppManifestAppResourceSpec)\n"},"version":{"type":"integer","description":"(integer) - The manifest schema version, for now only 1 is allowed\n"}},"type":"object","required":["name","version"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpec":{"properties":{"description":{"type":"string","description":"(string) - Description of the App 
Resource\n"},"experimentSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecExperimentSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecExperimentSpec","description":"(AppManifestAppResourceExperimentSpec)\n"},"jobSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecJobSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecJobSpec","description":"(AppManifestAppResourceJobSpec)\n"},"name":{"type":"string","description":"(string) - Name of the App Resource\n"},"secretSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSecretSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSecretSpec","description":"(AppManifestAppResourceSecretSpec)\n"},"servingEndpointSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecServingEndpointSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecServingEndpointSpec","description":"(AppManifestAppResourceServingEndpointSpec)\n"},"sqlWarehouseSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSqlWarehouseSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSqlWarehouseSpec","description":"(AppManifestAppResourceSqlWarehouseSpec)\n"},"ucSecurableSpec":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecUcSecurableSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecUcSecurableSpec","description":"(AppManifestAppResourceUcSecurableSpec)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecExperimentSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecExperimentSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecJobSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecJobSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSecretSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSecretSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecServingEndpointSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecServingEndpointSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, 
`WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSqlWarehouseSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecSqlWarehouseSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateManifestResourceSpecUcSecurableSpec:getAppsSettingsCustomTemplatesTemplateManifestResourceSpecUcSecurableSpec":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `MANAGE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"securableType":{"type":"string","description":"(string) - Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, `VOLUME`\n"}},"type":"object","required":["permission","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSettingsCustomTemplatesTemplateProviderConfig:getAppsSettingsCustomTemplatesTemplateProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceProviderConfig:getAppsSpaceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsSpaceResource:getAppsSpaceResource":{"properties":{"database":{"$ref":"#/types/databricks:index/getAppsSpaceResourceDatabase:getAppsSpaceResourceDatabase","description":"(AppResourceDatabase)\n"},"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"experiment":{"$ref":"#/types/databricks:index/getAppsSpaceResourceExperiment:getAppsSpaceResourceExperiment","description":"(AppResourceExperiment)\n"},"genieSpace":{"$ref":"#/types/databricks:index/getAppsSpaceResourceGenieSpace:getAppsSpaceResourceGenieSpace","description":"(AppResourceGenieSpace)\n"},"job":{"$ref":"#/types/databricks:index/getAppsSpaceResourceJob:getAppsSpaceResourceJob","description":"(AppResourceJob)\n"},"name":{"type":"string","description":"The name of the app space. 
The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"secret":{"$ref":"#/types/databricks:index/getAppsSpaceResourceSecret:getAppsSpaceResourceSecret","description":"(AppResourceSecret)\n"},"servingEndpoint":{"$ref":"#/types/databricks:index/getAppsSpaceResourceServingEndpoint:getAppsSpaceResourceServingEndpoint","description":"(AppResourceServingEndpoint)\n"},"sqlWarehouse":{"$ref":"#/types/databricks:index/getAppsSpaceResourceSqlWarehouse:getAppsSpaceResourceSqlWarehouse","description":"(AppResourceSqlWarehouse)\n"},"ucSecurable":{"$ref":"#/types/databricks:index/getAppsSpaceResourceUcSecurable:getAppsSpaceResourceUcSecurable","description":"(AppResourceUcSecurable)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceDatabase:getAppsSpaceResourceDatabase":{"properties":{"databaseName":{"type":"string","description":"(string)\n"},"instanceName":{"type":"string","description":"(string)\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["databaseName","instanceName","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceExperiment:getAppsSpaceResourceExperiment":{"properties":{"experimentId":{"type":"string","description":"(string)\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["experimentId","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceGenieSpace:getAppsSpaceResourceGenieSpace":{"properties":{"name":{"type":"string","description":"The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"spaceId":{"type":"string","description":"(string)\n"}},"type":"object","required":["name","permission","spaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceJob:getAppsSpaceResourceJob":{"properties":{"id":{"type":"string","description":"(string) - Id of the SQL warehouse to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceSecret:getAppsSpaceResourceSecret":{"properties":{"key":{"type":"string","description":"(string) - Key of the secret to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"scope":{"type":"string","description":"(string) - Scope of the secret to grant permission on\n"}},"type":"object","required":["key","permission","scope"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceServingEndpoint:getAppsSpaceResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"The name of the app space. 
The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["name","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceSqlWarehouse:getAppsSpaceResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"(string) - Id of the SQL warehouse to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceResourceUcSecurable:getAppsSpaceResourceUcSecurable":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"securableFullName":{"type":"string","description":"(string)\n"},"securableType":{"type":"string","description":"(string) - Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, `VOLUME`\n"}},"type":"object","required":["permission","securableFullName","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpaceStatus:getAppsSpaceStatus":{"properties":{"message":{"type":"string","description":"(string) - Message providing context about the current state\n"},"state":{"type":"string","description":"(string) - The state of the app space. Possible values are: `SPACE_ACTIVE`, `SPACE_CREATING`, `SPACE_DELETED`, `SPACE_DELETING`, `SPACE_ERROR`, `SPACE_UPDATING`\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesProviderConfig:getAppsSpacesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getAppsSpacesSpace:getAppsSpacesSpace":{"properties":{"createTime":{"type":"string","description":"(string) - The creation time of the app space. 
Formatted timestamp in ISO 8601\n"},"creator":{"type":"string","description":"(string) - The email of the user that created the app space\n"},"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The effective usage policy ID used by apps in the space\n"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"(list of string) - The effective api scopes granted to the user access token\n"},"id":{"type":"string","description":"(string) - The id of the app space\n"},"name":{"type":"string","description":"(string) - The name of the app space\n"},"oauth2AppClientId":{"type":"string","description":"(string) - The OAuth2 app client ID for the app space\n"},"oauth2AppIntegrationId":{"type":"string","description":"(string) - The OAuth2 app integration ID for the app space\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceProviderConfig:getAppsSpacesSpaceProviderConfig","description":"Configure the provider for management through account provider.\n"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResource:getAppsSpacesSpaceResource"},"description":"(list of AppResource) - Resources for the app space. Resources configured at the space level are available to all apps in the space\n"},"servicePrincipalClientId":{"type":"string","description":"(string) - The service principal client ID for the app space\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID for the app space\n"},"servicePrincipalName":{"type":"string","description":"(string) - The service principal name for the app space\n"},"status":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceStatus:getAppsSpacesSpaceStatus","description":"(SpaceStatus) - The status of the app space\n"},"updateTime":{"type":"string","description":"(string) - The update time of the app space. Formatted timestamp in ISO 8601\n"},"updater":{"type":"string","description":"(string) - The email of the user that last updated the app space\n"},"usagePolicyId":{"type":"string","description":"(string) - The usage policy ID for managing cost at the space level\n"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"(list of string) - OAuth scopes for apps in the space\n"}},"type":"object","required":["createTime","creator","description","effectiveUsagePolicyId","effectiveUserApiScopes","id","name","oauth2AppClientId","oauth2AppIntegrationId","resources","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","status","updateTime","updater","usagePolicyId","userApiScopes"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceProviderConfig:getAppsSpacesSpaceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResource:getAppsSpacesSpaceResource":{"properties":{"database":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceDatabase:getAppsSpacesSpaceResourceDatabase","description":"(AppResourceDatabase)\n"},"description":{"type":"string","description":"(string) - Description of the App Resource\n"},"experiment":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceExperiment:getAppsSpacesSpaceResourceExperiment","description":"(AppResourceExperiment)\n"},"genieSpace":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceGenieSpace:getAppsSpacesSpaceResourceGenieSpace","description":"(AppResourceGenieSpace)\n"},"job":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceJob:getAppsSpacesSpaceResourceJob","description":"(AppResourceJob)\n"},"name":{"type":"string","description":"(string) - Name of the serving endpoint to grant permission on\n"},"secret":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceSecret:getAppsSpacesSpaceResourceSecret","description":"(AppResourceSecret)\n"},"servingEndpoint":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceServingEndpoint:getAppsSpacesSpaceResourceServingEndpoint","description":"(AppResourceServingEndpoint)\n"},"sqlWarehouse":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceSqlWarehouse:getAppsSpacesSpaceResourceSqlWarehouse","description":"(AppResourceSqlWarehouse)\n"},"ucSecurable":{"$ref":"#/types/databricks:index/getAppsSpacesSpaceResourceUcSecurable:getAppsSpacesSpaceResourceUcSecurable","description":"(AppResourceUcSecurable)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceDatabase:getAppsSpacesSpaceResourceDatabase":{"properties":{"databaseName":{"type":"string","description":"(string)\n"},"instanceName":{"type":"string","description":"(string)\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["databaseName","instanceName","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceExperiment:getAppsSpacesSpaceResourceExperiment":{"properties":{"experimentId":{"type":"string","description":"(string)\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["experimentId","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceGenieSpace:getAppsSpacesSpaceResourceGenieSpace":{"properties":{"name":{"type":"string","description":"(string) - Name of the serving endpoint to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"spaceId":{"type":"string","description":"(string)\n"}},"type":"object","required":["name","permission","spaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceJob:getAppsSpacesSpaceResourceJob":{"properties":{"id":{"type":"string","description":"(string) - Id of the SQL warehouse to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: 
`EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceSecret:getAppsSpacesSpaceResourceSecret":{"properties":{"key":{"type":"string","description":"(string) - Key of the secret to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"scope":{"type":"string","description":"(string) - Scope of the secret to grant permission on\n"}},"type":"object","required":["key","permission","scope"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceServingEndpoint:getAppsSpacesSpaceResourceServingEndpoint":{"properties":{"name":{"type":"string","description":"(string) - Name of the serving endpoint to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["name","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceSqlWarehouse:getAppsSpacesSpaceResourceSqlWarehouse":{"properties":{"id":{"type":"string","description":"(string) - Id of the SQL warehouse to grant permission on\n"},"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"}},"type":"object","required":["id","permission"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceResourceUcSecurable:getAppsSpacesSpaceResourceUcSecurable":{"properties":{"permission":{"type":"string","description":"(string) - Possible values are: `EXECUTE`, `READ_VOLUME`, `SELECT`, `USE_CONNECTION`, `WRITE_VOLUME`\n"},"securableFullName":{"type":"string","description":"(string)\n"},"securableType":{"type":"string","description":"(string) - Possible values are: `CONNECTION`, `FUNCTION`, `TABLE`, `VOLUME`\n"}},"type":"object","required":["permission","securableFullName","securableType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getAppsSpacesSpaceStatus:getAppsSpacesSpaceStatus":{"properties":{"message":{"type":"string","description":"(string) - Message providing context about the current state\n"},"state":{"type":"string","description":"(string) - The state of the app space. 
Possible values are: `SPACE_ACTIVE`, `SPACE_CREATING`, `SPACE_DELETED`, `SPACE_DELETING`, `SPACE_ERROR`, `SPACE_UPDATING`\n"}},"type":"object","required":["message","state"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getBudgetPoliciesFilterBy:getBudgetPoliciesFilterBy":{"properties":{"creatorUserId":{"type":"integer","description":"The policy creator user id to be filtered on.\nIf unspecified, all policies will be returned\n"},"creatorUserName":{"type":"string","description":"The policy creator user name to be filtered on.\nIf unspecified, all policies will be returned\n"},"policyName":{"type":"string","description":"(string) - The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n"}},"type":"object"},"databricks:index/getBudgetPoliciesPolicy:getBudgetPoliciesPolicy":{"properties":{"bindingWorkspaceIds":{"type":"array","items":{"type":"integer"},"description":"(list of integer) - List of workspaces that this budget policy will be exclusively bound to.\nAn empty binding implies that this budget policy is open to any workspace in the account\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getBudgetPoliciesPolicyCustomTag:getBudgetPoliciesPolicyCustomTag"},"description":"(list of CustomPolicyTag) - A list of tags defined by the customer. At most 20 entries are allowed per policy\n"},"policyId":{"type":"string","description":"(string) - The Id of the policy. This field is generated by Databricks and globally unique\n"},"policyName":{"type":"string","description":"(string) - The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n"}},"type":"object","required":["bindingWorkspaceIds","customTags","policyId","policyName"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getBudgetPoliciesPolicyCustomTag:getBudgetPoliciesPolicyCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the tag.\n- Must be unique among all custom tags of the same policy\n- Cannot be “budget-policy-name”, “budget-policy-id” or \"budget-policy-resolution-result\" -\nthese tags are preserved\n"},"value":{"type":"string","description":"(string) - The value of the tag\n"}},"type":"object","required":["key"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getBudgetPoliciesSortSpec:getBudgetPoliciesSortSpec":{"properties":{"descending":{"type":"boolean","description":"Whether to sort in descending order\n"},"field":{"type":"string","description":"The field to sort by. Possible values are: `POLICY_NAME`\n"}},"type":"object"},"databricks:index/getBudgetPolicyCustomTag:getBudgetPolicyCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the tag.\n- Must be unique among all custom tags of the same policy\n- Cannot be “budget-policy-name”, “budget-policy-id” or \"budget-policy-resolution-result\" -\nthese tags are preserved\n"},"value":{"type":"string","description":"(string) - The value of the tag\n"}},"type":"object","required":["key"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getCatalogCatalogInfo:getCatalogCatalogInfo":{"properties":{"browseOnly":{"type":"boolean"},"catalogType":{"type":"string","description":"Type of the catalog, e.g. 
`MANAGED_CATALOG`, `DELTASHARING_CATALOG`, `SYSTEM_CATALOG`,\n"},"comment":{"type":"string","description":"Free-form text description\n"},"connectionName":{"type":"string","description":"The name of the connection to an external data source.\n"},"createdAt":{"type":"integer","description":"Time at which this catalog was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of catalog creator.\n"},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/getCatalogCatalogInfoEffectivePredictiveOptimizationFlag:getCatalogCatalogInfoEffectivePredictiveOptimizationFlag","description":"object describing applied predictive optimization flag.\n"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it.\n"},"fullName":{"type":"string","description":"The full name of the catalog. Corresponds with the name field.\n"},"isolationMode":{"type":"string","description":"Whether the current securable is accessible from all workspaces or a  specific set of workspaces.\n"},"metastoreId":{"type":"string","description":"Unique identifier of parent metastore.\n"},"name":{"type":"string","description":"name of the catalog\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of key-value properties attached to the securable.\n"},"owner":{"type":"string","description":"Current owner of the catalog\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of key-value properties attached to the securable.\n"},"providerName":{"type":"string","description":"The name of delta sharing provider.\n"},"provisioningInfo":{"$ref":"#/types/databricks:index/getCatalogCatalogInfoProvisioningInfo:getCatalogCatalogInfoProvisioningInfo"},"securableType":{"type":"string","description":"Securable type.\n"},"shareName":{"type":"string","description":"The name of the share under the share provider.\n"},"storageLocation":{"type":"string","description":"Storage Location URL (full path) for managed tables within catalog.\n"},"storageRoot":{"type":"string","description":"Storage root URL for managed tables within catalog.\n"},"updatedAt":{"type":"integer","description":"Time at which this catalog was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified catalog.\n"}},"type":"object"},"databricks:index/getCatalogCatalogInfoEffectivePredictiveOptimizationFlag:getCatalogCatalogInfoEffectivePredictiveOptimizationFlag":{"properties":{"inheritedFromName":{"type":"string"},"inheritedFromType":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["value"]},"databricks:index/getCatalogCatalogInfoProvisioningInfo:getCatalogCatalogInfoProvisioningInfo":{"properties":{"state":{"type":"string"}},"type":"object"},"databricks:index/getCatalogProviderConfig:getCatalogProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getCatalogsProviderConfig:getCatalogsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getClusterClusterInfo:getClusterClusterInfo":{"properties":{"autoscale":{"$ref":"#/types/databricks:index/getClusterClusterInfoAutoscale:getClusterClusterInfoAutoscale"},"autoterminationMinutes":{"type":"integer","description":"Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination.\n"},"awsAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoAwsAttributes:getClusterClusterInfoAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoAzureAttributes:getClusterClusterInfoAzureAttributes"},"clusterCores":{"type":"number"},"clusterId":{"type":"string","description":"The id of the cluster.\n"},"clusterLogConf":{"$ref":"#/types/databricks:index/getClusterClusterInfoClusterLogConf:getClusterClusterInfoClusterLogConf"},"clusterLogStatus":{"$ref":"#/types/databricks:index/getClusterClusterInfoClusterLogStatus:getClusterClusterInfoClusterLogStatus"},"clusterMemoryMb":{"type":"integer"},"clusterName":{"type":"string","description":"The exact name of the cluster to search. Can only be specified if there is exactly one cluster with the provided name.\n"},"clusterSource":{"type":"string"},"creatorUserName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"Additional tags for cluster resources.\n"},"dataSecurityMode":{"type":"string","description":"Security features of the cluster. Unity Catalog requires `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. Default to `NONE`, i.e. 
no security feature enabled.\n"},"defaultTags":{"type":"object","additionalProperties":{"type":"string"}},"dockerImage":{"$ref":"#/types/databricks:index/getClusterClusterInfoDockerImage:getClusterClusterInfoDockerImage"},"driver":{"$ref":"#/types/databricks:index/getClusterClusterInfoDriver:getClusterClusterInfoDriver"},"driverInstancePoolId":{"type":"string","description":"similar to \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e, but for driver node.\n"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/getClusterClusterInfoDriverNodeTypeFlexibility:getClusterClusterInfoDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string","description":"The node type of the Spark driver.\n"},"enableElasticDisk":{"type":"boolean","description":"Use autoscaling local storage.\n"},"enableLocalDiskEncryption":{"type":"boolean","description":"Enable local disk encryption.\n"},"executors":{"type":"array","items":{"$ref":"#/types/databricks:index/getClusterClusterInfoExecutor:getClusterClusterInfoExecutor"}},"gcpAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoGcpAttributes:getClusterClusterInfoGcpAttributes"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScript:getClusterClusterInfoInitScript"}},"instancePoolId":{"type":"string","description":"The pool of idle instances the cluster is attached to.\n"},"isSingleNode":{"type":"boolean"},"jdbcPort":{"type":"integer"},"kind":{"type":"string"},"lastRestartedTime":{"type":"integer"},"lastStateLossTime":{"type":"integer"},"nodeTypeId":{"type":"string","description":"Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003eid.\n"},"numWorkers":{"type":"integer"},"policyId":{"type":"string","description":"Identifier of Cluster Policy to validate cluster and preset certain defaults.\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string","description":"The type of runtime of the cluster\n"},"singleUserName":{"type":"string","description":"The optional user name of the user to assign to an interactive cluster. This field is required when using standard AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).\n"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with key-value pairs to fine-tune Spark clusters.\n"},"sparkContextId":{"type":"integer"},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with environment variable key-value pairs to fine-tune Spark clusters. 
Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.\n"},"sparkVersion":{"type":"string","description":"[Runtime version](https://docs.databricks.com/runtime/index.html) of the cluster.\n"},"spec":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpec:getClusterClusterInfoSpec"},"sshPublicKeys":{"type":"array","items":{"type":"string"},"description":"SSH public key contents that will be added to each Spark node in this cluster.\n"},"startTime":{"type":"integer"},"state":{"type":"string"},"stateMessage":{"type":"string"},"terminatedTime":{"type":"integer"},"terminationReason":{"$ref":"#/types/databricks:index/getClusterClusterInfoTerminationReason:getClusterClusterInfoTerminationReason"},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/getClusterClusterInfoWorkerNodeTypeFlexibility:getClusterClusterInfoWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/getClusterClusterInfoWorkloadType:getClusterClusterInfoWorkloadType"}},"type":"object"},"databricks:index/getClusterClusterInfoAutoscale:getClusterClusterInfoAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/getClusterClusterInfoAwsAttributes:getClusterClusterInfoAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoAzureAttributes:getClusterClusterInfoAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/getClusterClusterInfoAzureAttributesLogAnalyticsInfo:getClusterClusterInfoAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getClusterClusterInfoAzureAttributesLogAnalyticsInfo:getClusterClusterInfoAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoClusterLogConf:getClusterClusterInfoClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getClusterClusterInfoClusterLogConfDbfs:getClusterClusterInfoClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getClusterClusterInfoClusterLogConfS3:getClusterClusterInfoClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/getClusterClusterInfoClusterLogConfVolumes:getClusterClusterInfoClusterLogConfVolumes"}},"type":"object"},"databricks:index/getClusterClusterInfoClusterLogConfDbfs:getClusterClusterInfoClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoClusterLogConfS3:getClusterClusterInfoClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoClusterLogConfVolumes:getClusterCluste
rInfoClusterLogConfVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoClusterLogStatus:getClusterClusterInfoClusterLogStatus":{"properties":{"lastAttempted":{"type":"integer"},"lastException":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoDockerImage:getClusterClusterInfoDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getClusterClusterInfoDockerImageBasicAuth:getClusterClusterInfoDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoDockerImageBasicAuth:getClusterClusterInfoDockerImageBasicAuth":{"properties":{"password":{"type":"string"},"username":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoDriver:getClusterClusterInfoDriver":{"properties":{"hostPrivateIp":{"type":"string"},"instanceId":{"type":"string"},"nodeAwsAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoDriverNodeAwsAttributes:getClusterClusterInfoDriverNodeAwsAttributes"},"nodeId":{"type":"string"},"privateIp":{"type":"string"},"publicDns":{"type":"string"},"startTimestamp":{"type":"integer"}},"type":"object"},"databricks:index/getClusterClusterInfoDriverNodeAwsAttributes:getClusterClusterInfoDriverNodeAwsAttributes":{"properties":{"isSpot":{"type":"boolean"}},"type":"object"},"databricks:index/getClusterClusterInfoDriverNodeTypeFlexibility:getClusterClusterInfoDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getClusterClusterInfoExecutor:getClusterClusterInfoExecutor":{"properties":{"hostPrivateIp":{"type":"string"},"instanceId":{"type":"string"},"nodeAwsAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoExecutorNodeAwsAttributes:getClusterClusterInfoExecutorNodeAwsAttributes"},"nodeId":{"type":"string"},"privateIp":{"type":"string"},"publicDns":{"type":"string"},"startTimestamp":{"type":"integer"}},"type":"object"},"databricks:index/getClusterClusterInfoExecutorNodeAwsAttributes:getClusterClusterInfoExecutorNodeAwsAttributes":{"properties":{"isSpot":{"type":"boolean"}},"type":"object"},"databricks:index/getClusterClusterInfoGcpAttributes:getClusterClusterInfoGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoInitScript:getClusterClusterInfoInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptAbfss:getClusterClusterInfoInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptDbfs:getClusterClusterInfoInitScriptDbfs"},"file":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptFile:getClusterClusterInfoInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptGcs:getClusterClusterInfoInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptS3:getClusterClusterInfoInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptVolumes:getClusterClusterInfoInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getClusterClusterInfoInitScriptWorkspace:getClusterClusterInfoInitScriptWorkspace"}},"type":"object"},"databricks
:index/getClusterClusterInfoInitScriptAbfss:getClusterClusterInfoInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptDbfs:getClusterClusterInfoInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptFile:getClusterClusterInfoInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptGcs:getClusterClusterInfoInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptS3:getClusterClusterInfoInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptVolumes:getClusterClusterInfoInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoInitScriptWorkspace:getClusterClusterInfoInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpec:getClusterClusterInfoSpec":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecAutoscale:getClusterClusterInfoSpecAutoscale"},"awsAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecAwsAttributes:getClusterClusterInfoSpecAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecAzureAttributes:getClusterClusterInfoSpecAzureAttributes"},"clusterId":{"type":"string","description":"The id of the cluster.\n"},"clusterLogConf":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterLogConf:getClusterClusterInfoSpecClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterMountInfo:getClusterClusterInfoSpecClusterMountInfo"}},"clusterName":{"type":"string","description":"The exact name of the cluster to search. Can only be specified if there is exactly one cluster with the provided name.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"Additional tags for cluster resources.\n"},"dataSecurityMode":{"type":"string","description":"Security features of the cluster. Unity Catalog requires `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. Default to `NONE`, i.e. 
no security feature enabled.\n"},"dockerImage":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecDockerImage:getClusterClusterInfoSpecDockerImage"},"driverInstancePoolId":{"type":"string","description":"similar to \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e, but for driver node.\n"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecDriverNodeTypeFlexibility:getClusterClusterInfoSpecDriverNodeTypeFlexibility"},"driverNodeTypeId":{"type":"string","description":"The node type of the Spark driver.\n"},"enableElasticDisk":{"type":"boolean","description":"Use autoscaling local storage.\n"},"enableLocalDiskEncryption":{"type":"boolean","description":"Enable local disk encryption.\n"},"gcpAttributes":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecGcpAttributes:getClusterClusterInfoSpecGcpAttributes"},"idempotencyToken":{"type":"string","description":"An optional token to guarantee the idempotency of cluster creation requests.\n"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScript:getClusterClusterInfoSpecInitScript"}},"instancePoolId":{"type":"string","description":"The pool of idle instances the cluster is attached to.\n"},"isSingleNode":{"type":"boolean"},"kind":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecLibrary:getClusterClusterInfoSpecLibrary"}},"nodeTypeId":{"type":"string","description":"Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003eid.\n"},"numWorkers":{"type":"integer"},"policyId":{"type":"string","description":"Identifier of Cluster Policy to validate cluster and preset certain defaults.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecProviderConfig:getClusterClusterInfoSpecProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string","description":"The type of runtime of the cluster\n"},"singleUserName":{"type":"string","description":"The optional user name of the user to assign to an interactive cluster. This field is required when using standard AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).\n"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with key-value pairs to fine-tune Spark clusters.\n"},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with environment variable key-value pairs to fine-tune Spark clusters. 
Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.\n"},"sparkVersion":{"type":"string","description":"[Runtime version](https://docs.databricks.com/runtime/index.html) of the cluster.\n"},"sshPublicKeys":{"type":"array","items":{"type":"string"},"description":"SSH public key contents that will be added to each Spark node in this cluster.\n"},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecWorkerNodeTypeFlexibility:getClusterClusterInfoSpecWorkerNodeTypeFlexibility"},"workloadType":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecWorkloadType:getClusterClusterInfoSpecWorkloadType"}},"type":"object","required":["clusterId","driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getClusterClusterInfoSpecAutoscale:getClusterClusterInfoSpecAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecAwsAttributes:getClusterClusterInfoSpecAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeIops":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeThroughput":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecAzureAttributes:getClusterClusterInfoSpecAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"logAnalyticsInfo":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecAzureAttributesLogAnalyticsInfo:getClusterClusterInfoSpecAzureAttributesLogAnalyticsInfo"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecAzureAttributesLogAnalyticsInfo:getClusterClusterInfoSpecAzureAttributesLogAnalyticsInfo":{"properties":{"logAnalyticsPrimaryKey":{"type":"string"},"logAnalyticsWorkspaceId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecClusterLogConf:getClusterClusterInfoSpecClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterLogConfDbfs:getClusterClusterInfoSpecClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterLogConfS3:getClusterClusterInfoSpecClusterLogConfS3"},"volumes":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterLogConfVolumes:getClusterClusterInfoSpecClusterLogConfVolumes"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecClusterLogConfDbfs:getClusterClusterInfoSpecClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecClusterLogConfS3:getClusterClusterInfoSpecClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecClusterLogConfVolumes:getClusterClusterInfoSpecClusterLogConfVolumes":{"properties":{"destination":{"type":"strin
g"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecClusterMountInfo:getClusterClusterInfoSpecClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecClusterMountInfoNetworkFilesystemInfo:getClusterClusterInfoSpecClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/getClusterClusterInfoSpecClusterMountInfoNetworkFilesystemInfo:getClusterClusterInfoSpecClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/getClusterClusterInfoSpecDockerImage:getClusterClusterInfoSpecDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecDockerImageBasicAuth:getClusterClusterInfoSpecDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getClusterClusterInfoSpecDockerImageBasicAuth:getClusterClusterInfoSpecDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getClusterClusterInfoSpecDriverNodeTypeFlexibility:getClusterClusterInfoSpecDriverNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getClusterClusterInfoSpecGcpAttributes:getClusterClusterInfoSpecGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"firstOnDemand":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecInitScript:getClusterClusterInfoSpecInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptAbfss:getClusterClusterInfoSpecInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptDbfs:getClusterClusterInfoSpecInitScriptDbfs","deprecationMessage":"For init scripts use 'volumes', 'workspace' or cloud storage location instead of 
'dbfs'."},"file":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptFile:getClusterClusterInfoSpecInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptGcs:getClusterClusterInfoSpecInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptS3:getClusterClusterInfoSpecInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptVolumes:getClusterClusterInfoSpecInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecInitScriptWorkspace:getClusterClusterInfoSpecInitScriptWorkspace"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecInitScriptAbfss:getClusterClusterInfoSpecInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptDbfs:getClusterClusterInfoSpecInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptFile:getClusterClusterInfoSpecInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptGcs:getClusterClusterInfoSpecInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptS3:getClusterClusterInfoSpecInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptVolumes:getClusterClusterInfoSpecInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecInitScriptWorkspace:getClusterClusterInfoSpecInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getClusterClusterInfoSpecLibrary:getClusterClusterInfoSpecLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecLibraryCran:getClusterClusterInfoSpecLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecLibraryMaven:getClusterClusterInfoSpecLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecLibraryProviderConfig:getClusterClusterInfoSpecLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecLibraryPypi:getClusterClusterInfoSpecLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoSpecLibraryCran:getClusterClusterInfoSpecLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getClusterClusterInfoSpecLibraryMaven:getClusterClusterInfoSpecLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/getClusterClusterInfoSpecLibraryProviderConfig:getClusterClusterInfoSpecLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getClusterClusterInfoSpecLibraryPypi:getClusterClusterInfoSpecLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getClusterClusterInfoSpecProviderConfig:getClusterClusterInfoSpecProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getClusterClusterInfoSpecWorkerNodeTypeFlexibility:getClusterClusterInfoSpecWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getClusterClusterInfoSpecWorkloadType:getClusterClusterInfoSpecWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getClusterClusterInfoSpecWorkloadTypeClients:getClusterClusterInfoSpecWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getClusterClusterInfoSpecWorkloadTypeClients:getClusterClusterInfoSpecWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getClusterClusterInfoTerminationReason:getClusterClusterInfoTerminationReason":{"properties":{"code":{"type":"string"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"type":{"type":"string"}},"type":"object"},"databricks:index/getClusterClusterInfoWorkerNodeTypeFlexibility:getClusterClusterInfoWorkerNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getClusterClusterInfoWorkloadType:getClusterClusterInfoWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getClusterClusterInfoWorkloadTypeClients:getClusterClusterInfoWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getClusterClusterInfoWorkloadTypeClients:getClusterClusterInfoWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getClusterPolicyProviderConfig:getClusterPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getClusterProviderConfig:getClusterProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getClustersFilterBy:getClustersFilterBy":{"properties":{"clusterSources":{"type":"array","items":{"type":"string"},"description":"List of cluster sources to filter by. Possible values are `API`, `JOB`, `MODELS`, `PIPELINE`, `PIPELINE_MAINTENANCE`, `SQL`, and `UI`.\n"},"clusterStates":{"type":"array","items":{"type":"string"},"description":"List of cluster states to filter by. Possible values are `RUNNING`, `PENDING`, `RESIZING`, `RESTARTING`, `TERMINATING`, `TERMINATED`, `ERROR`, and `UNKNOWN`.\n"},"isPinned":{"type":"boolean","description":"Whether to filter by pinned clusters.\n"},"policyId":{"type":"string","description":"Filter by\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eid.\n"}},"type":"object"},"databricks:index/getClustersProviderConfig:getClustersProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getCurrentConfigProviderConfig:getCurrentConfigProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getCurrentMetastoreMetastoreInfo:getCurrentMetastoreMetastoreInfo":{"properties":{"cloud":{"type":"string"},"createdAt":{"type":"integer","description":"Timestamp (in milliseconds) when the current metastore was created.\n"},"createdBy":{"type":"string","description":"the ID of the identity that created the current metastore.\n"},"defaultDataAccessConfigId":{"type":"string","description":"the ID of the default data access configuration.\n"},"deltaSharingOrganizationName":{"type":"string","description":"The organization name of a Delta Sharing entity. This field is used for Databricks to Databricks sharing.\n"},"deltaSharingRecipientTokenLifetimeInSeconds":{"type":"integer","description":"the expiration duration in seconds on recipient data access tokens.\n"},"deltaSharingScope":{"type":"string","description":"Used to enable delta sharing on the metastore. Valid values: INTERNAL, INTERNAL_AND_EXTERNAL. 
INTERNAL only allows sharing within the same account, and INTERNAL_AND_EXTERNAL allows cross account sharing and token based sharing.\n"},"externalAccessEnabled":{"type":"boolean"},"globalMetastoreId":{"type":"string","description":"Identifier in form of `\u003ccloud\u003e:\u003cregion\u003e:\u003cmetastore_id\u003e` for use in Databricks to Databricks Delta Sharing.\n"},"metastoreId":{"type":"string","description":"Metastore ID.\n"},"name":{"type":"string","description":"Name of metastore.\n"},"owner":{"type":"string","description":"Username/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the metastore owner.\n"},"privilegeModelVersion":{"type":"string","description":"the version of the privilege model used by the metastore.\n"},"region":{"type":"string","description":"(Mandatory for account-level) The region of the metastore.\n"},"storageRoot":{"type":"string","description":"Path on cloud storage account, where managed \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e are stored.\n"},"storageRootCredentialId":{"type":"string","description":"ID of a storage credential used for the \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e.\n"},"storageRootCredentialName":{"type":"string","description":"Name of a storage credential used for the \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e.\n"},"updatedAt":{"type":"integer","description":"Timestamp (in milliseconds) when the current metastore was updated.\n"},"updatedBy":{"type":"string","description":"the ID of the identity that updated the current metastore.\n"}},"type":"object"},"databricks:index/getCurrentMetastoreProviderConfig:getCurrentMetastoreProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getCurrentUserProviderConfig:getCurrentUserProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDashboardsDashboard:getDashboardsDashboard":{"properties":{"createTime":{"type":"string","description":"The timestamp of when the dashboard was created.\n"},"dashboardId":{"type":"string","description":"The unique ID of the dashboard.\n"},"displayName":{"type":"string","description":"The display name of the dashboard.\n"},"etag":{"type":"string"},"lifecycleState":{"type":"string"},"parentPath":{"type":"string"},"path":{"type":"string"},"serializedDashboard":{"type":"string"},"updateTime":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["createTime","dashboardId","etag","lifecycleState","parentPath","path","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDashboardsProviderConfig:getDashboardsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getDataQualityMonitorAnomalyDetectionConfig:getDataQualityMonitorAnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of fully qualified table names to exclude from anomaly detection\n"}},"type":"object"},"databricks:index/getDataQualityMonitorDataProfilingConfig:getDataQualityMonitorDataProfilingConfig":{"properties":{"assetsDir":{"type":"string","description":"(string) - Field for specifying the absolute path to a custom directory to store data-monitoring\nassets. Normally prepopulated to a default user location via UI and Python APIs\n"},"baselineTableName":{"type":"string","description":"(string) - Baseline table name.\nBaseline data is used to compute drift from the data in the monitored \u003cspan pulumi-lang-nodejs=\"`tableName`\" pulumi-lang-dotnet=\"`TableName`\" pulumi-lang-go=\"`tableName`\" pulumi-lang-python=\"`table_name`\" pulumi-lang-yaml=\"`tableName`\" pulumi-lang-java=\"`tableName`\"\u003e`table_name`\u003c/span\u003e.\nThe baseline table and the monitored table shall have the same schema\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigCustomMetric:getDataQualityMonitorDataProfilingConfigCustomMetric"},"description":"(list of DataProfilingCustomMetric) - Custom metrics\n"},"dashboardId":{"type":"string","description":"(string) - Id of dashboard that visualizes the computed metrics.\nThis can be empty if the monitor is in PENDING state\n"},"driftMetricsTableName":{"type":"string","description":"(string) - Table that stores drift metrics data. Format: `catalog.schema.table_name`\n"},"effectiveWarehouseId":{"type":"string","description":"(string) - The warehouse for dashboard creation\n"},"inferenceLog":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigInferenceLog:getDataQualityMonitorDataProfilingConfigInferenceLog","description":"(InferenceLogConfig) - `Analysis Configuration` for monitoring inference log tables\n"},"latestMonitorFailureMessage":{"type":"string","description":"(string) - The latest error message for a monitor failure\n"},"monitorVersion":{"type":"integer","description":"(integer) - Represents the current monitor configuration version in use. The version will be represented in a\nnumeric fashion (1,2,3...). 
The field has flexibility to take on negative values, which can indicate corrupted\u003cspan pulumi-lang-nodejs=\"\nmonitorVersion \" pulumi-lang-dotnet=\"\nMonitorVersion \" pulumi-lang-go=\"\nmonitorVersion \" pulumi-lang-python=\"\nmonitor_version \" pulumi-lang-yaml=\"\nmonitorVersion \" pulumi-lang-java=\"\nmonitorVersion \"\u003e\nmonitor_version \u003c/span\u003enumbers\n"},"monitoredTableName":{"type":"string","description":"(string) - Unity Catalog table to monitor. Format: `catalog.schema.table_name`\n"},"notificationSettings":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigNotificationSettings:getDataQualityMonitorDataProfilingConfigNotificationSettings","description":"(NotificationSettings) - Field for specifying notification settings\n"},"outputSchemaId":{"type":"string","description":"(string) - ID of the schema where output tables are created\n"},"profileMetricsTableName":{"type":"string","description":"(string) - Table that stores profile metrics data. Format: `catalog.schema.table_name`\n"},"schedule":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigSchedule:getDataQualityMonitorDataProfilingConfigSchedule","description":"(CronSchedule) - The cron schedule\n"},"skipBuiltinDashboard":{"type":"boolean","description":"(boolean) - Whether to skip creating a default dashboard summarizing data quality metrics\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column expressions to slice data with for targeted analysis. The data is grouped by\neach expression independently, resulting in a separate slice for each predicate and its\ncomplements. For example `slicing_exprs=[“col_1”, “col_2 \u003e 10”]` will generate the following\nslices: two slices for \u003cspan pulumi-lang-nodejs=\"`col2 \" pulumi-lang-dotnet=\"`Col2 \" pulumi-lang-go=\"`col2 \" pulumi-lang-python=\"`col_2 \" pulumi-lang-yaml=\"`col2 \" pulumi-lang-java=\"`col2 \"\u003e`col_2 \u003c/span\u003e\u003e 10` (True and False), and one slice per unique value in\n\u003cspan pulumi-lang-nodejs=\"`col1`\" pulumi-lang-dotnet=\"`Col1`\" pulumi-lang-go=\"`col1`\" pulumi-lang-python=\"`col1`\" pulumi-lang-yaml=\"`col1`\" pulumi-lang-java=\"`col1`\"\u003e`col1`\u003c/span\u003e. For high-cardinality columns, only the top 100 unique values by frequency will\ngenerate slices\n"},"snapshot":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigSnapshot:getDataQualityMonitorDataProfilingConfigSnapshot","description":"(SnapshotConfig) - `Analysis Configuration` for monitoring snapshot tables\n"},"status":{"type":"string","description":"(string) - The data profiling monitor status. Possible values are: `DATA_PROFILING_STATUS_ACTIVE`, `DATA_PROFILING_STATUS_DELETE_PENDING`, `DATA_PROFILING_STATUS_ERROR`, `DATA_PROFILING_STATUS_FAILED`, `DATA_PROFILING_STATUS_PENDING`\n"},"timeSeries":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigTimeSeries:getDataQualityMonitorDataProfilingConfigTimeSeries","description":"(TimeSeriesConfig) - `Analysis Configuration` for monitoring time series tables\n"},"warehouseId":{"type":"string","description":"(string) - Optional argument to specify the warehouse for dashboard creation. 
If not specified, the first running\nwarehouse will be used\n"}},"type":"object","required":["dashboardId","driftMetricsTableName","effectiveWarehouseId","latestMonitorFailureMessage","monitorVersion","monitoredTableName","outputSchemaId","profileMetricsTableName","status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorDataProfilingConfigCustomMetric:getDataQualityMonitorDataProfilingConfigCustomMetric":{"properties":{"definition":{"type":"string","description":"(string) - Jinja template for a SQL expression that specifies how to compute the metric. See [create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition)\n"},"inputColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - A list of column names in the input table the metric should be computed for.\nCan use ``\":table\"`` to indicate that the metric needs information from multiple columns\n"},"name":{"type":"string","description":"(string) - Name of the metric in the output tables\n"},"outputDataType":{"type":"string","description":"(string) - The output type of the custom metric\n"},"type":{"type":"string","description":"(string) - The type of the custom metric. Possible values are: `DATA_PROFILING_CUSTOM_METRIC_TYPE_AGGREGATE`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DERIVED`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DRIFT`\n"}},"type":"object","required":["definition","inputColumns","name","outputDataType","type"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorDataProfilingConfigInferenceLog:getDataQualityMonitorDataProfilingConfigInferenceLog":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of granularities to use when aggregating data into time windows based on their timestamp\n"},"labelColumn":{"type":"string","description":"(string) - Column for the label\n"},"modelIdColumn":{"type":"string","description":"(string) - Column for the model identifier\n"},"predictionColumn":{"type":"string","description":"(string) - Column for the prediction\n"},"problemType":{"type":"string","description":"(string) - Problem type the model aims to solve. Possible values are: `INFERENCE_PROBLEM_TYPE_CLASSIFICATION`, `INFERENCE_PROBLEM_TYPE_REGRESSION`\n"},"timestampColumn":{"type":"string","description":"(string) - Column for the timestamp\n"}},"type":"object","required":["granularities","modelIdColumn","predictionColumn","problemType","timestampColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorDataProfilingConfigNotificationSettings:getDataQualityMonitorDataProfilingConfigNotificationSettings":{"properties":{"onFailure":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure:getDataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure","description":"(NotificationDestination) - Destinations to send notifications on failure/timeout\n"}},"type":"object"},"databricks:index/getDataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure:getDataQualityMonitorDataProfilingConfigNotificationSettingsOnFailure":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"},"description":"(list of string) - The list of email addresses to send the notification to. 
A maximum of 5 email addresses is supported\n"}},"type":"object"},"databricks:index/getDataQualityMonitorDataProfilingConfigSchedule:getDataQualityMonitorDataProfilingConfigSchedule":{"properties":{"pauseStatus":{"type":"string","description":"(string) - Read only field that indicates whether the schedule is paused or not. Possible values are: `CRON_SCHEDULE_PAUSE_STATUS_PAUSED`, `CRON_SCHEDULE_PAUSE_STATUS_UNPAUSED`\n"},"quartzCronExpression":{"type":"string","description":"(string) - The expression that determines when to run the monitor. See [examples](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\n"},"timezoneId":{"type":"string","description":"(string) - A Java timezone id. The schedule for a job will be resolved with respect to this timezone.\nSee `Java TimeZone \u003chttp://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html\u003e`_ for details.\nThe timezone id (e.g., ``America/Los_Angeles``) in which to evaluate the quartz expression\n"}},"type":"object","required":["pauseStatus","quartzCronExpression","timezoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorDataProfilingConfigSnapshot:getDataQualityMonitorDataProfilingConfigSnapshot":{"type":"object"},"databricks:index/getDataQualityMonitorDataProfilingConfigTimeSeries:getDataQualityMonitorDataProfilingConfigTimeSeries":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of granularities to use when aggregating data into time windows based on their timestamp\n"},"timestampColumn":{"type":"string","description":"(string) - Column for the timestamp\n"}},"type":"object","required":["granularities","timestampColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorProviderConfig:getDataQualityMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDataQualityMonitorsMonitor:getDataQualityMonitorsMonitor":{"properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorAnomalyDetectionConfig:getDataQualityMonitorsMonitorAnomalyDetectionConfig","description":"(AnomalyDetectionConfig) - Anomaly Detection Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e object types\n"},"dataProfilingConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfig:getDataQualityMonitorsMonitorDataProfilingConfig","description":"(DataProfilingConfig) - Data Profiling Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e object types. Exactly one `Analysis Configuration`\nmust be present\n"},"objectId":{"type":"string","description":"(string) - The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n"},"objectType":{"type":"string","description":"(string) - The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorProviderConfig:getDataQualityMonitorsMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["anomalyDetectionConfig","dataProfilingConfig","objectId","objectType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorAnomalyDetectionConfig:getDataQualityMonitorsMonitorAnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of fully qualified table names to exclude from anomaly detection\n"}},"type":"object"},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfig:getDataQualityMonitorsMonitorDataProfilingConfig":{"properties":{"assetsDir":{"type":"string","description":"(string) - Field for specifying the absolute path to a custom directory to store data-monitoring\nassets. 
Normally prepopulated to a default user location via UI and Python APIs\n"},"baselineTableName":{"type":"string","description":"(string) - Baseline table name.\nBaseline data is used to compute drift from the data in the monitored \u003cspan pulumi-lang-nodejs=\"`tableName`\" pulumi-lang-dotnet=\"`TableName`\" pulumi-lang-go=\"`tableName`\" pulumi-lang-python=\"`table_name`\" pulumi-lang-yaml=\"`tableName`\" pulumi-lang-java=\"`tableName`\"\u003e`table_name`\u003c/span\u003e.\nThe baseline table and the monitored table shall have the same schema\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigCustomMetric:getDataQualityMonitorsMonitorDataProfilingConfigCustomMetric"},"description":"(list of DataProfilingCustomMetric) - Custom metrics\n"},"dashboardId":{"type":"string","description":"(string) - Id of dashboard that visualizes the computed metrics.\nThis can be empty if the monitor is in PENDING state\n"},"driftMetricsTableName":{"type":"string","description":"(string) - Table that stores drift metrics data. Format: `catalog.schema.table_name`\n"},"effectiveWarehouseId":{"type":"string","description":"(string) - The warehouse for dashboard creation\n"},"inferenceLog":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigInferenceLog:getDataQualityMonitorsMonitorDataProfilingConfigInferenceLog","description":"(InferenceLogConfig) - `Analysis Configuration` for monitoring inference log tables\n"},"latestMonitorFailureMessage":{"type":"string","description":"(string) - The latest error message for a monitor failure\n"},"monitorVersion":{"type":"integer","description":"(integer) - Represents the current monitor configuration version in use. The version will be represented in a\nnumeric fashion (1,2,3...). The field has flexibility to take on negative values, which can indicate corrupted\u003cspan pulumi-lang-nodejs=\"\nmonitorVersion \" pulumi-lang-dotnet=\"\nMonitorVersion \" pulumi-lang-go=\"\nmonitorVersion \" pulumi-lang-python=\"\nmonitor_version \" pulumi-lang-yaml=\"\nmonitorVersion \" pulumi-lang-java=\"\nmonitorVersion \"\u003e\nmonitor_version \u003c/span\u003enumbers\n"},"monitoredTableName":{"type":"string","description":"(string) - Unity Catalog table to monitor. Format: `catalog.schema.table_name`\n"},"notificationSettings":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettings:getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettings","description":"(NotificationSettings) - Field for specifying notification settings\n"},"outputSchemaId":{"type":"string","description":"(string) - ID of the schema where output tables are created\n"},"profileMetricsTableName":{"type":"string","description":"(string) - Table that stores profile metrics data. Format: `catalog.schema.table_name`\n"},"schedule":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigSchedule:getDataQualityMonitorsMonitorDataProfilingConfigSchedule","description":"(CronSchedule) - The cron schedule\n"},"skipBuiltinDashboard":{"type":"boolean","description":"(boolean) - Whether to skip creating a default dashboard summarizing data quality metrics\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column expressions to slice data with for targeted analysis. The data is grouped by\neach expression independently, resulting in a separate slice for each predicate and its\ncomplements. 
For example `slicing_exprs=[“col_1”, “col_2 \u003e 10”]` will generate the following\nslices: two slices for \u003cspan pulumi-lang-nodejs=\"`col2 \" pulumi-lang-dotnet=\"`Col2 \" pulumi-lang-go=\"`col2 \" pulumi-lang-python=\"`col_2 \" pulumi-lang-yaml=\"`col2 \" pulumi-lang-java=\"`col2 \"\u003e`col_2 \u003c/span\u003e\u003e 10` (True and False), and one slice per unique value in\n\u003cspan pulumi-lang-nodejs=\"`col1`\" pulumi-lang-dotnet=\"`Col1`\" pulumi-lang-go=\"`col1`\" pulumi-lang-python=\"`col1`\" pulumi-lang-yaml=\"`col1`\" pulumi-lang-java=\"`col1`\"\u003e`col1`\u003c/span\u003e. For high-cardinality columns, only the top 100 unique values by frequency will\ngenerate slices\n"},"snapshot":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigSnapshot:getDataQualityMonitorsMonitorDataProfilingConfigSnapshot","description":"(SnapshotConfig) - `Analysis Configuration` for monitoring snapshot tables\n"},"status":{"type":"string","description":"(string) - The data profiling monitor status. Possible values are: `DATA_PROFILING_STATUS_ACTIVE`, `DATA_PROFILING_STATUS_DELETE_PENDING`, `DATA_PROFILING_STATUS_ERROR`, `DATA_PROFILING_STATUS_FAILED`, `DATA_PROFILING_STATUS_PENDING`\n"},"timeSeries":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigTimeSeries:getDataQualityMonitorsMonitorDataProfilingConfigTimeSeries","description":"(TimeSeriesConfig) - `Analysis Configuration` for monitoring time series tables\n"},"warehouseId":{"type":"string","description":"(string) - Optional argument to specify the warehouse for dashboard creation. If not specified, the first running\nwarehouse will be used\n"}},"type":"object","required":["dashboardId","driftMetricsTableName","effectiveWarehouseId","latestMonitorFailureMessage","monitorVersion","monitoredTableName","outputSchemaId","profileMetricsTableName","status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigCustomMetric:getDataQualityMonitorsMonitorDataProfilingConfigCustomMetric":{"properties":{"definition":{"type":"string","description":"(string) - Jinja template for a SQL expression that specifies how to compute the metric. See [create metric definition](https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition)\n"},"inputColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - A list of column names in the input table the metric should be computed for.\nCan use ``\":table\"`` to indicate that the metric needs information from multiple columns\n"},"name":{"type":"string","description":"(string) - Name of the metric in the output tables\n"},"outputDataType":{"type":"string","description":"(string) - The output type of the custom metric\n"},"type":{"type":"string","description":"(string) - The type of the custom metric. 
Possible values are: `DATA_PROFILING_CUSTOM_METRIC_TYPE_AGGREGATE`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DERIVED`, `DATA_PROFILING_CUSTOM_METRIC_TYPE_DRIFT`\n"}},"type":"object","required":["definition","inputColumns","name","outputDataType","type"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigInferenceLog:getDataQualityMonitorsMonitorDataProfilingConfigInferenceLog":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of granularities to use when aggregating data into time windows based on their timestamp\n"},"labelColumn":{"type":"string","description":"(string) - Column for the label\n"},"modelIdColumn":{"type":"string","description":"(string) - Column for the model identifier\n"},"predictionColumn":{"type":"string","description":"(string) - Column for the prediction\n"},"problemType":{"type":"string","description":"(string) - Problem type the model aims to solve. Possible values are: `INFERENCE_PROBLEM_TYPE_CLASSIFICATION`, `INFERENCE_PROBLEM_TYPE_REGRESSION`\n"},"timestampColumn":{"type":"string","description":"(string) - Column for the timestamp\n"}},"type":"object","required":["granularities","modelIdColumn","predictionColumn","problemType","timestampColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettings:getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettings":{"properties":{"onFailure":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettingsOnFailure:getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettingsOnFailure","description":"(NotificationDestination) - Destinations to send notifications on failure/timeout\n"}},"type":"object"},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettingsOnFailure:getDataQualityMonitorsMonitorDataProfilingConfigNotificationSettingsOnFailure":{"properties":{"emailAddresses":{"type":"array","items":{"type":"string"},"description":"(list of string) - The list of email addresses to send the notification to. A maximum of 5 email addresses is supported\n"}},"type":"object"},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigSchedule:getDataQualityMonitorsMonitorDataProfilingConfigSchedule":{"properties":{"pauseStatus":{"type":"string","description":"(string) - Read only field that indicates whether the schedule is paused or not. Possible values are: `CRON_SCHEDULE_PAUSE_STATUS_PAUSED`, `CRON_SCHEDULE_PAUSE_STATUS_UNPAUSED`\n"},"quartzCronExpression":{"type":"string","description":"(string) - The expression that determines when to run the monitor. See [examples](https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html)\n"},"timezoneId":{"type":"string","description":"(string) - A Java timezone id. 
The schedule for a job will be resolved with respect to this timezone.\nSee `Java TimeZone \u003chttp://docs.oracle.com/javase/7/docs/api/java/util/TimeZone.html\u003e`_ for details.\nThe timezone id (e.g., ``America/Los_Angeles``) in which to evaluate the quartz expression\n"}},"type":"object","required":["pauseStatus","quartzCronExpression","timezoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigSnapshot:getDataQualityMonitorsMonitorDataProfilingConfigSnapshot":{"type":"object"},"databricks:index/getDataQualityMonitorsMonitorDataProfilingConfigTimeSeries:getDataQualityMonitorsMonitorDataProfilingConfigTimeSeries":{"properties":{"granularities":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of granularities to use when aggregating data into time windows based on their timestamp\n"},"timestampColumn":{"type":"string","description":"(string) - Column for the timestamp\n"}},"type":"object","required":["granularities","timestampColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsMonitorProviderConfig:getDataQualityMonitorsMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityMonitorsProviderConfig:getDataQualityMonitorsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDataQualityRefreshProviderConfig:getDataQualityRefreshProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDataQualityRefreshesProviderConfig:getDataQualityRefreshesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDataQualityRefreshesRefresh:getDataQualityRefreshesRefresh":{"properties":{"endTimeMs":{"type":"integer","description":"(integer) - Time when the refresh ended (milliseconds since 1/1/1970 UTC)\n"},"message":{"type":"string","description":"(string) - An optional message to give insight into the current state of the refresh (e.g. FAILURE messages)\n"},"objectId":{"type":"string","description":"The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityRefreshesRefreshProviderConfig:getDataQualityRefreshesRefreshProviderConfig","description":"Configure the provider for management through account provider.\n"},"refreshId":{"type":"integer","description":"(integer) - Unique id of the refresh operation\n"},"startTimeMs":{"type":"integer","description":"(integer) - Time when the refresh started (milliseconds since 1/1/1970 UTC)\n"},"state":{"type":"string","description":"(string) - The current state of the refresh. Possible values are: `MONITOR_REFRESH_STATE_CANCELED`, `MONITOR_REFRESH_STATE_FAILED`, `MONITOR_REFRESH_STATE_PENDING`, `MONITOR_REFRESH_STATE_RUNNING`, `MONITOR_REFRESH_STATE_SUCCESS`, `MONITOR_REFRESH_STATE_UNKNOWN`\n"},"trigger":{"type":"string","description":"(string) - What triggered the refresh. Possible values are: `MONITOR_REFRESH_TRIGGER_DATA_CHANGE`, `MONITOR_REFRESH_TRIGGER_MANUAL`, `MONITOR_REFRESH_TRIGGER_SCHEDULE`, `MONITOR_REFRESH_TRIGGER_UNKNOWN`\n"}},"type":"object","required":["endTimeMs","message","objectId","objectType","refreshId","startTimeMs","state","trigger"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDataQualityRefreshesRefreshProviderConfig:getDataQualityRefreshesRefreshProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseDatabaseCatalogProviderConfig:getDatabaseDatabaseCatalogProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseDatabaseCatalogsDatabaseCatalog:getDatabaseDatabaseCatalogsDatabaseCatalog":{"properties":{"createDatabaseIfNotExists":{"type":"boolean","description":"(boolean)\n"},"databaseInstanceName":{"type":"string","description":"(string) - The name of the DatabaseInstance housing the database\n"},"databaseName":{"type":"string","description":"(string) - The name of the database (in an instance) associated with the catalog\n"},"name":{"type":"string","description":"(string) - The name of the catalog in UC\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogsDatabaseCatalogProviderConfig:getDatabaseDatabaseCatalogsDatabaseCatalogProviderConfig","description":"Configure the provider for management through account provider.\n"},"uid":{"type":"string","description":"(string)\n"}},"type":"object","required":["createDatabaseIfNotExists","databaseInstanceName","databaseName","name","uid"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseDatabaseCatalogsDatabaseCatalogProviderConfig:getDatabaseDatabaseCatalogsDatabaseCatalogProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseDatabaseCatalogsProviderConfig:getDatabaseDatabaseCatalogsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseInstanceChildInstanceRef:getDatabaseInstanceChildInstanceRef":{"properties":{"branchTime":{"type":"string","description":"(string) - Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"(string) - User-specified WAL LSN of the ref database instance.\n"},"name":{"type":"string","description":"The name of the instance. 
This is the unique identifier for the instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","required":["effectiveLsn","lsn","uid"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstanceCustomTag:getDatabaseInstanceCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getDatabaseInstanceEffectiveCustomTag:getDatabaseInstanceEffectiveCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getDatabaseInstanceParentInstanceRef:getDatabaseInstanceParentInstanceRef":{"properties":{"branchTime":{"type":"string","description":"(string) - Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"(string) - User-specified WAL LSN of the ref database instance.\n"},"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","required":["effectiveLsn","lsn","uid"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstanceProviderConfig:getDatabaseInstanceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseInstancesDatabaseInstance:getDatabaseInstancesDatabaseInstance":{"properties":{"capacity":{"type":"string","description":"(string) - The sku of the instance. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"childInstanceRefs":{"type":"array","items":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstanceChildInstanceRef:getDatabaseInstancesDatabaseInstanceChildInstanceRef"},"description":"(list of DatabaseInstanceRef) - The refs of the child instances. 
This is only available if the instance is\nparent instance\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the instance was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the instance\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstanceCustomTag:getDatabaseInstancesDatabaseInstanceCustomTag"},"description":"(list of CustomTag) - Custom tags associated with the instance. This field is only included on create and update responses\n"},"effectiveCapacity":{"type":"string","description":"(string, deprecated) - Deprecated. The sku of the instance; this field will always match the value of capacity.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveCustomTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstanceEffectiveCustomTag:getDatabaseInstancesDatabaseInstanceEffectiveCustomTag"},"description":"(list of CustomTag) - The recorded custom tags associated with the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveEnablePgNativeLogin":{"type":"boolean","description":"(boolean) - Whether the instance has PG native password login enabled.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveEnableReadableSecondaries":{"type":"boolean","description":"(boolean) - Whether secondaries serving read-only traffic are enabled. Defaults to false.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveNodeCount":{"type":"integer","description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveRetentionWindowInDays":{"type":"integer","description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveStopped":{"type":"boolean","description":"(boolean) - Whether the instance is stopped.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The policy that is applied to the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"enablePgNativeLogin":{"type":"boolean","description":"(boolean) - Whether to enable PG native password login on the instance. 
Defaults to false\n"},"enableReadableSecondaries":{"type":"boolean","description":"(boolean) - Whether to enable secondaries to serve read-only traffic. Defaults to false\n"},"name":{"type":"string","description":"(string) - Name of the ref database instance\n"},"nodeCount":{"type":"integer","description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries. This field is input only, see\u003cspan pulumi-lang-nodejs=\" effectiveNodeCount \" pulumi-lang-dotnet=\" EffectiveNodeCount \" pulumi-lang-go=\" effectiveNodeCount \" pulumi-lang-python=\" effective_node_count \" pulumi-lang-yaml=\" effectiveNodeCount \" pulumi-lang-java=\" effectiveNodeCount \"\u003e effective_node_count \u003c/span\u003efor the output\n"},"parentInstanceRef":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstanceParentInstanceRef:getDatabaseInstancesDatabaseInstanceParentInstanceRef","description":"(DatabaseInstanceRef) - The ref of the parent instance. This is only available if the instance is\nchild instance.\nInput: For specifying the parent instance to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"pgVersion":{"type":"string","description":"(string) - The version of Postgres running on the instance\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstanceProviderConfig:getDatabaseInstancesDatabaseInstanceProviderConfig","description":"Configure the provider for management through account provider.\n"},"readOnlyDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read only access. This is only available if\u003cspan pulumi-lang-nodejs=\"\nenableReadableSecondaries \" pulumi-lang-dotnet=\"\nEnableReadableSecondaries \" pulumi-lang-go=\"\nenableReadableSecondaries \" pulumi-lang-python=\"\nenable_readable_secondaries \" pulumi-lang-yaml=\"\nenableReadableSecondaries \" pulumi-lang-java=\"\nenableReadableSecondaries \"\u003e\nenable_readable_secondaries \u003c/span\u003eis true\n"},"readWriteDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read+write access\n"},"retentionWindowInDays":{"type":"integer","description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained. The default value is 7 days.\nValid values are 2 to 35 days\n"},"state":{"type":"string","description":"(string) - The current state of the instance. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"stopped":{"type":"boolean","description":"(boolean) - Whether to stop the instance. 
An input only param, see\u003cspan pulumi-lang-nodejs=\" effectiveStopped \" pulumi-lang-dotnet=\" EffectiveStopped \" pulumi-lang-go=\" effectiveStopped \" pulumi-lang-python=\" effective_stopped \" pulumi-lang-yaml=\" effectiveStopped \" pulumi-lang-java=\" effectiveStopped \"\u003e effective_stopped \u003c/span\u003efor the output\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"},"usagePolicyId":{"type":"string","description":"(string) - The desired usage policy to associate with the instance\n"}},"type":"object","required":["capacity","childInstanceRefs","creationTime","creator","customTags","effectiveCapacity","effectiveCustomTags","effectiveEnablePgNativeLogin","effectiveEnableReadableSecondaries","effectiveNodeCount","effectiveRetentionWindowInDays","effectiveStopped","effectiveUsagePolicyId","enablePgNativeLogin","enableReadableSecondaries","name","nodeCount","parentInstanceRef","pgVersion","readOnlyDns","readWriteDns","retentionWindowInDays","state","stopped","uid","usagePolicyId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstancesDatabaseInstanceChildInstanceRef:getDatabaseInstancesDatabaseInstanceChildInstanceRef":{"properties":{"branchTime":{"type":"string","description":"(string) - Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"(string) - User-specified WAL LSN of the ref database instance.\n"},"name":{"type":"string","description":"(string) - Name of the ref database instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","required":["effectiveLsn","lsn","uid"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstancesDatabaseInstanceCustomTag:getDatabaseInstancesDatabaseInstanceCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getDatabaseInstancesDatabaseInstanceEffectiveCustomTag:getDatabaseInstancesDatabaseInstanceEffectiveCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getDatabaseInstancesDatabaseInstanceParentInstanceRef:getDatabaseInstancesDatabaseInstanceParentInstanceRef":{"properties":{"branchTime":{"type":"string","description":"(string) - Branch time of the ref database instance.\nFor a parent ref instance, this is the point in time on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the point in time on the instance from which the child\ninstance was created.\nInput: For specifying the point in time to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"effectiveLsn":{"type":"string","description":"(string) - For a parent ref instance, this is the LSN on the parent instance from which the\ninstance was created.\nFor a child ref instance, this is the LSN on the instance from which the child instance\nwas created.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"lsn":{"type":"string","description":"(string) - User-specified WAL LSN of the ref database instance.\n"},"name":{"type":"string","description":"(string) - Name of the ref database instance\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"}},"type":"object","required":["effectiveLsn","lsn","uid"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstancesDatabaseInstanceProviderConfig:getDatabaseInstancesDatabaseInstanceProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseInstancesProviderConfig:getDatabaseInstancesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatus":{"properties":{"continuousUpdateStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus","description":"(SyncedTableContinuousUpdateStatus)\n"},"detailedState":{"type":"string","description":"(string) - The state of the synced table. Possible values are: `SYNCED_TABLED_OFFLINE`, `SYNCED_TABLE_OFFLINE_FAILED`, `SYNCED_TABLE_ONLINE`, `SYNCED_TABLE_ONLINE_CONTINUOUS_UPDATE`, `SYNCED_TABLE_ONLINE_NO_PENDING_UPDATE`, `SYNCED_TABLE_ONLINE_PIPELINE_FAILED`, `SYNCED_TABLE_ONLINE_TRIGGERED_UPDATE`, `SYNCED_TABLE_ONLINE_UPDATING_PIPELINE_RESOURCES`, `SYNCED_TABLE_PROVISIONING`, `SYNCED_TABLE_PROVISIONING_INITIAL_SNAPSHOT`, `SYNCED_TABLE_PROVISIONING_PIPELINE_RESOURCES`\n"},"failedStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus","description":"(SyncedTableFailedStatus)\n"},"lastSync":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync:getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync","description":"(SyncedTablePosition) - Summary of the last successful synchronization from source to destination.\n"},"message":{"type":"string","description":"(string) - A text description of the current state of the synced table\n"},"pipelineId":{"type":"string","description":"(string) - ID of the associated pipeline. The pipeline ID may have been provided by the client\n(in the case of bin packing), or generated by the server (when creating a new pipeline)\n"},"provisioningStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus","description":"(SyncedTableProvisioningStatus)\n"},"triggeredUpdateStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus","description":"(SyncedTableTriggeredUpdateStatus)\n"}},"type":"object","required":["detailedState","lastSync","message","pipelineId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"},"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. 
This is when the data is available in the synced table\n"}},"type":"object","required":["initialPipelineSyncProgress","lastProcessedCommitVersion","timestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusFailedStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. 
This is when the data is available in the synced table\n"}},"type":"object","required":["lastProcessedCommitVersion","timestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync:getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSync":{"properties":{"deltaTableSyncInfo":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo","description":"(DeltaTableSyncInfo)\n"},"syncEndTimestamp":{"type":"string","description":"(string) - The end timestamp of the most recent successful synchronization.\nThis is the time when the data is available in the synced table\n"},"syncStartTimestamp":{"type":"string","description":"(string) - The starting timestamp of the most recent successful synchronization from the source table\nto the destination (synced) table.\nNote this is the starting timestamp of the sync operation, not the end time.\nE.g., for a batch, this is the time when the sync operation started\n"}},"type":"object","required":["deltaTableSyncInfo","syncEndTimestamp","syncStartTimestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:getDatabaseSyncedDatabaseTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo":{"properties":{"deltaCommitTimestamp":{"type":"string","description":"(string) - The timestamp when the above Delta version was committed in the source Delta table.\nNote: This is the Delta commit time, not the time the data was written to the synced table\n"},"deltaCommitVersion":{"type":"integer","description":"(integer) - The Delta Lake commit version that was last successfully synced\n"}},"type":"object","required":["deltaCommitTimestamp","deltaCommitVersion"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"}},"type":"object","required":["initialPipelineSyncProgress"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. 
Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"},"triggeredUpdateProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress","description":"(SyncedTablePipelineProgress) - Progress of the active data synchronization pipeline\n"}},"type":"object","required":["lastProcessedCommitVersion","timestamp","triggeredUpdateProgress"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:getDatabaseSyncedDatabaseTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableProviderConfig:getDatabaseSyncedDatabaseTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseSyncedDatabaseTableSpec:getDatabaseSyncedDatabaseTableSpec":{"properties":{"createDatabaseObjectsIfMissing":{"type":"boolean","description":"(boolean) - If true, the synced table's logical database and schema resources in PG\nwill be created if they do not already exist\n"},"existingPipelineId":{"type":"string","description":"(string) - At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n"},"newPipelineSpec":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableSpecNewPipelineSpec:getDatabaseSyncedDatabaseTableSpecNewPipelineSpec","description":"(NewPipelineSpec) - At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n"},"primaryKeyColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - Primary Key columns to be used for data insert/update in the destination\n"},"schedulingPolicy":{"type":"string","description":"(string) - Scheduling policy of the underlying pipeline. Possible values are: `CONTINUOUS`, `SNAPSHOT`, `TRIGGERED`\n"},"sourceTableFullName":{"type":"string","description":"(string) - Three-part (catalog, schema, table) name of the source Delta table\n"},"timeseriesKey":{"type":"string","description":"(string) - Time series key to deduplicate (tie-break) rows with the same primary key\n"}},"type":"object","required":["createDatabaseObjectsIfMissing","existingPipelineId","newPipelineSpec"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTableSpecNewPipelineSpec:getDatabaseSyncedDatabaseTableSpecNewPipelineSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"(string) - Budget policy to set on the newly created pipeline\n"},"storageCatalog":{"type":"string","description":"(string) - This field needs to be specified if the destination catalog is a managed postgres catalog.\n"},"storageSchema":{"type":"string","description":"(string) - This field needs to be specified if the destination catalog is a managed postgres catalog.\n"}},"type":"object"},"databricks:index/getDatabaseSyncedDatabaseTablesProviderConfig:getDatabaseSyncedDatabaseTablesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTable:getDatabaseSyncedDatabaseTablesSyncedTable":{"properties":{"dataSynchronizationStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatus","description":"(SyncedTableStatus) - Synced Table data synchronization status\n"},"databaseInstanceName":{"type":"string","description":"(string) - Name of the target database instance. This is required when creating synced database tables in standard catalogs.\nThis is optional when creating synced database tables in registered catalogs. If this field is specified\nwhen creating synced database tables in registered catalogs, the database instance name MUST\nmatch that of the registered catalog (or the request will be rejected)\n"},"effectiveDatabaseInstanceName":{"type":"string","description":"(string) - The name of the database instance that this table is registered to. This field is always returned, and for\ntables inside database catalogs is inferred database instance associated with the catalog.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveLogicalDatabaseName":{"type":"string","description":"(string) - The name of the logical database that this table is registered to.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"logicalDatabaseName":{"type":"string","description":"(string) - Target Postgres database object (logical database) name for this table.\n"},"name":{"type":"string","description":"(string) - Full three-part (catalog, schema, table) name of the table\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableProviderConfig:getDatabaseSyncedDatabaseTablesSyncedTableProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableSpec:getDatabaseSyncedDatabaseTablesSyncedTableSpec","description":"(SyncedTableSpec)\n"},"unityCatalogProvisioningState":{"type":"string","description":"(string) - The provisioning state of the synced table entity in Unity Catalog. This is distinct from the\nstate of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline\nmay be in \"PROVISIONING\" as it runs asynchronously). 
Possible values are: `ACTIVE`, `DEGRADED`, `DELETING`, `FAILED`, `PROVISIONING`, `UPDATING`\n"}},"type":"object","required":["dataSynchronizationStatus","databaseInstanceName","effectiveDatabaseInstanceName","effectiveLogicalDatabaseName","logicalDatabaseName","name","spec","unityCatalogProvisioningState"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatus":{"properties":{"continuousUpdateStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatus","description":"(SyncedTableContinuousUpdateStatus)\n"},"detailedState":{"type":"string","description":"(string) - The state of the synced table. Possible values are: `SYNCED_TABLED_OFFLINE`, `SYNCED_TABLE_OFFLINE_FAILED`, `SYNCED_TABLE_ONLINE`, `SYNCED_TABLE_ONLINE_CONTINUOUS_UPDATE`, `SYNCED_TABLE_ONLINE_NO_PENDING_UPDATE`, `SYNCED_TABLE_ONLINE_PIPELINE_FAILED`, `SYNCED_TABLE_ONLINE_TRIGGERED_UPDATE`, `SYNCED_TABLE_ONLINE_UPDATING_PIPELINE_RESOURCES`, `SYNCED_TABLE_PROVISIONING`, `SYNCED_TABLE_PROVISIONING_INITIAL_SNAPSHOT`, `SYNCED_TABLE_PROVISIONING_PIPELINE_RESOURCES`\n"},"failedStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusFailedStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusFailedStatus","description":"(SyncedTableFailedStatus)\n"},"lastSync":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSync:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSync","description":"(SyncedTablePosition) - Summary of the last successful synchronization from source to destination.\n"},"message":{"type":"string","description":"(string) - A text description of the current state of the synced table\n"},"pipelineId":{"type":"string","description":"(string) - ID of the associated pipeline. 
The pipeline ID may have been provided by the client\n(in the case of bin packing), or generated by the server (when creating a new pipeline)\n"},"provisioningStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatus","description":"(SyncedTableProvisioningStatus)\n"},"triggeredUpdateStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatus","description":"(SyncedTableTriggeredUpdateStatus)\n"}},"type":"object","required":["detailedState","lastSync","message","pipelineId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"},"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"}},"type":"object","required":["initialPipelineSyncProgress","lastProcessedCommitVersion","timestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusContinuousUpdateStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. 
This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusFailedStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusFailedStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"}},"type":"object","required":["lastProcessedCommitVersion","timestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSync:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSync":{"properties":{"deltaTableSyncInfo":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo","description":"(DeltaTableSyncInfo)\n"},"syncEndTimestamp":{"type":"string","description":"(string) - The end timestamp of the most recent successful synchronization.\nThis is the time when the data is available in the synced table\n"},"syncStartTimestamp":{"type":"string","description":"(string) - The starting timestamp of the most recent successful synchronization from the source table\nto the destination (synced) table.\nNote this is the starting timestamp of the sync operation, not the end time.\nE.g., for a batch, this is the time when the sync operation started\n"}},"type":"object","required":["deltaTableSyncInfo","syncEndTimestamp","syncStartTimestamp"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusLastSyncDeltaTableSyncInfo":{"properties":{"deltaCommitTimestamp":{"type":"string","description":"(string) - The timestamp when the above Delta version was committed in the source Delta table.\nNote: This is the Delta commit time, not the time the data was written to the synced table\n"},"deltaCommitVersion":{"type":"integer","description":"(integer) - The Delta Lake commit version that was last successfully synced\n"}},"type":"object","required":["deltaCommitTimestamp","deltaCommitVersion"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatus":{"properties":{"initialPipelineSyncProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress","description":"(SyncedTablePipelineProgress) - Details about initial data synchronization. 
Only populated when in the\nPROVISIONING_INITIAL_SNAPSHOT state\n"}},"type":"object","required":["initialPipelineSyncProgress"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusProvisioningStatusInitialPipelineSyncProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatus:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatus":{"properties":{"lastProcessedCommitVersion":{"type":"integer","description":"(integer) - The last source table Delta version that was successfully synced to the synced table\n"},"timestamp":{"type":"string","description":"(string) - The end timestamp of the last time any data was synchronized from the source table to the synced\ntable. This is when the data is available in the synced table\n"},"triggeredUpdateProgress":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress","description":"(SyncedTablePipelineProgress) - Progress of the active data synchronization pipeline\n"}},"type":"object","required":["lastProcessedCommitVersion","timestamp","triggeredUpdateProgress"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress:getDatabaseSyncedDatabaseTablesSyncedTableDataSynchronizationStatusTriggeredUpdateStatusTriggeredUpdateProgress":{"properties":{"estimatedCompletionTimeSeconds":{"type":"number","description":"(number) - The estimated time remaining to complete this update in seconds\n"},"latestVersionCurrentlyProcessing":{"type":"integer","description":"(integer) - The source table Delta version that was last processed by the pipeline. The pipeline may not\nhave completely processed this version yet\n"},"provisioningPhase":{"type":"string","description":"(string) - The current phase of the data synchronization pipeline. 
Possible values are: `PROVISIONING_PHASE_INDEX_SCAN`, `PROVISIONING_PHASE_INDEX_SORT`, `PROVISIONING_PHASE_MAIN`\n"},"syncProgressCompletion":{"type":"number","description":"(number) - The completion ratio of this update. This is a number between 0 and 1\n"},"syncedRowCount":{"type":"integer","description":"(integer) - The number of rows that have been synced in this update\n"},"totalRowCount":{"type":"integer","description":"(integer) - The total number of rows that need to be synced in this update. This number may be an estimate\n"}},"type":"object","required":["estimatedCompletionTimeSeconds","latestVersionCurrentlyProcessing","provisioningPhase","syncProgressCompletion","syncedRowCount","totalRowCount"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableProviderConfig:getDatabaseSyncedDatabaseTablesSyncedTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableSpec:getDatabaseSyncedDatabaseTablesSyncedTableSpec":{"properties":{"createDatabaseObjectsIfMissing":{"type":"boolean","description":"(boolean) - If true, the synced table's logical database and schema resources in PG\nwill be created if they do not already exist\n"},"existingPipelineId":{"type":"string","description":"(string) - At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n"},"newPipelineSpec":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableSpecNewPipelineSpec:getDatabaseSyncedDatabaseTablesSyncedTableSpecNewPipelineSpec","description":"(NewPipelineSpec) - At most one of\u003cspan pulumi-lang-nodejs=\" existingPipelineId \" pulumi-lang-dotnet=\" ExistingPipelineId \" pulumi-lang-go=\" existingPipelineId \" pulumi-lang-python=\" existing_pipeline_id \" pulumi-lang-yaml=\" existingPipelineId \" pulumi-lang-java=\" existingPipelineId \"\u003e existing_pipeline_id \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" newPipelineSpec \" pulumi-lang-dotnet=\" NewPipelineSpec \" pulumi-lang-go=\" newPipelineSpec \" pulumi-lang-python=\" new_pipeline_spec \" pulumi-lang-yaml=\" newPipelineSpec \" pulumi-lang-java=\" newPipelineSpec \"\u003e new_pipeline_spec \u003c/span\u003eshould be defined.\n"},"primaryKeyColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - Primary Key columns to be used for data insert/update in the destination\n"},"schedulingPolicy":{"type":"string","description":"(string) - Scheduling policy of the underlying pipeline. 
Possible values are: `CONTINUOUS`, `SNAPSHOT`, `TRIGGERED`\n"},"sourceTableFullName":{"type":"string","description":"(string) - Three-part (catalog, schema, table) name of the source Delta table\n"},"timeseriesKey":{"type":"string","description":"(string) - Time series key to deduplicate (tie-break) rows with the same primary key\n"}},"type":"object","required":["createDatabaseObjectsIfMissing","existingPipelineId","newPipelineSpec"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getDatabaseSyncedDatabaseTablesSyncedTableSpecNewPipelineSpec:getDatabaseSyncedDatabaseTablesSyncedTableSpecNewPipelineSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"(string) - Budget policy to set on the newly created pipeline\n"},"storageCatalog":{"type":"string","description":"(string) - This field needs to be specified if the destination catalog is a managed postgres catalog.\n"},"storageSchema":{"type":"string","description":"(string) - This field needs to be specified if the destination catalog is a managed postgres catalog.\n"}},"type":"object"},"databricks:index/getDbfsFilePathsPathList:getDbfsFilePathsPathList":{"properties":{"fileSize":{"type":"integer"},"path":{"type":"string","description":"Path on DBFS for the file to perform listing\n"}},"type":"object"},"databricks:index/getDbfsFilePathsProviderConfig:getDbfsFilePathsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDbfsFileProviderConfig:getDbfsFileProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getDirectoryProviderConfig:getDirectoryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getEndpointAzurePrivateEndpointInfo:getEndpointAzurePrivateEndpointInfo":{"properties":{"privateEndpointName":{"type":"string","description":"(string) - The name of the Private Endpoint in the Azure subscription\n"},"privateEndpointResourceGuid":{"type":"string","description":"(string) - The GUID of the Private Endpoint resource in the Azure subscription.\nThis is assigned by Azure when the user sets up the Private Endpoint\n"},"privateEndpointResourceId":{"type":"string","description":"(string) - The full resource ID of the Private Endpoint\n"},"privateLinkServiceId":{"type":"string","description":"(string) - The resource ID of the Databricks Private Link Service that this Private Endpoint connects to\n"}},"type":"object","required":["privateEndpointName","privateEndpointResourceGuid","privateEndpointResourceId","privateLinkServiceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getEndpointsItem:getEndpointsItem":{"properties":{"accountId":{"type":"string","description":"(string) - The Databricks Account in which the endpoint object exists\n"},"azurePrivateEndpointInfo":{"$ref":"#/types/databricks:index/getEndpointsItemAzurePrivateEndpointInfo:getEndpointsItemAzurePrivateEndpointInfo","description":"(AzurePrivateEndpointInfo) - Info for an Azure private endpoint\n"},"createTime":{"type":"string","description":"(string) - The timestamp when the endpoint was created. The timestamp is in RFC 3339 format in UTC timezone\n"},"displayName":{"type":"string","description":"(string) - The human-readable display name of this endpoint.\nThe input should conform to RFC-1034, which restricts to letters, numbers, and hyphens,\nwith the first character a letter, the last a letter or a number, and a 63 character maximum\n"},"endpointId":{"type":"string","description":"(string) - The unique identifier for this endpoint under the account. This field is a UUID generated by Databricks\n"},"name":{"type":"string","description":"(string) - The resource name of the endpoint, which uniquely identifies the endpoint\n"},"region":{"type":"string","description":"(string) - The cloud provider region where this endpoint is located\n"},"state":{"type":"string","description":"(string) - The state of the endpoint. The endpoint can only be used if the state is `APPROVED`. Possible values are: `APPROVED`, `DISCONNECTED`, `FAILED`, `PENDING`\n"},"useCase":{"type":"string","description":"(string) - The use case that determines the type of network connectivity this endpoint provides.\nThis field is automatically determined based on the endpoint configuration and cloud-specific settings. 
Possible values are: `SERVICE_DIRECT`\n"}},"type":"object","required":["accountId","azurePrivateEndpointInfo","createTime","displayName","endpointId","name","region","state","useCase"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getEndpointsItemAzurePrivateEndpointInfo:getEndpointsItemAzurePrivateEndpointInfo":{"properties":{"privateEndpointName":{"type":"string","description":"(string) - The name of the Private Endpoint in the Azure subscription\n"},"privateEndpointResourceGuid":{"type":"string","description":"(string) - The GUID of the Private Endpoint resource in the Azure subscription.\nThis is assigned by Azure when the user sets up the Private Endpoint\n"},"privateEndpointResourceId":{"type":"string","description":"(string) - The full resource ID of the Private Endpoint\n"},"privateLinkServiceId":{"type":"string","description":"(string) - The resource ID of the Databricks Private Link Service that this Private Endpoint connects to\n"}},"type":"object","required":["privateEndpointName","privateEndpointResourceGuid","privateEndpointResourceId","privateLinkServiceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getEntityTagAssignmentProviderConfig:getEntityTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getEntityTagAssignmentsProviderConfig:getEntityTagAssignmentsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getEntityTagAssignmentsTagAssignment:getEntityTagAssignmentsTagAssignment":{"properties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"providerConfig":{"$ref":"#/types/databricks:index/getEntityTagAssignmentsTagAssignmentProviderConfig:getEntityTagAssignmentsTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"sourceType":{"type":"string","description":"(string) - The source type of the tag assignment, e.g., user-assigned or system-assigned. Possible values are: `TAG_ASSIGNMENT_SOURCE_TYPE_SYSTEM_DATA_CLASSIFICATION`\n"},"tagKey":{"type":"string","description":"(string) - The key of the tag\n"},"tagValue":{"type":"string","description":"(string) - The value of the tag\n"},"updateTime":{"type":"string","description":"(string) - The timestamp when the tag assignment was last updated\n"},"updatedBy":{"type":"string","description":"(string) - The user or principal who updated the tag assignment\n"}},"type":"object","required":["entityName","entityType","sourceType","tagKey","tagValue","updateTime","updatedBy"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getEntityTagAssignmentsTagAssignmentProviderConfig:getEntityTagAssignmentsTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getExternalLocationExternalLocationInfo:getExternalLocationExternalLocationInfo":{"properties":{"browseOnly":{"type":"boolean"},"comment":{"type":"string","description":"User-supplied comment.\n"},"createdAt":{"type":"integer","description":"Time at which this catalog was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of catalog creator.\n"},"credentialId":{"type":"string","description":"Unique ID of storage credential.\n"},"credentialName":{"type":"string","description":"Name of the\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto use with this external location.\n"},"effectiveEnableFileEvents":{"type":"boolean"},"enableFileEvents":{"type":"boolean"},"encryptionDetails":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoEncryptionDetails:getExternalLocationExternalLocationInfoEncryptionDetails","description":"A block describing encryption options that apply to clients connecting to cloud storage. Consisting of the following attributes:\n"},"fallback":{"type":"boolean"},"fileEventQueue":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueue:getExternalLocationExternalLocationInfoFileEventQueue"},"isolationMode":{"type":"string"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore.\n"},"name":{"type":"string","description":"The name of the external location\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the external location owner.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the external location is read-only.\n"},"updatedAt":{"type":"integer","description":"Time at which this catalog was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified catalog.\n"},"url":{"type":"string","description":"Path URL in cloud storage, of the form: `s3://[bucket-host]/[bucket-dir]` (AWS), `abfss://[user]@[host]/[path]` (Azure), `gs://[bucket-host]/[bucket-dir]` (GCP).\n"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoEncryptionDetails:getExternalLocationExternalLocationInfoEncryptionDetails":{"properties":{"sseEncryptionDetails":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoEncryptionDetailsSseEncryptionDetails:getExternalLocationExternalLocationInfoEncryptionDetailsSseEncryptionDetails","description":"a block describing server-Side Encryption properties for clients communicating with AWS S3. 
Consists of the following attributes:\n"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoEncryptionDetailsSseEncryptionDetails:getExternalLocationExternalLocationInfoEncryptionDetailsSseEncryptionDetails":{"properties":{"algorithm":{"type":"string","description":"Encryption algorithm value. Sets the value of the `x-amz-server-side-encryption` header in S3 request.\n"},"awsKmsKeyArn":{"type":"string","description":"ARN of the SSE-KMS key used with the S3 location, when `algorithm = \"SSE-KMS\"`.\n"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueue:getExternalLocationExternalLocationInfoFileEventQueue":{"properties":{"managedAqs":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedAqs:getExternalLocationExternalLocationInfoFileEventQueueManagedAqs"},"managedPubsub":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedPubsub:getExternalLocationExternalLocationInfoFileEventQueueManagedPubsub"},"managedSqs":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedSqs:getExternalLocationExternalLocationInfoFileEventQueueManagedSqs"},"providedAqs":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedAqs:getExternalLocationExternalLocationInfoFileEventQueueProvidedAqs"},"providedPubsub":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedPubsub:getExternalLocationExternalLocationInfoFileEventQueueProvidedPubsub"},"providedSqs":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedSqs:getExternalLocationExternalLocationInfoFileEventQueueProvidedSqs"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedAqs:getExternalLocationExternalLocationInfoFileEventQueueManagedAqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string"},"resourceGroup":{"type":"string"},"subscriptionId":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedPubsub:getExternalLocationExternalLocationInfoFileEventQueueManagedPubsub":{"properties":{"managedResourceId":{"type":"string"},"subscriptionName":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueManagedSqs:getExternalLocationExternalLocationInfoFileEventQueueManagedSqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedAqs:getExternalLocationExternalLocationInfoFileEventQueueProvidedAqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string"},"resourceGroup":{"type":"string"},"subscriptionId":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedPubsub:getExternalLocationExternalLocationInfoFileEventQueueProvidedPubsub":{"properties":{"managedResourceId":{"type":"string"},"subscriptionName":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationExternalLocationInfoFileEventQueueProvidedSqs:getExternalLocationExternalLocationInfoFileEventQueueProvidedSqs":{"properties":{"managedResourceId":{"type":"string"},"queueUrl":{"type":"string"}},"type":"object"},"databricks:index/getExternalLocationProviderConfig:getExternalLocationProviderConfig":{"properties":{
"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getExternalLocationsProviderConfig:getExternalLocationsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getExternalMetadataProviderConfig:getExternalMetadataProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getExternalMetadatasExternalMetadata:getExternalMetadatasExternalMetadata":{"properties":{"columns":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of columns associated with the external metadata object\n"},"createTime":{"type":"string","description":"(string) - Time at which this external metadata object was created\n"},"createdBy":{"type":"string","description":"(string) - Username of external metadata object creator\n"},"description":{"type":"string","description":"(string) - User-provided free-form text description\n"},"entityType":{"type":"string","description":"(string) - Type of entity within the external system\n"},"id":{"type":"string","description":"(string) - Unique identifier of the external metadata object\n"},"metastoreId":{"type":"string","description":"(string) - Unique identifier of parent metastore\n"},"name":{"type":"string","description":"(string) - Name of the external metadata object\n"},"owner":{"type":"string","description":"(string) - Owner of the external metadata object\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A map of key-value properties attached to the external metadata object\n"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalMetadatasExternalMetadataProviderConfig:getExternalMetadatasExternalMetadataProviderConfig","description":"Configure the provider for management through account provider.\n"},"systemType":{"type":"string","description":"(string) - Type of external system. 
Possible values are: `AMAZON_REDSHIFT`, `AZURE_SYNAPSE`, `CONFLUENT`, `DATABRICKS`, `GOOGLE_BIGQUERY`, `KAFKA`, `LOOKER`, `MICROSOFT_FABRIC`, `MICROSOFT_SQL_SERVER`, `MONGODB`, `MYSQL`, `ORACLE`, `OTHER`, `POSTGRESQL`, `POWER_BI`, `SALESFORCE`, `SAP`, `SERVICENOW`, `SNOWFLAKE`, `STREAM_NATIVE`, `TABLEAU`, `TERADATA`, `WORKDAY`\n"},"updateTime":{"type":"string","description":"(string) - Time at which this external metadata object was last modified\n"},"updatedBy":{"type":"string","description":"(string) - Username of user who last modified external metadata object\n"},"url":{"type":"string","description":"(string) - URL associated with the external metadata object\n"}},"type":"object","required":["columns","createTime","createdBy","description","entityType","id","metastoreId","name","owner","properties","systemType","updateTime","updatedBy","url"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getExternalMetadatasExternalMetadataProviderConfig:getExternalMetadatasExternalMetadataProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getExternalMetadatasProviderConfig:getExternalMetadatasProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringFeatureFunction:getFeatureEngineeringFeatureFunction":{"properties":{"extraParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureFunctionExtraParameter:getFeatureEngineeringFeatureFunctionExtraParameter"},"description":"(list of FunctionExtraParameter) - Extra parameters for parameterized functions\n"},"functionType":{"type":"string","description":"(string) - The type of the function. 
Possible values are: `APPROX_COUNT_DISTINCT`, `APPROX_PERCENTILE`, `AVG`, `COUNT`, `FIRST`, `LAST`, `MAX`, `MIN`, `STDDEV_POP`, `STDDEV_SAMP`, `SUM`, `VAR_POP`, `VAR_SAMP`\n"}},"type":"object","required":["functionType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureFunctionExtraParameter:getFeatureEngineeringFeatureFunctionExtraParameter":{"properties":{"key":{"type":"string","description":"(string) - The name of the parameter\n"},"value":{"type":"string","description":"(string) - The value of the parameter\n"}},"type":"object","required":["key","value"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureLineageContext:getFeatureEngineeringFeatureLineageContext":{"properties":{"jobContext":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureLineageContextJobContext:getFeatureEngineeringFeatureLineageContextJobContext","description":"(JobContext) - Job context information including job ID and run ID\n"},"notebookId":{"type":"integer","description":"(integer) - The notebook ID where this API was invoked\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeatureLineageContextJobContext:getFeatureEngineeringFeatureLineageContextJobContext":{"properties":{"jobId":{"type":"integer","description":"(integer) - The job ID where this API was invoked\n"},"jobRunId":{"type":"integer","description":"(integer) - The job run ID where this API was invoked\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeatureProviderConfig:getFeatureEngineeringFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringFeatureSource:getFeatureEngineeringFeatureSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureSourceDeltaTableSource:getFeatureEngineeringFeatureSourceDeltaTableSource","description":"(DeltaTableSource)\n"},"kafkaSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureSourceKafkaSource:getFeatureEngineeringFeatureSourceKafkaSource","description":"(KafkaSource)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeatureSourceDeltaTableSource:getFeatureEngineeringFeatureSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"timeseriesColumn":{"type":"string","description":"(string) - The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureSourceKafkaSource:getFeatureEngineeringFeatureSourceKafkaSource":{"properties":{"entityColumnIdentifiers":{"type":"array","items":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier:getFeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier"},"description":"(list of ColumnIdentifier) - The entity column identifiers of the Kafka source\n"},"name":{"type":"string","description":"(string) - Name of the Kafka source, used to identify it. This is used to look up the corresponding KafkaConfig object. 
Can be distinct from topic name\n"},"timeseriesColumnIdentifier":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier:getFeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier","description":"(ColumnIdentifier) - The timeseries column identifier of the Kafka source\n"}},"type":"object","required":["entityColumnIdentifiers","name","timeseriesColumnIdentifier"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier:getFeatureEngineeringFeatureSourceKafkaSourceEntityColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"(string) - String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier:getFeatureEngineeringFeatureSourceKafkaSourceTimeseriesColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"(string) - String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureTimeWindow:getFeatureEngineeringFeatureTimeWindow":{"properties":{"continuous":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureTimeWindowContinuous:getFeatureEngineeringFeatureTimeWindowContinuous","description":"(ContinuousWindow)\n"},"sliding":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureTimeWindowSliding:getFeatureEngineeringFeatureTimeWindowSliding","description":"(SlidingWindow)\n"},"tumbling":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureTimeWindowTumbling:getFeatureEngineeringFeatureTimeWindowTumbling","description":"(TumblingWindow)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeatureTimeWindowContinuous:getFeatureEngineeringFeatureTimeWindowContinuous":{"properties":{"offset":{"type":"string","description":"(string) - The offset of the continuous window (must be non-positive)\n"},"windowDuration":{"type":"string","description":"(string) - The duration of each tumbling window (non-overlapping, fixed-duration windows)\n"}},"type":"object","required":["windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureTimeWindowSliding:getFeatureEngineeringFeatureTimeWindowSliding":{"properties":{"slideDuration":{"type":"string","description":"(string) - The slide duration (interval by which windows advance, must be positive and less than duration)\n"},"windowDuration":{"type":"string","description":"(string) - The duration of each tumbling window (non-overlapping, fixed-duration 
windows)\n"}},"type":"object","required":["slideDuration","windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeatureTimeWindowTumbling:getFeatureEngineeringFeatureTimeWindowTumbling":{"properties":{"windowDuration":{"type":"string","description":"(string) - The duration of each tumbling window (non-overlapping, fixed-duration windows)\n"}},"type":"object","required":["windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeature:getFeatureEngineeringFeaturesFeature":{"properties":{"description":{"type":"string","description":"(string) - The description of the feature\n"},"filterCondition":{"type":"string","description":"(string) - The filter condition applied to the source data before aggregation\n"},"fullName":{"type":"string","description":"(string) - The full three-part (catalog, schema, table) name of the Delta table\n"},"function":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureFunction:getFeatureEngineeringFeaturesFeatureFunction","description":"(Function) - The function by which the feature is computed\n"},"inputs":{"type":"array","items":{"type":"string"},"description":"(list of string) - The input columns from which the feature is computed\n"},"lineageContext":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureLineageContext:getFeatureEngineeringFeaturesFeatureLineageContext","description":"(LineageContext) - WARNING: This field is primarily intended for internal use by Databricks systems and\nis automatically populated when features are created through Databricks notebooks or jobs.\nUsers should not manually set this field as incorrect values may lead to inaccurate lineage tracking or unexpected behavior.\nThis field will be set by feature-engineering client and should be left unset by SDK and terraform users\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureProviderConfig:getFeatureEngineeringFeaturesFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"source":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureSource:getFeatureEngineeringFeaturesFeatureSource","description":"(DataSource) - The data source of the feature\n"},"timeWindow":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindow:getFeatureEngineeringFeaturesFeatureTimeWindow","description":"(TimeWindow) - The time window in which the feature is computed\n"}},"type":"object","required":["description","filterCondition","fullName","function","inputs","lineageContext","source","timeWindow"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureFunction:getFeatureEngineeringFeaturesFeatureFunction":{"properties":{"extraParameters":{"type":"array","items":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureFunctionExtraParameter:getFeatureEngineeringFeaturesFeatureFunctionExtraParameter"},"description":"(list of FunctionExtraParameter) - Extra parameters for parameterized functions\n"},"functionType":{"type":"string","description":"(string) - The type of the function. 
Possible values are: `APPROX_COUNT_DISTINCT`, `APPROX_PERCENTILE`, `AVG`, `COUNT`, `FIRST`, `LAST`, `MAX`, `MIN`, `STDDEV_POP`, `STDDEV_SAMP`, `SUM`, `VAR_POP`, `VAR_SAMP`\n"}},"type":"object","required":["functionType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureFunctionExtraParameter:getFeatureEngineeringFeaturesFeatureFunctionExtraParameter":{"properties":{"key":{"type":"string","description":"(string) - The name of the parameter\n"},"value":{"type":"string","description":"(string) - The value of the parameter\n"}},"type":"object","required":["key","value"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureLineageContext:getFeatureEngineeringFeaturesFeatureLineageContext":{"properties":{"jobContext":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureLineageContextJobContext:getFeatureEngineeringFeaturesFeatureLineageContextJobContext","description":"(JobContext) - Job context information including job ID and run ID\n"},"notebookId":{"type":"integer","description":"(integer) - The notebook ID where this API was invoked\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeaturesFeatureLineageContextJobContext:getFeatureEngineeringFeaturesFeatureLineageContextJobContext":{"properties":{"jobId":{"type":"integer","description":"(integer) - The job ID where this API was invoked\n"},"jobRunId":{"type":"integer","description":"(integer) - The job run ID where this API was invoked\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeaturesFeatureProviderConfig:getFeatureEngineeringFeaturesFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureSource:getFeatureEngineeringFeaturesFeatureSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureSourceDeltaTableSource:getFeatureEngineeringFeaturesFeatureSourceDeltaTableSource","description":"(DeltaTableSource)\n"},"kafkaSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSource:getFeatureEngineeringFeaturesFeatureSourceKafkaSource","description":"(KafkaSource)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeaturesFeatureSourceDeltaTableSource:getFeatureEngineeringFeaturesFeatureSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"(string) - The full three-part (catalog, schema, table) name of the Delta table\n"},"timeseriesColumn":{"type":"string","description":"(string) - The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSource:getFeatureEngineeringFeaturesFeatureSourceKafkaSource":{"properties":{"entityColumnIdentifiers":{"type":"array","items":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSourceEntityColumnIdentifier:getFeatureEngineeringFeaturesFeatureSourceKafkaSourceEntityColumnIdentifier"},"description":"(list of ColumnIdentifier) - 
The entity column identifiers of the Kafka source\n"},"name":{"type":"string","description":"(string) - Name of the Kafka source, used to identify it. This is used to look up the corresponding KafkaConfig object. Can be distinct from topic name\n"},"timeseriesColumnIdentifier":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSourceTimeseriesColumnIdentifier:getFeatureEngineeringFeaturesFeatureSourceKafkaSourceTimeseriesColumnIdentifier","description":"(ColumnIdentifier) - The timeseries column identifier of the Kafka source\n"}},"type":"object","required":["entityColumnIdentifiers","name","timeseriesColumnIdentifier"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSourceEntityColumnIdentifier:getFeatureEngineeringFeaturesFeatureSourceKafkaSourceEntityColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"(string) - String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureSourceKafkaSourceTimeseriesColumnIdentifier:getFeatureEngineeringFeaturesFeatureSourceKafkaSourceTimeseriesColumnIdentifier":{"properties":{"variantExprPath":{"type":"string","description":"(string) - String representation of the column name or variant expression path. For nested fields, the leaf value is what will be present in materialized tables\nand expected to match at query time. For example, the leaf node of value:trip_details.location_details.pickup_zip is pickup_zip\n"}},"type":"object","required":["variantExprPath"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindow:getFeatureEngineeringFeaturesFeatureTimeWindow":{"properties":{"continuous":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowContinuous:getFeatureEngineeringFeaturesFeatureTimeWindowContinuous","description":"(ContinuousWindow)\n"},"sliding":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowSliding:getFeatureEngineeringFeaturesFeatureTimeWindowSliding","description":"(SlidingWindow)\n"},"tumbling":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowTumbling:getFeatureEngineeringFeaturesFeatureTimeWindowTumbling","description":"(TumblingWindow)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowContinuous:getFeatureEngineeringFeaturesFeatureTimeWindowContinuous":{"properties":{"offset":{"type":"string","description":"(string) - The offset of the continuous window (must be non-positive)\n"},"windowDuration":{"type":"string","description":"(string) - The duration of each tumbling window (non-overlapping, fixed-duration windows)\n"}},"type":"object","required":["windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowSliding:getFeatureEngineeringFeaturesFeatureTimeWindowSliding":{"properties":{"slideDuration":{"type":"string","description":"(string) - The slide duration (interval by which windows advance, must be positive and less than duration)\n"},"windowDuration":{"type":"string","description":"(string) - The duration 
of each tumbling window (non-overlapping, fixed-duration windows)\n"}},"type":"object","required":["slideDuration","windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesFeatureTimeWindowTumbling:getFeatureEngineeringFeaturesFeatureTimeWindowTumbling":{"properties":{"windowDuration":{"type":"string","description":"(string) - The duration of each tumbling window (non-overlapping, fixed-duration windows)\n"}},"type":"object","required":["windowDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringFeaturesProviderConfig:getFeatureEngineeringFeaturesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringKafkaConfigAuthConfig:getFeatureEngineeringKafkaConfigAuthConfig":{"properties":{"ucServiceCredentialName":{"type":"string","description":"(string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigBackfillSource:getFeatureEngineeringKafkaConfigBackfillSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource:getFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource","description":"(DeltaTableSource) - The Delta table source containing the historic data to backfill.\nOnly the delta table name is used for backfill, the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource:getFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"(string) - The full three-part (catalog, schema, table) name of the Delta table\n"},"timeseriesColumn":{"type":"string","description":"(string) - The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringKafkaConfigKeySchema:getFeatureEngineeringKafkaConfigKeySchema":{"properties":{"jsonSchema":{"type":"string","description":"(string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigProviderConfig:getFeatureEngineeringKafkaConfigProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringKafkaConfigSubscriptionMode:getFeatureEngineeringKafkaConfigSubscriptionMode":{"properties":{"assign":{"type":"string","description":"(string) - A JSON string that contains the specific topic-partitions to consume from.\nFor example, for '{\"topicA\":[0,1],\"topicB\":[2,4]}', topicA's 0'th and 1st partitions will be consumed from\n"},"subscribe":{"type":"string","description":"(string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'\n"},"subscribePattern":{"type":"string","description":"(string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigValueSchema:getFeatureEngineeringKafkaConfigValueSchema":{"properties":{"jsonSchema":{"type":"string","description":"(string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfig:getFeatureEngineeringKafkaConfigsKafkaConfig":{"properties":{"authConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigAuthConfig:getFeatureEngineeringKafkaConfigsKafkaConfigAuthConfig","description":"(AuthConfig) - Authentication configuration for connection to topics\n"},"backfillSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSource:getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSource","description":"(BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config.\nIn the future, a separate table will be maintained by Databricks for forward filling data.\nThe schema for this source must match exactly that of the key and value schemas specified for this Kafka config\n"},"bootstrapServers":{"type":"string","description":"(string) - A comma-separated list of host/port pairs pointing to Kafka cluster\n"},"extraOptions":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)\n"},"keySchema":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigKeySchema:getFeatureEngineeringKafkaConfigsKafkaConfigKeySchema","description":"(SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"},"name":{"type":"string","description":"(string) - Name that uniquely identifies this Kafka config within the metastore. 
This will be the identifier used from the Feature object to reference these configs for a feature.\nCan be distinct from topic name\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigProviderConfig:getFeatureEngineeringKafkaConfigsKafkaConfigProviderConfig","description":"Configure the provider for management through account provider.\n"},"subscriptionMode":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigSubscriptionMode:getFeatureEngineeringKafkaConfigsKafkaConfigSubscriptionMode","description":"(SubscriptionMode) - Options to configure which Kafka topics to pull data from\n"},"valueSchema":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigValueSchema:getFeatureEngineeringKafkaConfigsKafkaConfigValueSchema","description":"(SchemaConfig) - Schema configuration for extracting message values from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"}},"type":"object","required":["authConfig","backfillSource","bootstrapServers","extraOptions","keySchema","name","subscriptionMode","valueSchema"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigAuthConfig:getFeatureEngineeringKafkaConfigsKafkaConfigAuthConfig":{"properties":{"ucServiceCredentialName":{"type":"string","description":"(string) - Name of the Unity Catalog service credential. 
This value will be set under the option databricks.serviceCredential\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSource:getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSource":{"properties":{"deltaTableSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSourceDeltaTableSource:getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSourceDeltaTableSource","description":"(DeltaTableSource) - The Delta table source containing the historic data to backfill.\nOnly the delta table name is used for backfill, the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSourceDeltaTableSource:getFeatureEngineeringKafkaConfigsKafkaConfigBackfillSourceDeltaTableSource":{"properties":{"entityColumns":{"type":"array","items":{"type":"string"},"description":"(list of string) - The entity columns of the Delta table\n"},"fullName":{"type":"string","description":"(string) - The full three-part (catalog, schema, table) name of the Delta table\n"},"timeseriesColumn":{"type":"string","description":"(string) - The timeseries column of the Delta table\n"}},"type":"object","required":["entityColumns","fullName","timeseriesColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigKeySchema:getFeatureEngineeringKafkaConfigsKafkaConfigKeySchema":{"properties":{"jsonSchema":{"type":"string","description":"(string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigProviderConfig:getFeatureEngineeringKafkaConfigsKafkaConfigProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigSubscriptionMode:getFeatureEngineeringKafkaConfigsKafkaConfigSubscriptionMode":{"properties":{"assign":{"type":"string","description":"(string) - A JSON string that contains the specific topic-partitions to consume from.\nFor example, for '{\"topicA\":[0,1],\"topicB\":[2,4]}', topicA's 0'th and 1st partitions will be consumed from\n"},"subscribe":{"type":"string","description":"(string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'\n"},"subscribePattern":{"type":"string","description":"(string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfigValueSchema:getFeatureEngineeringKafkaConfigsKafkaConfigValueSchema":{"properties":{"jsonSchema":{"type":"string","description":"(string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)\n"}},"type":"object"},"databricks:index/getFeatureEngineeringKafkaConfigsProviderConfig:getFeatureEngineeringKafkaConfigsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringMaterializedFeatureOfflineStoreConfig:getFeatureEngineeringMaterializedFeatureOfflineStoreConfig":{"properties":{"catalogName":{"type":"string","description":"(string) - The Unity Catalog catalog name. This name is also used as the Lakebase logical database name\n"},"schemaName":{"type":"string","description":"(string) - The Unity Catalog schema name\n"},"tableNamePrefix":{"type":"string","description":"(string) - Prefix for Unity Catalog table name.\nThe materialized feature will be stored in a Lakebase table with this prefix and a generated postfix\n"}},"type":"object","required":["catalogName","schemaName","tableNamePrefix"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeatureOnlineStoreConfig:getFeatureEngineeringMaterializedFeatureOnlineStoreConfig":{"properties":{"catalogName":{"type":"string","description":"(string) - The Unity Catalog catalog name. This name is also used as the Lakebase logical database name\n"},"onlineStoreName":{"type":"string","description":"(string) - The name of the target online store\n"},"schemaName":{"type":"string","description":"(string) - The Unity Catalog schema name\n"},"tableNamePrefix":{"type":"string","description":"(string) - Prefix for Unity Catalog table name.\nThe materialized feature will be stored in a Lakebase table with this prefix and a generated postfix\n"}},"type":"object","required":["catalogName","onlineStoreName","schemaName","tableNamePrefix"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeatureProviderConfig:getFeatureEngineeringMaterializedFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeature:getFeatureEngineeringMaterializedFeaturesMaterializedFeature":{"properties":{"cronSchedule":{"type":"string","description":"(string) - The quartz cron expression that defines the schedule of the materialization pipeline. The schedule is evaluated in the UTC timezone\n"},"featureName":{"type":"string","description":"Filter by feature name. If specified, only materialized features materialized from this feature will be returned\n"},"lastMaterializationTime":{"type":"string","description":"(string) - The timestamp when the pipeline last ran and updated the materialized feature values.\nIf the pipeline has not run yet, this field will be null\n"},"materializedFeatureId":{"type":"string","description":"(string) - Unique identifier for the materialized feature\n"},"offlineStoreConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOfflineStoreConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOfflineStoreConfig","description":"(OfflineStoreConfig)\n"},"onlineStoreConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOnlineStoreConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOnlineStoreConfig","description":"(OnlineStoreConfig)\n"},"pipelineScheduleState":{"type":"string","description":"(string) - The schedule state of the materialization pipeline. 
Possible values are: `ACTIVE`, `PAUSED`, `SNAPSHOT`\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureProviderConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"tableName":{"type":"string","description":"(string) - The fully qualified Unity Catalog path to the table containing the materialized feature (Delta table or Lakebase table). Output only\n"}},"type":"object","required":["cronSchedule","featureName","lastMaterializationTime","materializedFeatureId","offlineStoreConfig","onlineStoreConfig","pipelineScheduleState","tableName"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOfflineStoreConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOfflineStoreConfig":{"properties":{"catalogName":{"type":"string","description":"(string) - The Unity Catalog catalog name. This name is also used as the Lakebase logical database name\n"},"schemaName":{"type":"string","description":"(string) - The Unity Catalog schema name\n"},"tableNamePrefix":{"type":"string","description":"(string) - Prefix for Unity Catalog table name.\nThe materialized feature will be stored in a Lakebase table with this prefix and a generated postfix\n"}},"type":"object","required":["catalogName","schemaName","tableNamePrefix"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOnlineStoreConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureOnlineStoreConfig":{"properties":{"catalogName":{"type":"string","description":"(string) - The Unity Catalog catalog name. This name is also used as the Lakebase logical database name\n"},"onlineStoreName":{"type":"string","description":"(string) - The name of the target online store\n"},"schemaName":{"type":"string","description":"(string) - The Unity Catalog schema name\n"},"tableNamePrefix":{"type":"string","description":"(string) - Prefix for Unity Catalog table name.\nThe materialized feature will be stored in a Lakebase table with this prefix and a generated postfix\n"}},"type":"object","required":["catalogName","onlineStoreName","schemaName","tableNamePrefix"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeatureProviderConfig:getFeatureEngineeringMaterializedFeaturesMaterializedFeatureProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getFeatureEngineeringMaterializedFeaturesProviderConfig:getFeatureEngineeringMaterializedFeaturesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getFunctionsFunction:getFunctionsFunction":{"properties":{"browseOnly":{"type":"boolean","description":"Indicates whether the principal is limited to retrieving metadata for the associated object through the `BROWSE` privilege when \u003cspan pulumi-lang-nodejs=\"`includeBrowse`\" pulumi-lang-dotnet=\"`IncludeBrowse`\" pulumi-lang-go=\"`includeBrowse`\" pulumi-lang-python=\"`include_browse`\" pulumi-lang-yaml=\"`includeBrowse`\" pulumi-lang-java=\"`includeBrowse`\"\u003e`include_browse`\u003c/span\u003e is enabled in the request.\n"},"catalogName":{"type":"string","description":"Name of databricks_catalog.\n"},"comment":{"type":"string","description":"User-provided free-form text description.\n"},"createdAt":{"type":"integer","description":"Time at which this function was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of function creator.\n"},"dataType":{"type":"string","description":"Scalar function return data type.\n"},"externalLanguage":{"type":"string","description":"External function language.\n"},"externalName":{"type":"string","description":"External function name.\n"},"fullDataType":{"type":"string","description":"Pretty printed function data type.\n"},"fullName":{"type":"string","description":"Full name of function, in form of catalog_name.schema_name.function__name\n"},"functionId":{"type":"string","description":"Id of Function, relative to parent schema.\n"},"inputParams":{"$ref":"#/types/databricks:index/getFunctionsFunctionInputParams:getFunctionsFunctionInputParams","description":"object describing input parameters. Consists of the single attribute:\n"},"isDeterministic":{"type":"boolean","description":"Boolean flag specifying whether the function is deterministic.\n"},"isNullCall":{"type":"boolean","description":"Boolean flag whether function null call.\n"},"metastoreId":{"type":"string","description":"Unique identifier of parent metastore.\n"},"name":{"type":"string","description":"Name of parameter.\n"},"owner":{"type":"string","description":"Username of current owner of function.\n"},"parameterStyle":{"type":"string","description":"Function parameter style. `S` is the value for SQL.\n"},"properties":{"type":"string","description":"JSON-serialized key-value pair map, encoded (escaped) as a string.\n"},"returnParams":{"$ref":"#/types/databricks:index/getFunctionsFunctionReturnParams:getFunctionsFunctionReturnParams","description":"Table function return parameters.  See \u003cspan pulumi-lang-nodejs=\"`inputParams`\" pulumi-lang-dotnet=\"`InputParams`\" pulumi-lang-go=\"`inputParams`\" pulumi-lang-python=\"`input_params`\" pulumi-lang-yaml=\"`inputParams`\" pulumi-lang-java=\"`inputParams`\"\u003e`input_params`\u003c/span\u003e for description.\n"},"routineBody":{"type":"string","description":"Function language (`SQL` or `EXTERNAL`). 
When `EXTERNAL` is used, the language of the routine function should be specified in the \u003cspan pulumi-lang-nodejs=\"`externalLanguage`\" pulumi-lang-dotnet=\"`ExternalLanguage`\" pulumi-lang-go=\"`externalLanguage`\" pulumi-lang-python=\"`external_language`\" pulumi-lang-yaml=\"`externalLanguage`\" pulumi-lang-java=\"`externalLanguage`\"\u003e`external_language`\u003c/span\u003e field, and the \u003cspan pulumi-lang-nodejs=\"`returnParams`\" pulumi-lang-dotnet=\"`ReturnParams`\" pulumi-lang-go=\"`returnParams`\" pulumi-lang-python=\"`return_params`\" pulumi-lang-yaml=\"`returnParams`\" pulumi-lang-java=\"`returnParams`\"\u003e`return_params`\u003c/span\u003e of the function cannot be used (as `TABLE` return type is not supported), and the \u003cspan pulumi-lang-nodejs=\"`sqlDataAccess`\" pulumi-lang-dotnet=\"`SqlDataAccess`\" pulumi-lang-go=\"`sqlDataAccess`\" pulumi-lang-python=\"`sql_data_access`\" pulumi-lang-yaml=\"`sqlDataAccess`\" pulumi-lang-java=\"`sqlDataAccess`\"\u003e`sql_data_access`\u003c/span\u003e field must be `NO_SQL`.\n"},"routineDefinition":{"type":"string","description":"Function body.\n"},"routineDependencies":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependencies:getFunctionsFunctionRoutineDependencies","description":"Function dependencies.\n"},"schemaName":{"type":"string","description":"Name of databricks_schema.\n"},"securityType":{"type":"string","description":"Function security type. (Enum: `DEFINER`).\n"},"specificName":{"type":"string","description":"Specific name of the function; Reserved for future use.\n"},"sqlDataAccess":{"type":"string","description":"Function SQL data access (`CONTAINS_SQL`, `READS_SQL_DATA`, `NO_SQL`).\n"},"sqlPath":{"type":"string","description":"List of schemes whose objects can be referenced without qualification.\n"},"updatedAt":{"type":"integer","description":"Time at which this function was created, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified function.\n"}},"type":"object"},"databricks:index/getFunctionsFunctionInputParams:getFunctionsFunctionInputParams":{"properties":{"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/getFunctionsFunctionInputParamsParameter:getFunctionsFunctionInputParamsParameter"},"description":"The array of definitions of the function's parameters:\n"}},"type":"object"},"databricks:index/getFunctionsFunctionInputParamsParameter:getFunctionsFunctionInputParamsParameter":{"properties":{"comment":{"type":"string","description":"User-provided free-form text description.\n"},"name":{"type":"string","description":"Name of parameter.\n"},"parameterDefault":{"type":"string","description":"Default value of the parameter.\n"},"parameterMode":{"type":"string","description":"The mode of the function parameter.\n"},"parameterType":{"type":"string","description":"The type of function parameter (`PARAM` or `COLUMN`).\n"},"position":{"type":"integer","description":"Ordinal position of column (starting at position 0).\n"},"typeIntervalType":{"type":"string","description":"Format of IntervalType.\n"},"typeJson":{"type":"string","description":"Full data type spec, JSON-serialized.\n"},"typeName":{"type":"string","description":"Name of type (INT, STRUCT, MAP, etc.).\n"},"typePrecision":{"type":"integer","description":"Digits of precision; required on Create for DecimalTypes.\n"},"typeScale":{"type":"integer","description":"Digits to right of decimal; Required on Create for 
DecimalTypes.\n"},"typeText":{"type":"string","description":"Full data type spec, SQL/catalogString text.\n"}},"type":"object","required":["name","position","typeName","typeText"]},"databricks:index/getFunctionsFunctionReturnParams:getFunctionsFunctionReturnParams":{"properties":{"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/getFunctionsFunctionReturnParamsParameter:getFunctionsFunctionReturnParamsParameter"},"description":"The array of definitions of the function's parameters:\n"}},"type":"object"},"databricks:index/getFunctionsFunctionReturnParamsParameter:getFunctionsFunctionReturnParamsParameter":{"properties":{"comment":{"type":"string","description":"User-provided free-form text description.\n"},"name":{"type":"string","description":"Name of parameter.\n"},"parameterDefault":{"type":"string","description":"Default value of the parameter.\n"},"parameterMode":{"type":"string","description":"The mode of the function parameter.\n"},"parameterType":{"type":"string","description":"The type of function parameter (`PARAM` or `COLUMN`).\n"},"position":{"type":"integer","description":"Ordinal position of column (starting at position 0).\n"},"typeIntervalType":{"type":"string","description":"Format of IntervalType.\n"},"typeJson":{"type":"string","description":"Full data type spec, JSON-serialized.\n"},"typeName":{"type":"string","description":"Name of type (INT, STRUCT, MAP, etc.).\n"},"typePrecision":{"type":"integer","description":"Digits of precision; required on Create for DecimalTypes.\n"},"typeScale":{"type":"integer","description":"Digits to right of decimal; Required on Create for DecimalTypes.\n"},"typeText":{"type":"string","description":"Full data type spec, SQL/catalogString text.\n"}},"type":"object","required":["name","position","typeName","typeText"]},"databricks:index/getFunctionsFunctionRoutineDependencies:getFunctionsFunctionRoutineDependencies":{"properties":{"dependencies":{"type":"array","items":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependenciesDependency:getFunctionsFunctionRoutineDependenciesDependency"}}},"type":"object"},"databricks:index/getFunctionsFunctionRoutineDependenciesDependency:getFunctionsFunctionRoutineDependenciesDependency":{"properties":{"connection":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependenciesDependencyConnection:getFunctionsFunctionRoutineDependenciesDependencyConnection"},"credential":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependenciesDependencyCredential:getFunctionsFunctionRoutineDependenciesDependencyCredential"},"function":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependenciesDependencyFunction:getFunctionsFunctionRoutineDependenciesDependencyFunction"},"table":{"$ref":"#/types/databricks:index/getFunctionsFunctionRoutineDependenciesDependencyTable:getFunctionsFunctionRoutineDependenciesDependencyTable"}},"type":"object"},"databricks:index/getFunctionsFunctionRoutineDependenciesDependencyConnection:getFunctionsFunctionRoutineDependenciesDependencyConnection":{"properties":{"connectionName":{"type":"string"}},"type":"object"},"databricks:index/getFunctionsFunctionRoutineDependenciesDependencyCredential:getFunctionsFunctionRoutineDependenciesDependencyCredential":{"properties":{"credentialName":{"type":"string"}},"type":"object"},"databricks:index/getFunctionsFunctionRoutineDependenciesDependencyFunction:getFunctionsFunctionRoutineDependenciesDependencyFunction":{"properties":{"functionFullName":{"type":"string"}},"type":"object","req
uired":["functionFullName"]},"databricks:index/getFunctionsFunctionRoutineDependenciesDependencyTable:getFunctionsFunctionRoutineDependenciesDependencyTable":{"properties":{"tableFullName":{"type":"string"}},"type":"object","required":["tableFullName"]},"databricks:index/getFunctionsProviderConfig:getFunctionsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getGroupProviderConfig:getGroupProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getInstancePoolPoolInfo:getInstancePoolPoolInfo":{"properties":{"awsAttributes":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoAwsAttributes:getInstancePoolPoolInfoAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoAzureAttributes:getInstancePoolPoolInfoAzureAttributes"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"defaultTags":{"type":"object","additionalProperties":{"type":"string"}},"diskSpec":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoDiskSpec:getInstancePoolPoolInfoDiskSpec"},"enableElasticDisk":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoGcpAttributes:getInstancePoolPoolInfoGcpAttributes"},"idleInstanceAutoterminationMinutes":{"type":"integer"},"instancePoolFleetAttributes":{"type":"array","items":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttribute:getInstancePoolPoolInfoInstancePoolFleetAttribute"}},"instancePoolId":{"type":"string"},"instancePoolName":{"type":"string"},"maxCapacity":{"type":"integer"},"minIdleInstances":{"type":"integer"},"nodeTypeFlexibility":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoNodeTypeFlexibility:getInstancePoolPoolInfoNodeTypeFlexibility"},"nodeTypeId":{"type":"string"},"preloadedDockerImages":{"type":"array","items":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoPreloadedDockerImage:getInstancePoolPoolInfoPreloadedDockerImage"}},"preloadedSparkVersions":{"type":"array","items":{"type":"string"}},"state":{"type":"string"},"stats":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoStats:getInstancePoolPoolInfoStats"}},"type":"object","required":["defaultTags","idleInstanceAutoterminationMinutes","instancePoolId","instancePoolName"],"language":{"nodejs":{"requiredInputs":["idleInstanceAutoterminationMinutes","instancePoolName"]}}},"databricks:index/getInstancePoolPoolInfoAwsAttributes:getInstancePoolPoolInfoAwsAttributes":{"properties":{"availability":{"type":"string"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object","required":["zoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getInstancePoolPoolInfoAzureAttributes:getInstancePoolPoolInfoAzureAttributes":{"properties":{"availability":{"type":"string"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getInstancePoolPoolInfoDiskSpec:getInstancePoolPoolInfoDiskSpec":{"properties":{"diskCount":{"type":"integer"},"diskSize":{"type":"integer"},"diskType":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoDiskSpecDiskType:getInstancePoolPoolInfoDiskSpecDiskType"}},"type":"object"},"databricks:index/getInstancePoolPoolInfoDiskSpecDiskType:getInstancePoolPoolInfoDiskSpe
cDiskType":{"properties":{"azureDiskVolumeType":{"type":"string"},"ebsVolumeType":{"type":"string"}},"type":"object"},"databricks:index/getInstancePoolPoolInfoGcpAttributes:getInstancePoolPoolInfoGcpAttributes":{"properties":{"gcpAvailability":{"type":"string"},"localSsdCount":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object","required":["localSsdCount","zoneId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttribute:getInstancePoolPoolInfoInstancePoolFleetAttribute":{"properties":{"fleetOnDemandOption":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeFleetOnDemandOption:getInstancePoolPoolInfoInstancePoolFleetAttributeFleetOnDemandOption"},"fleetSpotOption":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeFleetSpotOption:getInstancePoolPoolInfoInstancePoolFleetAttributeFleetSpotOption"},"launchTemplateOverrides":{"type":"array","items":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeLaunchTemplateOverride:getInstancePoolPoolInfoInstancePoolFleetAttributeLaunchTemplateOverride"}}},"type":"object","required":["launchTemplateOverrides"]},"databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeFleetOnDemandOption:getInstancePoolPoolInfoInstancePoolFleetAttributeFleetOnDemandOption":{"properties":{"allocationStrategy":{"type":"string"},"instancePoolsToUseCount":{"type":"integer"}},"type":"object","required":["allocationStrategy"]},"databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeFleetSpotOption:getInstancePoolPoolInfoInstancePoolFleetAttributeFleetSpotOption":{"properties":{"allocationStrategy":{"type":"string"},"instancePoolsToUseCount":{"type":"integer"}},"type":"object","required":["allocationStrategy"]},"databricks:index/getInstancePoolPoolInfoInstancePoolFleetAttributeLaunchTemplateOverride:getInstancePoolPoolInfoInstancePoolFleetAttributeLaunchTemplateOverride":{"properties":{"availabilityZone":{"type":"string"},"instanceType":{"type":"string"}},"type":"object","required":["availabilityZone","instanceType"]},"databricks:index/getInstancePoolPoolInfoNodeTypeFlexibility:getInstancePoolPoolInfoNodeTypeFlexibility":{"properties":{"alternateNodeTypeIds":{"type":"array","items":{"type":"string"}}},"type":"object","required":["alternateNodeTypeIds"]},"databricks:index/getInstancePoolPoolInfoPreloadedDockerImage:getInstancePoolPoolInfoPreloadedDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfoPreloadedDockerImageBasicAuth:getInstancePoolPoolInfoPreloadedDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getInstancePoolPoolInfoPreloadedDockerImageBasicAuth:getInstancePoolPoolInfoPreloadedDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getInstancePoolPoolInfoStats:getInstancePoolPoolInfoStats":{"properties":{"idleCount":{"type":"integer"},"pendingIdleCount":{"type":"integer"},"pendingUsedCount":{"type":"integer"},"usedCount":{"type":"integer"}},"type":"object"},"databricks:index/getInstancePoolProviderConfig:getInstancePoolProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getInstanceProfilesInstanceProfile:getInstanceProfilesInstanceProfile":{"properties":{"arn":{"type":"string","description":"ARN of the instance profile.\n"},"isMeta":{"type":"boolean","description":"Whether the instance profile is a meta instance profile or not.\n"},"name":{"type":"string","description":"Name of the instance profile.\n"},"roleArn":{"type":"string","description":"ARN of the role attached to the instance profile.\n"}},"type":"object","required":["arn","isMeta","name","roleArn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getInstanceProfilesProviderConfig:getInstanceProfilesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getJobJobSettings:getJobJobSettings":{"properties":{"createdTime":{"type":"integer"},"creatorUserName":{"type":"string"},"jobId":{"type":"integer"},"runAsUserName":{"type":"string"},"settings":{"$ref":"#/types/databricks:index/getJobJobSettingsSettings:getJobJobSettingsSettings"}},"type":"object","required":["runAsUserName"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getJobJobSettingsSettings:getJobJobSettingsSettings":{"properties":{"continuous":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsContinuous:getJobJobSettingsSettingsContinuous"},"dbtTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsDbtTask:getJobJobSettingsSettingsDbtTask"},"deployment":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsDeployment:getJobJobSettingsSettingsDeployment"},"description":{"type":"string"},"editMode":{"type":"string"},"emailNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsEmailNotifications:getJobJobSettingsSettingsEmailNotifications"},"environments":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsEnvironment:getJobJobSettingsSettingsEnvironment"}},"existingClusterId":{"type":"string"},"format":{"type":"string"},"gitSource":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsGitSource:getJobJobSettingsSettingsGitSource"},"health":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsHealth:getJobJobSettingsSettingsHealth"},"jobClusters":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobCluster:getJobJobSettingsSettingsJobCluster"}},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsLibrary:getJobJobSettingsSettingsLibrary"}},"maxConcurrentRuns":{"type":"integer"},"maxRetries":{"type":"integer"},"minRetryIntervalMillis":{"type":"integer"},"name":{"type":"string","description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by 
id.\n"},"newCluster":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewCluster:getJobJobSettingsSettingsNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNotebookTask:getJobJobSettingsSettingsNotebookTask"},"notificationSettings":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNotificationSettings:getJobJobSettingsSettingsNotificationSettings"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsParameter:getJobJobSettingsSettingsParameter"}},"pipelineTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsPipelineTask:getJobJobSettingsSettingsPipelineTask"},"pythonWheelTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsPythonWheelTask:getJobJobSettingsSettingsPythonWheelTask"},"queue":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsQueue:getJobJobSettingsSettingsQueue"},"retryOnTimeout":{"type":"boolean"},"runAs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsRunAs:getJobJobSettingsSettingsRunAs"},"runJobTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsRunJobTask:getJobJobSettingsSettingsRunJobTask"},"schedule":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsSchedule:getJobJobSettingsSettingsSchedule"},"sparkJarTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsSparkJarTask:getJobJobSettingsSettingsSparkJarTask"},"sparkPythonTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsSparkPythonTask:getJobJobSettingsSettingsSparkPythonTask"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsSparkSubmitTask:getJobJobSettingsSettingsSparkSubmitTask"},"tags":{"type":"object","additionalProperties":{"type":"string"}},"tasks":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTask:getJobJobSettingsSettingsTask"}},"timeoutSeconds":{"type":"integer"},"trigger":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTrigger:getJobJobSettingsSettingsTrigger"},"webhookNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotifications:getJobJobSettingsSettingsWebhookNotifications"}},"type":"object","required":["format","runAs"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getJobJobSettingsSettingsContinuous:getJobJobSettingsSettingsContinuous":{"properties":{"pauseStatus":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsDbtTask:getJobJobSettingsSettingsDbtTask":{"properties":{"catalog":{"type":"string"},"commands":{"type":"array","items":{"type":"string"}},"profilesDirectory":{"type":"string"},"projectDirectory":{"type":"string"},"schema":{"type":"string"},"source":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["commands"]},"databricks:index/getJobJobSettingsSettingsDeployment:getJobJobSettingsSettingsDeployment":{"properties":{"kind":{"type":"string"},"metadataFilePath":{"type":"string"}},"type":"object","required":["kind"]},"databricks:index/getJobJobSettingsSettingsEmailNotifications:getJobJobSettingsSettingsEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"}},"onFailures":{"type":"array","items":{"type":"string"}},"onStarts":{"type":"array","items":{"type":"string"}},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"}},"onSuccesses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/get
JobJobSettingsSettingsEnvironment:getJobJobSettingsSettingsEnvironment":{"properties":{"environmentKey":{"type":"string"},"spec":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsEnvironmentSpec:getJobJobSettingsSettingsEnvironmentSpec"}},"type":"object","required":["environmentKey"]},"databricks:index/getJobJobSettingsSettingsEnvironmentSpec:getJobJobSettingsSettingsEnvironmentSpec":{"properties":{"baseEnvironment":{"type":"string"},"client":{"type":"string"},"dependencies":{"type":"array","items":{"type":"string"}},"environmentVersion":{"type":"string"},"javaDependencies":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsGitSource:getJobJobSettingsSettingsGitSource":{"properties":{"branch":{"type":"string"},"commit":{"type":"string"},"jobSource":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsGitSourceJobSource:getJobJobSettingsSettingsGitSourceJobSource"},"provider":{"type":"string"},"tag":{"type":"string"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsGitSourceJobSource:getJobJobSettingsSettingsGitSourceJobSource":{"properties":{"dirtyState":{"type":"string"},"importFromGitBranch":{"type":"string"},"jobConfigPath":{"type":"string"}},"type":"object","required":["importFromGitBranch","jobConfigPath"]},"databricks:index/getJobJobSettingsSettingsHealth:getJobJobSettingsSettingsHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsHealthRule:getJobJobSettingsSettingsHealthRule"}}},"type":"object","required":["rules"]},"databricks:index/getJobJobSettingsSettingsHealthRule:getJobJobSettingsSettingsHealthRule":{"properties":{"metric":{"type":"string"},"op":{"type":"string"},"value":{"type":"integer"}},"type":"object","required":["metric","op","value"]},"databricks:index/getJobJobSettingsSettingsJobCluster:getJobJobSettingsSettingsJobCluster":{"properties":{"jobClusterKey":{"type":"string"},"newCluster":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewCluster:getJobJobSettingsSettingsJobClusterNewCluster"}},"type":"object","required":["jobClusterKey","newCluster"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewCluster:getJobJobSettingsSettingsJobClusterNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAutoscale:getJobJobSettingsSettingsJobClusterNewClusterAutoscale"},"autoterminationMinutes":{"type":"integer"},"awsAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAwsAttributes:getJobJobSettingsSettingsJobClusterNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAzureAttributes:getJobJobSettingsSettingsJobClusterNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConf:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfo:getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJob
ClusterNewClusterDockerImage:getJobJobSettingsSettingsJobClusterNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterGcpAttributes:getJobJobSettingsSettingsJobClusterNewClusterGcpAttributes"},"idempotencyToken":{"type":"string"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScript:getJobJobSettingsSettingsJobClusterNewClusterInitScript"}},"instancePoolId":{"type":"string"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"workloadType":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterWorkloadType:getJobJobSettingsSettingsJobClusterNewClusterWorkloadType"}},"type":"object","required":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId","numWorkers"],"language":{"nodejs":{"requiredInputs":["numWorkers"]}}},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAutoscale:getJobJobSettingsSettingsJobClusterNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAwsAttributes:getJobJobSettingsSettingsJobClusterNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterAzureAttributes:getJobJobSettingsSettingsJobClusterNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConf:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfS3:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfS3"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfS3:getJobJobSettingsSettingsJobClusterNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":[
"destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfo:getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsJobClusterNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterDockerImage:getJobJobSettingsSettingsJobClusterNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsJobClusterNewClusterDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsJobClusterNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterGcpAttributes:getJobJobSettingsSettingsJobClusterNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScript:getJobJobSettingsSettingsJobClusterNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptAbfss:getJobJobSettingsSettingsJobClusterNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptDbfs:getJobJobSettingsSettingsJobClusterNewClusterInitScriptDbfs"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptFile:getJobJobSettingsSettingsJobClusterNewClusterInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptGcs:getJobJobSettingsSettingsJobClusterNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptS3:getJobJobSettingsSettingsJobClusterNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptVolumes:getJobJobSettingsSettingsJobClusterNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptWorkspace:getJobJobSettingsSettingsJobClusterNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptAbfss:getJobJobSettingsSettingsJobClusterNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitS
criptDbfs:getJobJobSettingsSettingsJobClusterNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptFile:getJobJobSettingsSettingsJobClusterNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptGcs:getJobJobSettingsSettingsJobClusterNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptS3:getJobJobSettingsSettingsJobClusterNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptVolumes:getJobJobSettingsSettingsJobClusterNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterInitScriptWorkspace:getJobJobSettingsSettingsJobClusterNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterWorkloadType:getJobJobSettingsSettingsJobClusterNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsJobClusterNewClusterWorkloadTypeClients:getJobJobSettingsSettingsJobClusterNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getJobJobSettingsSettingsJobClusterNewClusterWorkloadTypeClients:getJobJobSettingsSettingsJobClusterNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsLibrary:getJobJobSettingsSettingsLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsLibraryCran:getJobJobSettingsSettingsLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsLibraryMaven:getJobJobSettingsSettingsLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsLibraryProviderConfig:getJobJobSettingsSettingsLibraryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsLibraryPypi:getJobJobSettingsSettingsLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsLibraryCran:getJobJobSettingsSettingsLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsLibraryMaven:getJobJobSettingsSettingsLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/getJobJobSettingsSettingsLibraryProviderConfig:getJobJobSettingsSettingsLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getJobJobSettingsSettingsLibraryPypi:getJobJobSettingsSettingsLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsNewCluster:getJobJobSettingsSettingsNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterAutoscale:getJobJobSettingsSettingsNewClusterAutoscale"},"autoterminationMinutes":{"type":"integer"},"awsAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterAwsAttributes:getJobJobSettingsSettingsNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterAzureAttributes:getJobJobSettingsSettingsNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConf:getJobJobSettingsSettingsNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterClusterMountInfo:getJobJobSettingsSettingsNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterDockerImage:getJobJobSettingsSettingsNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterGcpAttributes:getJobJobSettingsSettingsNewClusterGcpAttributes"},"idempotencyToken":{"type":"string"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScript:getJobJobSettingsSettingsNewClusterInitScript"}},"instancePoolId":{"type":"string"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"workloadType":{"$ref":"#/types/databricks:index/
getJobJobSettingsSettingsNewClusterWorkloadType:getJobJobSettingsSettingsNewClusterWorkloadType"}},"type":"object","required":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId","numWorkers"],"language":{"nodejs":{"requiredInputs":["numWorkers"]}}},"databricks:index/getJobJobSettingsSettingsNewClusterAutoscale:getJobJobSettingsSettingsNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterAwsAttributes:getJobJobSettingsSettingsNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterAzureAttributes:getJobJobSettingsSettingsNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConf:getJobJobSettingsSettingsNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConfS3:getJobJobSettingsSettingsNewClusterClusterLogConfS3"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterClusterLogConfS3:getJobJobSettingsSettingsNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterClusterMountInfo:getJobJobSettingsSettingsNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/getJobJobSettingsSettingsNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/getJobJobSettingsSettingsNewClusterDockerImage:getJobJobSettingsSettingsNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsNewClusterDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsNewClusterDockerImageBasicAuth":{"properties":{"passwo
rd":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getJobJobSettingsSettingsNewClusterGcpAttributes:getJobJobSettingsSettingsNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterInitScript:getJobJobSettingsSettingsNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptAbfss:getJobJobSettingsSettingsNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptDbfs:getJobJobSettingsSettingsNewClusterInitScriptDbfs"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptFile:getJobJobSettingsSettingsNewClusterInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptGcs:getJobJobSettingsSettingsNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptS3:getJobJobSettingsSettingsNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptVolumes:getJobJobSettingsSettingsNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterInitScriptWorkspace:getJobJobSettingsSettingsNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptAbfss:getJobJobSettingsSettingsNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptDbfs:getJobJobSettingsSettingsNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptFile:getJobJobSettingsSettingsNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptGcs:getJobJobSettingsSettingsNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptS3:getJobJobSettingsSettingsNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptVolumes:getJobJobSettingsSettingsNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterInitScriptWorkspace:getJobJobSettingsSettingsNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsNewClusterWorkloadType:getJobJobSettingsSettingsNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsNewClusterWorkloadTypeClients:getJo
bJobSettingsSettingsNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getJobJobSettingsSettingsNewClusterWorkloadTypeClients:getJobJobSettingsSettingsNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsNotebookTask:getJobJobSettingsSettingsNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"}},"notebookPath":{"type":"string"},"source":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["notebookPath"]},"databricks:index/getJobJobSettingsSettingsNotificationSettings:getJobJobSettingsSettingsNotificationSettings":{"properties":{"noAlertForCanceledRuns":{"type":"boolean"},"noAlertForSkippedRuns":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsParameter:getJobJobSettingsSettingsParameter":{"properties":{"default":{"type":"string"},"name":{"type":"string","description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by id.\n"}},"type":"object","required":["default","name"]},"databricks:index/getJobJobSettingsSettingsPipelineTask:getJobJobSettingsSettingsPipelineTask":{"properties":{"fullRefresh":{"type":"boolean"},"pipelineId":{"type":"string"}},"type":"object","required":["pipelineId"]},"databricks:index/getJobJobSettingsSettingsPythonWheelTask:getJobJobSettingsSettingsPythonWheelTask":{"properties":{"entryPoint":{"type":"string"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"}},"packageName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsQueue:getJobJobSettingsSettingsQueue":{"properties":{"enabled":{"type":"boolean"}},"type":"object","required":["enabled"]},"databricks:index/getJobJobSettingsSettingsRunAs:getJobJobSettingsSettingsRunAs":{"properties":{"servicePrincipalName":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsRunJobTask:getJobJobSettingsSettingsRunJobTask":{"properties":{"jobId":{"type":"integer"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["jobId"]},"databricks:index/getJobJobSettingsSettingsSchedule:getJobJobSettingsSettingsSchedule":{"properties":{"pauseStatus":{"type":"string"},"quartzCronExpression":{"type":"string"},"timezoneId":{"type":"string"}},"type":"object","required":["quartzCronExpression","timezoneId"]},"databricks:index/getJobJobSettingsSettingsSparkJarTask:getJobJobSettingsSettingsSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsSparkPythonTask:getJobJobSettingsSettingsSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}},"pythonFile":{"type":"string"},"source":{"type":"string"}},"type":"object","required":["pythonFile"]},"databricks:index/getJobJobSettingsSettingsSparkSubmitTask:getJobJobSettingsSettingsSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSe
ttingsSettingsTask:getJobJobSettingsSettingsTask":{"properties":{"conditionTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskConditionTask:getJobJobSettingsSettingsTaskConditionTask"},"dashboardTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskDashboardTask:getJobJobSettingsSettingsTaskDashboardTask"},"dbtTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskDbtTask:getJobJobSettingsSettingsTaskDbtTask"},"dependsOns":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskDependsOn:getJobJobSettingsSettingsTaskDependsOn"}},"description":{"type":"string"},"emailNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskEmailNotifications:getJobJobSettingsSettingsTaskEmailNotifications"},"environmentKey":{"type":"string"},"existingClusterId":{"type":"string"},"forEachTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTask:getJobJobSettingsSettingsTaskForEachTask"},"health":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskHealth:getJobJobSettingsSettingsTaskHealth"},"jobClusterKey":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskLibrary:getJobJobSettingsSettingsTaskLibrary"}},"maxRetries":{"type":"integer"},"minRetryIntervalMillis":{"type":"integer"},"newCluster":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewCluster:getJobJobSettingsSettingsTaskNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNotebookTask:getJobJobSettingsSettingsTaskNotebookTask"},"notificationSettings":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNotificationSettings:getJobJobSettingsSettingsTaskNotificationSettings"},"pipelineTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskPipelineTask:getJobJobSettingsSettingsTaskPipelineTask"},"powerBiTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskPowerBiTask:getJobJobSettingsSettingsTaskPowerBiTask"},"pythonWheelTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskPythonWheelTask:getJobJobSettingsSettingsTaskPythonWheelTask"},"retryOnTimeout":{"type":"boolean"},"runIf":{"type":"string"},"runJobTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskRunJobTask:getJobJobSettingsSettingsTaskRunJobTask"},"sparkJarTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSparkJarTask:getJobJobSettingsSettingsTaskSparkJarTask"},"sparkPythonTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSparkPythonTask:getJobJobSettingsSettingsTaskSparkPythonTask"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSparkSubmitTask:getJobJobSettingsSettingsTaskSparkSubmitTask"},"sqlTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTask:getJobJobSettingsSettingsTaskSqlTask"},"taskKey":{"type":"string"},"timeoutSeconds":{"type":"integer"},"webhookNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotifications:getJobJobSettingsSettingsTaskWebhookNotifications"}},"type":"object","required":["retryOnTimeout","taskKey"],"language":{"nodejs":{"requiredInputs":["taskKey"]}}},"databricks:index/getJobJobSettingsSettingsTaskConditionTask:getJobJobSettingsSettingsTaskConditionTask":{"properties":{"left":{"type":"string"},"op":{"type":"string"},"right":{"type":"string"}},"type":"object","required":["left","op","right"]},"databricks:index/getJobJobSett
ingsSettingsTaskDashboardTask:getJobJobSettingsSettingsTaskDashboardTask":{"properties":{"dashboardId":{"type":"string"},"filters":{"type":"object","additionalProperties":{"type":"string"}},"subscription":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskDashboardTaskSubscription:getJobJobSettingsSettingsTaskDashboardTaskSubscription"},"warehouseId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskDashboardTaskSubscription:getJobJobSettingsSettingsTaskDashboardTaskSubscription":{"properties":{"customSubject":{"type":"string"},"paused":{"type":"boolean"},"subscribers":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskDashboardTaskSubscriptionSubscriber:getJobJobSettingsSettingsTaskDashboardTaskSubscriptionSubscriber"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskDashboardTaskSubscriptionSubscriber:getJobJobSettingsSettingsTaskDashboardTaskSubscriptionSubscriber":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskDbtTask:getJobJobSettingsSettingsTaskDbtTask":{"properties":{"catalog":{"type":"string"},"commands":{"type":"array","items":{"type":"string"}},"profilesDirectory":{"type":"string"},"projectDirectory":{"type":"string"},"schema":{"type":"string"},"source":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["commands"]},"databricks:index/getJobJobSettingsSettingsTaskDependsOn:getJobJobSettingsSettingsTaskDependsOn":{"properties":{"outcome":{"type":"string"},"taskKey":{"type":"string"}},"type":"object","required":["taskKey"]},"databricks:index/getJobJobSettingsSettingsTaskEmailNotifications:getJobJobSettingsSettingsTaskEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"}},"onFailures":{"type":"array","items":{"type":"string"}},"onStarts":{"type":"array","items":{"type":"string"}},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"}},"onSuccesses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTask:getJobJobSettingsSettingsTaskForEachTask":{"properties":{"concurrency":{"type":"integer"},"inputs":{"type":"string"},"task":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTask:getJobJobSettingsSettingsTaskForEachTaskTask"}},"type":"object","required":["inputs","task"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTask:getJobJobSettingsSettingsTaskForEachTaskTask":{"properties":{"conditionTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskConditionTask:getJobJobSettingsSettingsTaskForEachTaskTaskConditionTask"},"dashboardTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTask:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTask"},"dbtTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDbtTask:getJobJobSettingsSettingsTaskForEachTaskTaskDbtTask"},"dependsOns":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDependsOn:getJobJobSettingsSettingsTaskForEachTaskTaskDependsOn"}},"description":{"type":"string"},"emailNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskEmailNotifications:getJobJobSettingsSettingsTaskForEachTaskTaskEmailNotifications"},"en
vironmentKey":{"type":"string"},"existingClusterId":{"type":"string"},"health":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskHealth:getJobJobSettingsSettingsTaskForEachTaskTaskHealth"},"jobClusterKey":{"type":"string"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibrary:getJobJobSettingsSettingsTaskForEachTaskTaskLibrary"}},"maxRetries":{"type":"integer"},"minRetryIntervalMillis":{"type":"integer"},"newCluster":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewCluster:getJobJobSettingsSettingsTaskForEachTaskTaskNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNotebookTask:getJobJobSettingsSettingsTaskForEachTaskTaskNotebookTask"},"notificationSettings":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNotificationSettings:getJobJobSettingsSettingsTaskForEachTaskTaskNotificationSettings"},"pipelineTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPipelineTask:getJobJobSettingsSettingsTaskForEachTaskTaskPipelineTask"},"powerBiTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTask:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTask"},"pythonWheelTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPythonWheelTask:getJobJobSettingsSettingsTaskForEachTaskTaskPythonWheelTask"},"retryOnTimeout":{"type":"boolean"},"runIf":{"type":"string"},"runJobTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskRunJobTask:getJobJobSettingsSettingsTaskForEachTaskTaskRunJobTask"},"sparkJarTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkJarTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkJarTask"},"sparkPythonTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkPythonTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkPythonTask"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkSubmitTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkSubmitTask"},"sqlTask":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTask:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTask"},"taskKey":{"type":"string"},"timeoutSeconds":{"type":"integer"},"webhookNotifications":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotifications:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotifications"}},"type":"object","required":["retryOnTimeout","taskKey"],"language":{"nodejs":{"requiredInputs":["taskKey"]}}},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskConditionTask:getJobJobSettingsSettingsTaskForEachTaskTaskConditionTask":{"properties":{"left":{"type":"string"},"op":{"type":"string"},"right":{"type":"string"}},"type":"object","required":["left","op","right"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTask:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTask":{"properties":{"dashboardId":{"type":"string"},"filters":{"type":"object","additionalProperties":{"type":"string"}},"subscription":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscription"},"warehouseId":{"type":"string"}},"type":"object"},"databricks:index/getJ
obJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscription":{"properties":{"customSubject":{"type":"string"},"paused":{"type":"boolean"},"subscribers":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber:getJobJobSettingsSettingsTaskForEachTaskTaskDashboardTaskSubscriptionSubscriber":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDbtTask:getJobJobSettingsSettingsTaskForEachTaskTaskDbtTask":{"properties":{"catalog":{"type":"string"},"commands":{"type":"array","items":{"type":"string"}},"profilesDirectory":{"type":"string"},"projectDirectory":{"type":"string"},"schema":{"type":"string"},"source":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["commands"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskDependsOn:getJobJobSettingsSettingsTaskForEachTaskTaskDependsOn":{"properties":{"outcome":{"type":"string"},"taskKey":{"type":"string"}},"type":"object","required":["taskKey"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskEmailNotifications:getJobJobSettingsSettingsTaskForEachTaskTaskEmailNotifications":{"properties":{"noAlertForSkippedRuns":{"type":"boolean"},"onDurationWarningThresholdExceededs":{"type":"array","items":{"type":"string"}},"onFailures":{"type":"array","items":{"type":"string"}},"onStarts":{"type":"array","items":{"type":"string"}},"onStreamingBacklogExceededs":{"type":"array","items":{"type":"string"}},"onSuccesses":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskHealth:getJobJobSettingsSettingsTaskForEachTaskTaskHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskHealthRule:getJobJobSettingsSettingsTaskForEachTaskTaskHealthRule"}}},"type":"object","required":["rules"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskHealthRule:getJobJobSettingsSettingsTaskForEachTaskTaskHealthRule":{"properties":{"metric":{"type":"string"},"op":{"type":"string"},"value":{"type":"integer"}},"type":"object","required":["metric","op","value"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibrary:getJobJobSettingsSettingsTaskForEachTaskTaskLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryCran:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryMaven:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryProviderConfig:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryPypi:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryCran:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryMaven:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryProviderConfig:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskLibraryPypi:getJobJobSettingsSettingsTaskForEachTaskTaskLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewCluster:getJobJobSettingsSettingsTaskForEachTaskTaskNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAutoscale:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAutoscale"},"autoterminationMinutes":{"type":"integer"},"awsAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAwsAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAzureAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConf:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfo:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImage:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterGcpAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterGcpAttributes"},"idempotencyToken":{"type":"string"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScript:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScript"}},"instancePoolId":{"type":"string"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"workloadType":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadType:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadType"}},"type":"object","required":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId","numWorkers"],"language":{"nodejs":{"requiredInputs":["numWorkers"]}}},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAutoscale:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"t
ype":"integer"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAwsAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProfileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAzureAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConf:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfS3:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfS3"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfS3:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfo:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImage:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterDockerImageBasicAuth":{
"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterGcpAttributes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScript:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptAbfss:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptDbfs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptDbfs"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptFile:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptGcs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptS3:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptVolumes:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptWorkspace:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptAbfss:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptDbfs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptFile:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptGcs:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptS3:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptVolumes:getJobJobSettingsSettingsTask
ForEachTaskTaskNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptWorkspace:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadType:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadTypeClients:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadTypeClients:getJobJobSettingsSettingsTaskForEachTaskTaskNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNotebookTask:getJobJobSettingsSettingsTaskForEachTaskTaskNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"}},"notebookPath":{"type":"string"},"source":{"type":"string"},"warehouseId":{"type":"string"}},"type":"object","required":["notebookPath"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskNotificationSettings:getJobJobSettingsSettingsTaskForEachTaskTaskNotificationSettings":{"properties":{"alertOnLastAttempt":{"type":"boolean"},"noAlertForCanceledRuns":{"type":"boolean"},"noAlertForSkippedRuns":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPipelineTask:getJobJobSettingsSettingsTaskForEachTaskTaskPipelineTask":{"properties":{"fullRefresh":{"type":"boolean"},"pipelineId":{"type":"string"}},"type":"object","required":["pipelineId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTask:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTask":{"properties":{"connectionResourceName":{"type":"string"},"powerBiModel":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskPowerBiModel:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskPowerBiModel"},"refreshAfterUpdate":{"type":"boolean"},"tables":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskTable:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskTable"}},"warehouseId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskPowerBiModel:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskPowerBiModel":{"properties":{"authenticationMethod":{"type":"string"},"modelName":{"type":"string"},"overwriteExisting":{"type":"boolean"},"storageMode":{"type":"string"},"workspaceName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskTable:getJobJobSettingsSettingsTaskForEachTaskTaskPowerBiTaskTable":{"properties":{"catalog":{"type":"string"},"name":{"type":"string","description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by 
id.\n"},"schema":{"type":"string"},"storageMode":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskPythonWheelTask:getJobJobSettingsSettingsTaskForEachTaskTaskPythonWheelTask":{"properties":{"entryPoint":{"type":"string"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"}},"packageName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskRunJobTask:getJobJobSettingsSettingsTaskForEachTaskTaskRunJobTask":{"properties":{"jobId":{"type":"integer"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["jobId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkJarTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkPythonTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}},"pythonFile":{"type":"string"},"source":{"type":"string"}},"type":"object","required":["pythonFile"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSparkSubmitTask:getJobJobSettingsSettingsTaskForEachTaskTaskSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTask:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTask":{"properties":{"alert":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlert:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlert"},"dashboard":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboard:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboard"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskFile:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskFile"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"query":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskQuery:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskQuery"},"warehouseId":{"type":"string"}},"type":"object","required":["warehouseId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlert:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlert":{"properties":{"alertId":{"type":"string"},"pauseSubscriptions":{"type":"boolean"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlertSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlertSubscription"}}},"type":"object","required":["alertId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlertSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskAlertSubscription":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboard:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboard":{"properties":{"customSubject":{"type":"string"},"dashboardId":{"type":"string"},"pauseSubscriptions":{"type":"boolean"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getJo
bJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboardSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboardSubscription"}}},"type":"object","required":["dashboardId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboardSubscription:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskDashboardSubscription":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskFile:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskFile":{"properties":{"path":{"type":"string"},"source":{"type":"string"}},"type":"object","required":["path"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskQuery:getJobJobSettingsSettingsTaskForEachTaskTaskSqlTaskQuery":{"properties":{"queryId":{"type":"string"}},"type":"object","required":["queryId"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotifications:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded"}},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnFailure:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnFailure"}},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStart:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStart"}},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded"}},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnSuccess:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnSuccess"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnDurationWarningThresholdExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnFailure:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by 
name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStart:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnSuccess:getJobJobSettingsSettingsTaskForEachTaskTaskWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskHealth:getJobJobSettingsSettingsTaskHealth":{"properties":{"rules":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskHealthRule:getJobJobSettingsSettingsTaskHealthRule"}}},"type":"object","required":["rules"]},"databricks:index/getJobJobSettingsSettingsTaskHealthRule:getJobJobSettingsSettingsTaskHealthRule":{"properties":{"metric":{"type":"string"},"op":{"type":"string"},"value":{"type":"integer"}},"type":"object","required":["metric","op","value"]},"databricks:index/getJobJobSettingsSettingsTaskLibrary:getJobJobSettingsSettingsTaskLibrary":{"properties":{"cran":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskLibraryCran:getJobJobSettingsSettingsTaskLibraryCran"},"egg":{"type":"string","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskLibraryMaven:getJobJobSettingsSettingsTaskLibraryMaven"},"providerConfig":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskLibraryProviderConfig:getJobJobSettingsSettingsTaskLibraryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskLibraryPypi:getJobJobSettingsSettingsTaskLibraryPypi"},"requirements":{"type":"string"},"whl":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskLibraryCran:getJobJobSettingsSettingsTaskLibraryCran":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsTaskLibraryMaven:getJobJobSettingsSettingsTaskLibraryMaven":{"properties":{"coordinates":{"type":"string"},"exclusions":{"type":"array","items":{"type":"string"}},"repo":{"type":"string"}},"type":"object","required":["coordinates"]},"databricks:index/getJobJobSettingsSettingsTaskLibraryProviderConfig:getJobJobSettingsSettingsTaskLibraryProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getJobJobSettingsSettingsTaskLibraryPypi:getJobJobSettingsSettingsTaskLibraryPypi":{"properties":{"package":{"type":"string"},"repo":{"type":"string"}},"type":"object","required":["package"]},"databricks:index/getJobJobSettingsSettingsTaskNewCluster:getJobJobSettingsSettingsTaskNewCluster":{"properties":{"applyPolicyDefaultValues":{"type":"boolean"},"autoscale":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterAutoscale:getJobJobSettingsSettingsTaskNewClusterAutoscale"},"autoterminationMinutes":{"type":"integer"},"awsAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterAwsAttributes:getJobJobSettingsSettingsTaskNewClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterAzureAttributes:getJobJobSettingsSettingsTaskNewClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConf:getJobJobSettingsSettingsTaskNewClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterMountInfo:getJobJobSettingsSettingsTaskNewClusterClusterMountInfo"}},"clusterName":{"type":"string"},"customTags":{"type":"object","additionalProperties":{"type":"string"}},"dataSecurityMode":{"type":"string"},"dockerImage":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterDockerImage:getJobJobSettingsSettingsTaskNewClusterDockerImage"},"driverInstancePoolId":{"type":"string"},"driverNodeTypeId":{"type":"string"},"enableElasticDisk":{"type":"boolean"},"enableLocalDiskEncryption":{"type":"boolean"},"gcpAttributes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterGcpAttributes:getJobJobSettingsSettingsTaskNewClusterGcpAttributes"},"idempotencyToken":{"type":"string"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScript:getJobJobSettingsSettingsTaskNewClusterInitScript"}},"instancePoolId":{"type":"string"},"nodeTypeId":{"type":"string"},"numWorkers":{"type":"integer"},"policyId":{"type":"string"},"runtimeEngine":{"type":"string"},"singleUserName":{"type":"string"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"}},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"}},"sparkVersion":{"type":"string"},"sshPublicKeys":{"type":"array","items":{"type":"string"}},"workloadType":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterWorkloadType:getJobJobSettingsSettingsTaskNewClusterWorkloadType"}},"type":"object","required":["driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId","numWorkers"],"language":{"nodejs":{"requiredInputs":["numWorkers"]}}},"databricks:index/getJobJobSettingsSettingsTaskNewClusterAutoscale:getJobJobSettingsSettingsTaskNewClusterAutoscale":{"properties":{"maxWorkers":{"type":"integer"},"minWorkers":{"type":"integer"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterAwsAttributes:getJobJobSettingsSettingsTaskNewClusterAwsAttributes":{"properties":{"availability":{"type":"string"},"ebsVolumeCount":{"type":"integer"},"ebsVolumeSize":{"type":"integer"},"ebsVolumeType":{"type":"string"},"firstOnDemand":{"type":"integer"},"instanceProf
ileArn":{"type":"string"},"spotBidPricePercent":{"type":"integer"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterAzureAttributes:getJobJobSettingsSettingsTaskNewClusterAzureAttributes":{"properties":{"availability":{"type":"string"},"firstOnDemand":{"type":"integer"},"spotBidMaxPrice":{"type":"number"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConf:getJobJobSettingsSettingsTaskNewClusterClusterLogConf":{"properties":{"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsTaskNewClusterClusterLogConfDbfs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConfS3:getJobJobSettingsSettingsTaskNewClusterClusterLogConfS3"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConfDbfs:getJobJobSettingsSettingsTaskNewClusterClusterLogConfDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterLogConfS3:getJobJobSettingsSettingsTaskNewClusterClusterLogConfS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterMountInfo:getJobJobSettingsSettingsTaskNewClusterClusterMountInfo":{"properties":{"localMountDirPath":{"type":"string"},"networkFilesystemInfo":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsTaskNewClusterClusterMountInfoNetworkFilesystemInfo"},"remoteMountDirPath":{"type":"string"}},"type":"object","required":["localMountDirPath","networkFilesystemInfo"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterClusterMountInfoNetworkFilesystemInfo:getJobJobSettingsSettingsTaskNewClusterClusterMountInfoNetworkFilesystemInfo":{"properties":{"mountOptions":{"type":"string"},"serverAddress":{"type":"string"}},"type":"object","required":["serverAddress"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterDockerImage:getJobJobSettingsSettingsTaskNewClusterDockerImage":{"properties":{"basicAuth":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsTaskNewClusterDockerImageBasicAuth"},"url":{"type":"string"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterDockerImageBasicAuth:getJobJobSettingsSettingsTaskNewClusterDockerImageBasicAuth":{"properties":{"password":{"type":"string","secret":true},"username":{"type":"string"}},"type":"object","required":["password","username"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterGcpAttributes:getJobJobSettingsSettingsTaskNewClusterGcpAttributes":{"properties":{"availability":{"type":"string"},"bootDiskSize":{"type":"integer"},"googleServiceAccount":{"type":"string"},"localSsdCount":{"type":"integer"},"usePreemptibleExecutors":{"type":"boolean"},"zoneId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScript:getJobJobSettingsSettingsTaskNewClusterInitScript":{"properties":{"abfss":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptAbfss:getJobJ
obSettingsSettingsTaskNewClusterInitScriptAbfss"},"dbfs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptDbfs:getJobJobSettingsSettingsTaskNewClusterInitScriptDbfs"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptFile:getJobJobSettingsSettingsTaskNewClusterInitScriptFile"},"gcs":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptGcs:getJobJobSettingsSettingsTaskNewClusterInitScriptGcs"},"s3":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptS3:getJobJobSettingsSettingsTaskNewClusterInitScriptS3"},"volumes":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptVolumes:getJobJobSettingsSettingsTaskNewClusterInitScriptVolumes"},"workspace":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptWorkspace:getJobJobSettingsSettingsTaskNewClusterInitScriptWorkspace"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptAbfss:getJobJobSettingsSettingsTaskNewClusterInitScriptAbfss":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptDbfs:getJobJobSettingsSettingsTaskNewClusterInitScriptDbfs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptFile:getJobJobSettingsSettingsTaskNewClusterInitScriptFile":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptGcs:getJobJobSettingsSettingsTaskNewClusterInitScriptGcs":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptS3:getJobJobSettingsSettingsTaskNewClusterInitScriptS3":{"properties":{"cannedAcl":{"type":"string"},"destination":{"type":"string"},"enableEncryption":{"type":"boolean"},"encryptionType":{"type":"string"},"endpoint":{"type":"string"},"kmsKey":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptVolumes:getJobJobSettingsSettingsTaskNewClusterInitScriptVolumes":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterInitScriptWorkspace:getJobJobSettingsSettingsTaskNewClusterInitScriptWorkspace":{"properties":{"destination":{"type":"string"}},"type":"object","required":["destination"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterWorkloadType:getJobJobSettingsSettingsTaskNewClusterWorkloadType":{"properties":{"clients":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskNewClusterWorkloadTypeClients:getJobJobSettingsSettingsTaskNewClusterWorkloadTypeClients"}},"type":"object","required":["clients"]},"databricks:index/getJobJobSettingsSettingsTaskNewClusterWorkloadTypeClients:getJobJobSettingsSettingsTaskNewClusterWorkloadTypeClients":{"properties":{"jobs":{"type":"boolean"},"notebooks":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskNotebookTask:getJobJobSettingsSettingsTaskNotebookTask":{"properties":{"baseParameters":{"type":"object","additionalProperties":{"type":"string"}},"notebookPath":{"type":"string"},"source":{"type":"string"},"warehouseI
d":{"type":"string"}},"type":"object","required":["notebookPath"]},"databricks:index/getJobJobSettingsSettingsTaskNotificationSettings:getJobJobSettingsSettingsTaskNotificationSettings":{"properties":{"alertOnLastAttempt":{"type":"boolean"},"noAlertForCanceledRuns":{"type":"boolean"},"noAlertForSkippedRuns":{"type":"boolean"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskPipelineTask:getJobJobSettingsSettingsTaskPipelineTask":{"properties":{"fullRefresh":{"type":"boolean"},"pipelineId":{"type":"string"}},"type":"object","required":["pipelineId"]},"databricks:index/getJobJobSettingsSettingsTaskPowerBiTask:getJobJobSettingsSettingsTaskPowerBiTask":{"properties":{"connectionResourceName":{"type":"string"},"powerBiModel":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskPowerBiTaskPowerBiModel:getJobJobSettingsSettingsTaskPowerBiTaskPowerBiModel"},"refreshAfterUpdate":{"type":"boolean"},"tables":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskPowerBiTaskTable:getJobJobSettingsSettingsTaskPowerBiTaskTable"}},"warehouseId":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskPowerBiTaskPowerBiModel:getJobJobSettingsSettingsTaskPowerBiTaskPowerBiModel":{"properties":{"authenticationMethod":{"type":"string"},"modelName":{"type":"string"},"overwriteExisting":{"type":"boolean"},"storageMode":{"type":"string"},"workspaceName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskPowerBiTaskTable:getJobJobSettingsSettingsTaskPowerBiTaskTable":{"properties":{"catalog":{"type":"string"},"name":{"type":"string","description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by 
id.\n"},"schema":{"type":"string"},"storageMode":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskPythonWheelTask:getJobJobSettingsSettingsTaskPythonWheelTask":{"properties":{"entryPoint":{"type":"string"},"namedParameters":{"type":"object","additionalProperties":{"type":"string"}},"packageName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskRunJobTask:getJobJobSettingsSettingsTaskRunJobTask":{"properties":{"jobId":{"type":"integer"},"jobParameters":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["jobId"]},"databricks:index/getJobJobSettingsSettingsTaskSparkJarTask:getJobJobSettingsSettingsTaskSparkJarTask":{"properties":{"jarUri":{"type":"string"},"mainClassName":{"type":"string"},"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskSparkPythonTask:getJobJobSettingsSettingsTaskSparkPythonTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}},"pythonFile":{"type":"string"},"source":{"type":"string"}},"type":"object","required":["pythonFile"]},"databricks:index/getJobJobSettingsSettingsTaskSparkSubmitTask:getJobJobSettingsSettingsTaskSparkSubmitTask":{"properties":{"parameters":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskSqlTask:getJobJobSettingsSettingsTaskSqlTask":{"properties":{"alert":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskAlert:getJobJobSettingsSettingsTaskSqlTaskAlert"},"dashboard":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskDashboard:getJobJobSettingsSettingsTaskSqlTaskDashboard"},"file":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskFile:getJobJobSettingsSettingsTaskSqlTaskFile"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"query":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskQuery:getJobJobSettingsSettingsTaskSqlTaskQuery"},"warehouseId":{"type":"string"}},"type":"object","required":["warehouseId"]},"databricks:index/getJobJobSettingsSettingsTaskSqlTaskAlert:getJobJobSettingsSettingsTaskSqlTaskAlert":{"properties":{"alertId":{"type":"string"},"pauseSubscriptions":{"type":"boolean"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskAlertSubscription:getJobJobSettingsSettingsTaskSqlTaskAlertSubscription"}}},"type":"object","required":["alertId"]},"databricks:index/getJobJobSettingsSettingsTaskSqlTaskAlertSubscription:getJobJobSettingsSettingsTaskSqlTaskAlertSubscription":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskSqlTaskDashboard:getJobJobSettingsSettingsTaskSqlTaskDashboard":{"properties":{"customSubject":{"type":"string"},"dashboardId":{"type":"string"},"pauseSubscriptions":{"type":"boolean"},"subscriptions":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskSqlTaskDashboardSubscription:getJobJobSettingsSettingsTaskSqlTaskDashboardSubscription"}}},"type":"object","required":["dashboardId"]},"databricks:index/getJobJobSettingsSettingsTaskSqlTaskDashboardSubscription:getJobJobSettingsSettingsTaskSqlTaskDashboardSubscription":{"properties":{"destinationId":{"type":"string"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getJobJobSet
tingsSettingsTaskSqlTaskFile:getJobJobSettingsSettingsTaskSqlTaskFile":{"properties":{"path":{"type":"string"},"source":{"type":"string"}},"type":"object","required":["path"]},"databricks:index/getJobJobSettingsSettingsTaskSqlTaskQuery:getJobJobSettingsSettingsTaskSqlTaskQuery":{"properties":{"queryId":{"type":"string"}},"type":"object","required":["queryId"]},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotifications:getJobJobSettingsSettingsTaskWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsTaskWebhookNotificationsOnDurationWarningThresholdExceeded"}},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnFailure:getJobJobSettingsSettingsTaskWebhookNotificationsOnFailure"}},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnStart:getJobJobSettingsSettingsTaskWebhookNotificationsOnStart"}},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsTaskWebhookNotificationsOnStreamingBacklogExceeded"}},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnSuccess:getJobJobSettingsSettingsTaskWebhookNotificationsOnSuccess"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsTaskWebhookNotificationsOnDurationWarningThresholdExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnFailure:getJobJobSettingsSettingsTaskWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnStart:getJobJobSettingsSettingsTaskWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsTaskWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" 
databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTaskWebhookNotificationsOnSuccess:getJobJobSettingsSettingsTaskWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsTrigger:getJobJobSettingsSettingsTrigger":{"properties":{"fileArrival":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTriggerFileArrival:getJobJobSettingsSettingsTriggerFileArrival"},"pauseStatus":{"type":"string"},"periodic":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTriggerPeriodic:getJobJobSettingsSettingsTriggerPeriodic"},"tableUpdate":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsTriggerTableUpdate:getJobJobSettingsSettingsTriggerTableUpdate"}},"type":"object"},"databricks:index/getJobJobSettingsSettingsTriggerFileArrival:getJobJobSettingsSettingsTriggerFileArrival":{"properties":{"minTimeBetweenTriggersSeconds":{"type":"integer"},"url":{"type":"string"},"waitAfterLastChangeSeconds":{"type":"integer"}},"type":"object","required":["url"]},"databricks:index/getJobJobSettingsSettingsTriggerPeriodic:getJobJobSettingsSettingsTriggerPeriodic":{"properties":{"interval":{"type":"integer"},"unit":{"type":"string"}},"type":"object","required":["interval","unit"]},"databricks:index/getJobJobSettingsSettingsTriggerTableUpdate:getJobJobSettingsSettingsTriggerTableUpdate":{"properties":{"condition":{"type":"string"},"minTimeBetweenTriggersSeconds":{"type":"integer"},"tableNames":{"type":"array","items":{"type":"string"}},"waitAfterLastChangeSeconds":{"type":"integer"}},"type":"object","required":["tableNames"]},"databricks:index/getJobJobSettingsSettingsWebhookNotifications:getJobJobSettingsSettingsWebhookNotifications":{"properties":{"onDurationWarningThresholdExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsWebhookNotificationsOnDurationWarningThresholdExceeded"}},"onFailures":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnFailure:getJobJobSettingsSettingsWebhookNotificationsOnFailure"}},"onStarts":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnStart:getJobJobSettingsSettingsWebhookNotificationsOnStart"}},"onStreamingBacklogExceededs":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsWebhookNotificationsOnStreamingBacklogExceeded"}},"onSuccesses":{"type":"array","items":{"$ref":"#/types/databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnSuccess:getJobJobSettingsSettingsWebhookNotificationsOnSuccess"}}},"type":"object"},"databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnDurationWarningThresholdExceeded:getJobJobSettingsSettingsWebhookNotificationsOnDurationWarningThres
holdExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnFailure:getJobJobSettingsSettingsWebhookNotificationsOnFailure":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnStart:getJobJobSettingsSettingsWebhookNotificationsOnStart":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnStreamingBacklogExceeded:getJobJobSettingsSettingsWebhookNotificationsOnStreamingBacklogExceeded":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobJobSettingsSettingsWebhookNotificationsOnSuccess:getJobJobSettingsSettingsWebhookNotificationsOnSuccess":{"properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"}},"type":"object","required":["id"]},"databricks:index/getJobProviderConfig:getJobProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getJobsProviderConfig:getJobsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMaterializedFeaturesFeatureTagProviderConfig:getMaterializedFeaturesFeatureTagProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMaterializedFeaturesFeatureTagsFeatureTag:getMaterializedFeaturesFeatureTagsFeatureTag":{"properties":{"key":{"type":"string","description":"(string)\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagsFeatureTagProviderConfig:getMaterializedFeaturesFeatureTagsFeatureTagProviderConfig","description":"Configure the provider for management through account provider.\n"},"value":{"type":"string","description":"(string)\n"}},"type":"object","required":["key","value"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getMaterializedFeaturesFeatureTagsFeatureTagProviderConfig:getMaterializedFeaturesFeatureTagsFeatureTagProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getMaterializedFeaturesFeatureTagsProviderConfig:getMaterializedFeaturesFeatureTagsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMetastoreMetastoreInfo:getMetastoreMetastoreInfo":{"properties":{"cloud":{"type":"string","description":"Cloud vendor of the metastore home shard (e.g., \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`azure`\" pulumi-lang-dotnet=\"`Azure`\" pulumi-lang-go=\"`azure`\" pulumi-lang-python=\"`azure`\" pulumi-lang-yaml=\"`azure`\" pulumi-lang-java=\"`azure`\"\u003e`azure`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`gcp`\" pulumi-lang-dotnet=\"`Gcp`\" pulumi-lang-go=\"`gcp`\" pulumi-lang-python=\"`gcp`\" pulumi-lang-yaml=\"`gcp`\" pulumi-lang-java=\"`gcp`\"\u003e`gcp`\u003c/span\u003e).\n"},"createdAt":{"type":"integer","description":"Time at which the metastore was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of metastore creator.\n"},"defaultDataAccessConfigId":{"type":"string","description":"Unique identifier of the metastore's default data access configuration.\n"},"deltaSharingOrganizationName":{"type":"string","description":"The organization name of a Delta Sharing entity. This field is used for Databricks to Databricks sharing.\n"},"deltaSharingRecipientTokenLifetimeInSeconds":{"type":"integer","description":"Used to set expiration duration in seconds on recipient data access tokens.\n"},"deltaSharingScope":{"type":"string","description":"Used to enable delta sharing on the metastore. Valid values: INTERNAL, INTERNAL_AND_EXTERNAL. 
INTERNAL only allows sharing within the same account, and INTERNAL_AND_EXTERNAL allows cross account sharing and token based sharing.\n"},"externalAccessEnabled":{"type":"boolean","description":"Whether to allow non-DBR clients to directly access entities under the metastore.\n"},"globalMetastoreId":{"type":"string","description":"Globally unique metastore ID across clouds and regions, of the form `cloud:region:metastore_id`.\n"},"metastoreId":{"type":"string","description":"ID of the metastore\n"},"name":{"type":"string","description":"Name of the metastore\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the metastore owner.\n"},"privilegeModelVersion":{"type":"string","description":"Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).\n"},"region":{"type":"string","description":"Region of the metastore\n"},"storageRoot":{"type":"string","description":"Path on cloud storage account, where managed \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e are stored.\n"},"storageRootCredentialId":{"type":"string","description":"UUID of storage credential to access the metastore storage_root.\n"},"storageRootCredentialName":{"type":"string","description":"Name of the storage credential to access the metastore storage_root.\n"},"updatedAt":{"type":"integer","description":"Time at which the metastore was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the metastore.\n"}},"type":"object"},"databricks:index/getMlflowExperimentProviderConfig:getMlflowExperimentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMlflowExperimentTag:getMlflowExperimentTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getMlflowModelLatestVersion:getMlflowModelLatestVersion":{"properties":{"creationTimestamp":{"type":"integer"},"currentStage":{"type":"string"},"description":{"type":"string","description":"User-specified description for the object.\n"},"lastUpdatedTimestamp":{"type":"integer"},"name":{"type":"string","description":"Name of the registered model.\n"},"runId":{"type":"string"},"runLink":{"type":"string"},"source":{"type":"string"},"status":{"type":"string"},"statusMessage":{"type":"string"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/getMlflowModelLatestVersionTag:getMlflowModelLatestVersionTag"},"description":"Array of tags associated with the model.\n"},"userId":{"type":"string","description":"The username of the user that created the object.\n"},"version":{"type":"string"}},"type":"object"},"databricks:index/getMlflowModelLatestVersionTag:getMlflowModelLatestVersionTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getMlflowModelProviderConfig:getMlflowModelProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMlflowModelTag:getMlflowModelTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getMlflowModelsProviderConfig:getMlflowModelsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMwsCredentialsProviderConfig:getMwsCredentialsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getMwsNetworkConnectivityConfigEgressConfig:getMwsNetworkConnectivityConfigEgressConfig":{"properties":{"defaultRules":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRules:getMwsNetworkConnectivityConfigEgressConfigDefaultRules","description":"Array of default rules.\n"},"targetRules":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRules:getMwsNetworkConnectivityConfigEgressConfigTargetRules","description":"Array of target rules.\n"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRules:getMwsNetworkConnectivityConfigEgressConfigDefaultRules":{"properties":{"awsStableIpRule":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule:getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule","description":"The stable AWS IP CIDR blocks. 
You can use these to configure the firewall of your resources to allow traffic from your Databricks workspace.\n"},"azureServiceEndpointRule":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule:getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule","description":"Array of Azure service endpoint rules.\n"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule:getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule":{"properties":{"cidrBlocks":{"type":"array","items":{"type":"string"},"description":"The list of stable IP CIDR blocks from which Databricks network traffic originates when accessing your resources.\n"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule:getMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule":{"properties":{"subnets":{"type":"array","items":{"type":"string"},"description":"Array of strings representing the subnet IDs.\n"},"targetRegion":{"type":"string","description":"The target region for the service endpoint.\n"},"targetServices":{"type":"array","items":{"type":"string"},"description":"Array of target services.\n"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRules:getMwsNetworkConnectivityConfigEgressConfigTargetRules":{"properties":{"awsPrivateEndpointRules":{"type":"array","items":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule:getMwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule"}},"azurePrivateEndpointRules":{"type":"array","items":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule:getMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule"},"description":"Array of private endpoint rule objects.\n"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule:getMwsNetworkConnectivityConfigEgressConfigTargetRulesAwsPrivateEndpointRule":{"properties":{"accountId":{"type":"string","description":"The Databricks account ID associated with this network configuration.\n"},"connectionState":{"type":"string","description":"The current status of this private endpoint.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean","description":"Whether this private endpoint is deactivated.\n"},"deactivatedAt":{"type":"integer","description":"Time in epoch milliseconds when this object was deactivated.\n"},"domainNames":{"type":"array","items":{"type":"string"}},"enabled":{"type":"boolean"},"endpointService":{"type":"string"},"errorMessage":{"type":"string"},"networkConnectivityConfigId":{"type":"string","description":"The Databricks network connectivity configuration ID.\n"},"resourceNames":{"type":"array","items":{"type":"string"}},"ruleId":{"type":"string","description":"The ID of a private endpoint rule.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when the network was 
updated.\n"},"vpcEndpointId":{"type":"string"}},"type":"object"},"databricks:index/getMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule:getMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRule":{"properties":{"connectionState":{"type":"string","description":"The current status of this private endpoint.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean","description":"Whether this private endpoint is deactivated.\n"},"deactivatedAt":{"type":"integer","description":"Time in epoch milliseconds when this object was deactivated.\n"},"domainNames":{"type":"array","items":{"type":"string"}},"endpointName":{"type":"string","description":"The name of the Azure private endpoint resource.\n"},"errorMessage":{"type":"string"},"groupId":{"type":"string","description":"The sub-resource type (group ID) of the target resource.\n"},"networkConnectivityConfigId":{"type":"string","description":"The Databricks network connectivity configuration ID.\n"},"resourceId":{"type":"string","description":"The Azure resource ID of the target resource.\n"},"ruleId":{"type":"string","description":"The ID of a private endpoint rule.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when the network was updated.\n"}},"type":"object"},"databricks:index/getMwsWorkspacesProviderConfig:getMwsWorkspacesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getNodeTypeProviderConfig:getNodeTypeProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getNotebookPathsNotebookPathList:getNotebookPathsNotebookPathList":{"properties":{"language":{"type":"string"},"path":{"type":"string","description":"Path to workspace directory\n"}},"type":"object"},"databricks:index/getNotebookPathsProviderConfig:getNotebookPathsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getNotebookProviderConfig:getNotebookProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getNotificationDestinationsNotificationDestination:getNotificationDestinationsNotificationDestination":{"properties":{"destinationType":{"type":"string","description":"The type of the notification destination. 
Possible values are `EMAIL`, `MICROSOFT_TEAMS`, `PAGERDUTY`, `SLACK`, or `WEBHOOK`.\n"},"displayName":{"type":"string","description":"The display name of the Notification Destination.\n"},"id":{"type":"string","description":"The unique ID of the Notification Destination.\n"}},"type":"object"},"databricks:index/getNotificationDestinationsProviderConfig:getNotificationDestinationsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getOnlineStoreProviderConfig:getOnlineStoreProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getOnlineStoresOnlineStore:getOnlineStoresOnlineStore":{"properties":{"capacity":{"type":"string","description":"(string) - The capacity of the online store. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the online store was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the online store\n"},"name":{"type":"string","description":"(string) - The name of the online store. This is the unique identifier for the online store\n"},"providerConfig":{"$ref":"#/types/databricks:index/getOnlineStoresOnlineStoreProviderConfig:getOnlineStoresOnlineStoreProviderConfig","description":"Configure the provider for management through account provider.\n"},"readReplicaCount":{"type":"integer","description":"(integer) - The number of read replicas for the online store. Defaults to 0\n"},"state":{"type":"string","description":"(string) - The current state of the online store. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"usagePolicyId":{"type":"string","description":"(string) - The usage policy applied to the online store to track billing\n"}},"type":"object","required":["capacity","creationTime","creator","name","readReplicaCount","state","usagePolicyId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getOnlineStoresOnlineStoreProviderConfig:getOnlineStoresOnlineStoreProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getOnlineStoresProviderConfig:getOnlineStoresProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPipelinesProviderConfig:getPipelinesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPolicyInfoColumnMask:getPolicyInfoColumnMask":{"properties":{"functionName":{"type":"string","description":"(string) - The fully qualified name of the row filter function.\nThe function is called on each row of the target table. 
It should return a boolean value\nindicating whether the row should be visible to the user.\nRequired on create and update\n"},"onColumn":{"type":"string","description":"(string) - The alias of the column to be masked. The alias must refer to one of matched columns.\nThe values of the column is passed to the column mask function as the first argument.\nRequired on create and update\n"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/getPolicyInfoColumnMaskUsing:getPolicyInfoColumnMaskUsing"},"description":"(list of FunctionArgument) - Optional list of column aliases or constant literals to be passed as arguments to the row filter function.\nThe type of each column should match the positional argument of the row filter function\n"}},"type":"object","required":["functionName","onColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfoColumnMaskUsing:getPolicyInfoColumnMaskUsing":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"constant":{"type":"string","description":"(string) - A constant literal\n"}},"type":"object"},"databricks:index/getPolicyInfoMatchColumn:getPolicyInfoMatchColumn":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"condition":{"type":"string","description":"(string) - The condition expression used to match a table column\n"}},"type":"object"},"databricks:index/getPolicyInfoProviderConfig:getPolicyInfoProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPolicyInfoRowFilter:getPolicyInfoRowFilter":{"properties":{"functionName":{"type":"string","description":"(string) - The fully qualified name of the row filter function.\nThe function is called on each row of the target table. It should return a boolean value\nindicating whether the row should be visible to the user.\nRequired on create and update\n"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/getPolicyInfoRowFilterUsing:getPolicyInfoRowFilterUsing"},"description":"(list of FunctionArgument) - Optional list of column aliases or constant literals to be passed as arguments to the row filter function.\nThe type of each column should match the positional argument of the row filter function\n"}},"type":"object","required":["functionName"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfoRowFilterUsing:getPolicyInfoRowFilterUsing":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"constant":{"type":"string","description":"(string) - A constant literal\n"}},"type":"object"},"databricks:index/getPolicyInfosPolicy:getPolicyInfosPolicy":{"properties":{"columnMask":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyColumnMask:getPolicyInfosPolicyColumnMask","description":"(ColumnMaskOptions) - Options for column mask policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_COLUMN_MASK`.\nRequired on create and optional on update. 
When specified on update,\nthe new options will replace the existing options as a whole\n"},"comment":{"type":"string","description":"(string) - Optional description of the policy\n"},"createdAt":{"type":"integer","description":"(integer) - Time at which the policy was created, in epoch milliseconds. Output only\n"},"createdBy":{"type":"string","description":"(string) - Username of the user who created the policy. Output only\n"},"exceptPrincipals":{"type":"array","items":{"type":"string"},"description":"(list of string) - Optional list of user or group names that should be excluded from the policy\n"},"forSecurableType":{"type":"string","description":"(string) - Type of securables that the policy should take effect on.\nOnly `TABLE` is supported at this moment.\nRequired on create and optional on update. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"id":{"type":"string","description":"(string) - Unique identifier of the policy. This field is output only and is generated by the system\n"},"matchColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyMatchColumn:getPolicyInfosPolicyMatchColumn"},"description":"(list of MatchColumn) - Optional list of condition expressions used to match table columns.\nOnly valid when \u003cspan pulumi-lang-nodejs=\"`forSecurableType`\" pulumi-lang-dotnet=\"`ForSecurableType`\" pulumi-lang-go=\"`forSecurableType`\" pulumi-lang-python=\"`for_securable_type`\" pulumi-lang-yaml=\"`forSecurableType`\" pulumi-lang-java=\"`forSecurableType`\"\u003e`for_securable_type`\u003c/span\u003e is `TABLE`.\nWhen specified, the policy only applies to tables whose columns satisfy all match conditions\n"},"name":{"type":"string","description":"(string) - Name of the policy. Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n"},"onSecurableFullname":{"type":"string","description":"Required. The fully qualified name of securable to list policies for\n"},"onSecurableType":{"type":"string","description":"Required. The type of the securable to list policies for\n"},"policyType":{"type":"string","description":"(string) - Type of the policy. Required on create. Possible values are: `POLICY_TYPE_COLUMN_MASK`, `POLICY_TYPE_ROW_FILTER`\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyProviderConfig:getPolicyInfosPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"rowFilter":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyRowFilter:getPolicyInfosPolicyRowFilter","description":"(RowFilterOptions) - Options for row filter policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_ROW_FILTER`.\nRequired on create and optional on update. 
When specified on update,\nthe new options will replace the existing options as a whole\n"},"toPrincipals":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of user or group names that the policy applies to.\nRequired on create and optional on update\n"},"updatedAt":{"type":"integer","description":"(integer) - Time at which the policy was last modified, in epoch milliseconds. Output only\n"},"updatedBy":{"type":"string","description":"(string) - Username of the user who last modified the policy. Output only\n"},"whenCondition":{"type":"string","description":"(string) - Optional condition when the policy should take effect\n"}},"type":"object","required":["columnMask","comment","createdAt","createdBy","exceptPrincipals","forSecurableType","id","matchColumns","name","onSecurableFullname","onSecurableType","policyType","rowFilter","toPrincipals","updatedAt","updatedBy","whenCondition"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfosPolicyColumnMask:getPolicyInfosPolicyColumnMask":{"properties":{"functionName":{"type":"string","description":"(string) - The fully qualified name of the row filter function.\nThe function is called on each row of the target table. It should return a boolean value\nindicating whether the row should be visible to the user.\nRequired on create and update\n"},"onColumn":{"type":"string","description":"(string) - The alias of the column to be masked. The alias must refer to one of matched columns.\nThe values of the column is passed to the column mask function as the first argument.\nRequired on create and update\n"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyColumnMaskUsing:getPolicyInfosPolicyColumnMaskUsing"},"description":"(list of FunctionArgument) - Optional list of column aliases or constant literals to be passed as arguments to the row filter function.\nThe type of each column should match the positional argument of the row filter function\n"}},"type":"object","required":["functionName","onColumn"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfosPolicyColumnMaskUsing:getPolicyInfosPolicyColumnMaskUsing":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"constant":{"type":"string","description":"(string) - A constant literal\n"}},"type":"object"},"databricks:index/getPolicyInfosPolicyMatchColumn:getPolicyInfosPolicyMatchColumn":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"condition":{"type":"string","description":"(string) - The condition expression used to match a table column\n"}},"type":"object"},"databricks:index/getPolicyInfosPolicyProviderConfig:getPolicyInfosPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfosPolicyRowFilter:getPolicyInfosPolicyRowFilter":{"properties":{"functionName":{"type":"string","description":"(string) - The fully qualified name of the row filter function.\nThe function is called on each row of the target table. 
It should return a boolean value\nindicating whether the row should be visible to the user.\nRequired on create and update\n"},"usings":{"type":"array","items":{"$ref":"#/types/databricks:index/getPolicyInfosPolicyRowFilterUsing:getPolicyInfosPolicyRowFilterUsing"},"description":"(list of FunctionArgument) - Optional list of column aliases or constant literals to be passed as arguments to the row filter function.\nThe type of each column should match the positional argument of the row filter function\n"}},"type":"object","required":["functionName"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPolicyInfosPolicyRowFilterUsing:getPolicyInfosPolicyRowFilterUsing":{"properties":{"alias":{"type":"string","description":"(string) - Optional alias of the matched column\n"},"constant":{"type":"string","description":"(string) - A constant literal\n"}},"type":"object"},"databricks:index/getPolicyInfosProviderConfig:getPolicyInfosProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresBranchProviderConfig:getPostgresBranchProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresBranchSpec:getPostgresBranchSpec":{"properties":{"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"noExpiry":{"type":"boolean","description":"(boolean) - Explicitly disable expiration. When set to true, the branch will not expire.\nIf set to false, the request is invalid; provide either ttl or\u003cspan pulumi-lang-nodejs=\" expireTime \" pulumi-lang-dotnet=\" ExpireTime \" pulumi-lang-go=\" expireTime \" pulumi-lang-python=\" expire_time \" pulumi-lang-yaml=\" expireTime \" pulumi-lang-java=\" expireTime \"\u003e expire_time \u003c/span\u003einstead\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"ttl":{"type":"string","description":"(string) - Relative time-to-live duration. When set, the branch will expire at\u003cspan pulumi-lang-nodejs=\" creationTime \" pulumi-lang-dotnet=\" CreationTime \" pulumi-lang-go=\" creationTime \" pulumi-lang-python=\" creation_time \" pulumi-lang-yaml=\" creationTime \" pulumi-lang-java=\" creationTime \"\u003e creation_time \u003c/span\u003e+ ttl\n"}},"type":"object"},"databricks:index/getPostgresBranchStatus:getPostgresBranchStatus":{"properties":{"currentState":{"type":"string","description":"(string) - The branch's state, indicating if it is initializing, ready for use, or archived. 
Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"default":{"type":"boolean","description":"(boolean) - Whether the branch is the project's default branch\n"},"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"logicalSizeBytes":{"type":"integer","description":"(integer) - The logical size of the branch\n"},"pendingState":{"type":"string","description":"(string) - The pending state of the branch, if a state transition is in progress. Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"stateChangeTime":{"type":"string","description":"(string) - A timestamp indicating when the \u003cspan pulumi-lang-nodejs=\"`currentState`\" pulumi-lang-dotnet=\"`CurrentState`\" pulumi-lang-go=\"`currentState`\" pulumi-lang-python=\"`current_state`\" pulumi-lang-yaml=\"`currentState`\" pulumi-lang-java=\"`currentState`\"\u003e`current_state`\u003c/span\u003e began\n"}},"type":"object","required":["currentState","default","expireTime","isProtected","logicalSizeBytes","pendingState","sourceBranch","sourceBranchLsn","sourceBranchTime","stateChangeTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresBranchesBranch:getPostgresBranchesBranch":{"properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the branch.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"parent":{"type":"string","description":"The Project that owns this collection of branches.\nFormat: projects/{project_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresBranchesBranchProviderConfig:getPostgresBranchesBranchProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/getPostgresBranchesBranchSpec:getPostgresBranchesBranchSpec","description":"(BranchSpec) - The spec contains the branch configuration\n"},"status":{"$ref":"#/types/databricks:index/getPostgresBranchesBranchStatus:getPostgresBranchesBranchStatus","description":"(BranchStatus) - The current status of a Branch\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the branch\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was last updated\n"}},"type":"object","required":["createTime","name","parent","spec","status","uid","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresBranchesBranchProviderConfig:getPostgresBranchesBranchProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresBranchesBranchSpec:getPostgresBranchesBranchSpec":{"properties":{"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"noExpiry":{"type":"boolean","description":"(boolean) - Explicitly disable expiration. When set to true, the branch will not expire.\nIf set to false, the request is invalid; provide either ttl or\u003cspan pulumi-lang-nodejs=\" expireTime \" pulumi-lang-dotnet=\" ExpireTime \" pulumi-lang-go=\" expireTime \" pulumi-lang-python=\" expire_time \" pulumi-lang-yaml=\" expireTime \" pulumi-lang-java=\" expireTime \"\u003e expire_time \u003c/span\u003einstead\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"ttl":{"type":"string","description":"(string) - Relative time-to-live duration. When set, the branch will expire at\u003cspan pulumi-lang-nodejs=\" creationTime \" pulumi-lang-dotnet=\" CreationTime \" pulumi-lang-go=\" creationTime \" pulumi-lang-python=\" creation_time \" pulumi-lang-yaml=\" creationTime \" pulumi-lang-java=\" creationTime \"\u003e creation_time \u003c/span\u003e+ ttl\n"}},"type":"object"},"databricks:index/getPostgresBranchesBranchStatus:getPostgresBranchesBranchStatus":{"properties":{"currentState":{"type":"string","description":"(string) - The branch's state, indicating if it is initializing, ready for use, or archived. Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"default":{"type":"boolean","description":"(boolean) - Whether the branch is the project's default branch\n"},"expireTime":{"type":"string","description":"(string) - Absolute expiration time for the branch. Empty if expiration is disabled\n"},"isProtected":{"type":"boolean","description":"(boolean) - Whether the branch is protected\n"},"logicalSizeBytes":{"type":"integer","description":"(integer) - The logical size of the branch\n"},"pendingState":{"type":"string","description":"(string) - The pending state of the branch, if a state transition is in progress. 
Possible values are: `ARCHIVED`, `IMPORTING`, `INIT`, `READY`, `RESETTING`\n"},"sourceBranch":{"type":"string","description":"(string) - The name of the source branch from which this branch was created.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"sourceBranchLsn":{"type":"string","description":"(string) - The Log Sequence Number (LSN) on the source branch from which this branch was created\n"},"sourceBranchTime":{"type":"string","description":"(string) - The point in time on the source branch from which this branch was created\n"},"stateChangeTime":{"type":"string","description":"(string) - A timestamp indicating when the \u003cspan pulumi-lang-nodejs=\"`currentState`\" pulumi-lang-dotnet=\"`CurrentState`\" pulumi-lang-go=\"`currentState`\" pulumi-lang-python=\"`current_state`\" pulumi-lang-yaml=\"`currentState`\" pulumi-lang-java=\"`currentState`\"\u003e`current_state`\u003c/span\u003e began\n"}},"type":"object","required":["currentState","default","expireTime","isProtected","logicalSizeBytes","pendingState","sourceBranch","sourceBranchLsn","sourceBranchTime","stateChangeTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresBranchesProviderConfig:getPostgresBranchesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresEndpointProviderConfig:getPostgresEndpointProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresEndpointSpec:getPostgresEndpointSpec":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. 
Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"settings":{"$ref":"#/types/databricks:index/getPostgresEndpointSpecSettings:getPostgresEndpointSpecSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","required":["endpointType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointSpecSettings:getPostgresEndpointSpecSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/getPostgresEndpointStatus:getPostgresEndpointStatus":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"currentState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"hosts":{"$ref":"#/types/databricks:index/getPostgresEndpointStatusHosts:getPostgresEndpointStatusHosts","description":"(EndpointHosts) - Contains host information for connecting to the endpoint\n"},"pendingState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"settings":{"$ref":"#/types/databricks:index/getPostgresEndpointStatusSettings:getPostgresEndpointStatusSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","required":["autoscalingLimitMaxCu","autoscalingLimitMinCu","currentState","disabled","endpointType","hosts","pendingState","settings","suspendTimeoutDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointStatusHosts:getPostgresEndpointStatusHosts":{"properties":{"host":{"type":"string","description":"(string) - The hostname to connect to this endpoint. For read-write endpoints, this is a read-write hostname which connects\nto the primary compute. 
For read-only endpoints, this is a read-only hostname which allows read-only operations\n"}},"type":"object","required":["host"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointStatusSettings:getPostgresEndpointStatusSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/getPostgresEndpointsEndpoint:getPostgresEndpointsEndpoint":{"properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the endpoint.\nFormat: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}\n"},"parent":{"type":"string","description":"The Branch that owns this collection of endpoints.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointProviderConfig:getPostgresEndpointsEndpointProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointSpec:getPostgresEndpointsEndpointSpec","description":"(EndpointSpec) - The spec contains the compute endpoint configuration, including autoscaling limits, suspend timeout, and disabled state\n"},"status":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointStatus:getPostgresEndpointsEndpointStatus","description":"(EndpointStatus) - Current operational status of the compute endpoint\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the endpoint\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was last updated\n"}},"type":"object","required":["createTime","name","parent","spec","status","uid","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointsEndpointProviderConfig:getPostgresEndpointsEndpointProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointsEndpointSpec:getPostgresEndpointsEndpointSpec":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. 
Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"settings":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointSpecSettings:getPostgresEndpointsEndpointSpecSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","required":["endpointType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointsEndpointSpecSettings:getPostgresEndpointsEndpointSpecSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/getPostgresEndpointsEndpointStatus:getPostgresEndpointsEndpointStatus":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units\n"},"currentState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"disabled":{"type":"boolean","description":"(boolean) - Whether to restrict connections to the compute endpoint.\nEnabling this option schedules a suspend compute operation.\nA disabled compute endpoint cannot be enabled by a connection or\nconsole action\n"},"endpointType":{"type":"string","description":"(string) - The endpoint type. A branch can only have one READ_WRITE endpoint. Possible values are: `ENDPOINT_TYPE_READ_ONLY`, `ENDPOINT_TYPE_READ_WRITE`\n"},"hosts":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointStatusHosts:getPostgresEndpointsEndpointStatusHosts","description":"(EndpointHosts) - Contains host information for connecting to the endpoint\n"},"pendingState":{"type":"string","description":"(string) - Possible values are: `ACTIVE`, `IDLE`, `INIT`\n"},"settings":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpointStatusSettings:getPostgresEndpointsEndpointStatusSettings","description":"(EndpointSettings)\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended\n"}},"type":"object","required":["autoscalingLimitMaxCu","autoscalingLimitMinCu","currentState","disabled","endpointType","hosts","pendingState","settings","suspendTimeoutDuration"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointsEndpointStatusHosts:getPostgresEndpointsEndpointStatusHosts":{"properties":{"host":{"type":"string","description":"(string) - The hostname to connect to this endpoint. For read-write endpoints, this is a read-write hostname which connects\nto the primary compute. 
For read-only endpoints, this is a read-only hostname which allows read-only operations\n"}},"type":"object","required":["host"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresEndpointsEndpointStatusSettings:getPostgresEndpointsEndpointStatusSettings":{"properties":{"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"}},"type":"object"},"databricks:index/getPostgresEndpointsProviderConfig:getPostgresEndpointsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresProjectProviderConfig:getPostgresProjectProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getPostgresProjectSpec:getPostgresProjectSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getPostgresProjectSpecCustomTag:getPostgresProjectSpecCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/getPostgresProjectSpecDefaultEndpointSettings:getPostgresProjectSpecDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version number\n"}},"type":"object"},"databricks:index/getPostgresProjectSpecCustomTag:getPostgresProjectSpecCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getPostgresProjectSpecDefaultEndpointSettings:getPostgresProjectSpecDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units. 
Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/getPostgresProjectStatus:getPostgresProjectStatus":{"properties":{"branchLogicalSizeLimitBytes":{"type":"integer","description":"(integer) - The logical size limit for a branch\n"},"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getPostgresProjectStatusCustomTag:getPostgresProjectStatusCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/getPostgresProjectStatusDefaultEndpointSettings:getPostgresProjectStatusDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"owner":{"type":"string","description":"(string) - The email of the project owner\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version number\n"},"syntheticStorageSizeBytes":{"type":"integer","description":"(integer) - The current space occupied by the project in storage\n"}},"type":"object","required":["branchLogicalSizeLimitBytes","budgetPolicyId","customTags","defaultEndpointSettings","displayName","historyRetentionDuration","owner","pgVersion","syntheticStorageSizeBytes"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresProjectStatusCustomTag:getPostgresProjectStatusCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getPostgresProjectStatusDefaultEndpointSettings:getPostgresProjectStatusDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units. 
Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/getPostgresProjectsProject:getPostgresProjectsProject":{"properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the project was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the project.\nFormat: projects/{project_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectProviderConfig:getPostgresProjectsProjectProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectSpec:getPostgresProjectsProjectSpec","description":"(ProjectSpec) - The spec contains the project configuration, including display_name,\u003cspan pulumi-lang-nodejs=\" pgVersion \" pulumi-lang-dotnet=\" PgVersion \" pulumi-lang-go=\" pgVersion \" pulumi-lang-python=\" pg_version \" pulumi-lang-yaml=\" pgVersion \" pulumi-lang-java=\" pgVersion \"\u003e pg_version \u003c/span\u003e(Postgres version), history_retention_duration, and default_endpoint_settings\n"},"status":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectStatus:getPostgresProjectsProjectStatus","description":"(ProjectStatus) - The current status of a Project\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the project\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the project was last updated\n"}},"type":"object","required":["createTime","name","spec","status","uid","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresProjectsProjectProviderConfig:getPostgresProjectsProjectProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresProjectsProjectSpec:getPostgresProjectsProjectSpec":{"properties":{"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectSpecCustomTag:getPostgresProjectsProjectSpecCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectSpecDefaultEndpointSettings:getPostgresProjectsProjectSpecDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version number\n"}},"type":"object"},"databricks:index/getPostgresProjectsProjectSpecCustomTag:getPostgresProjectsProjectSpecCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getPostgresProjectsProjectSpecDefaultEndpointSettings:getPostgresProjectsProjectSpecDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units. 
Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/getPostgresProjectsProjectStatus:getPostgresProjectsProjectStatus":{"properties":{"branchLogicalSizeLimitBytes":{"type":"integer","description":"(integer) - The logical size limit for a branch\n"},"budgetPolicyId":{"type":"string","description":"(string) - The budget policy that is applied to the project\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectStatusCustomTag:getPostgresProjectsProjectStatusCustomTag"},"description":"(list of ProjectCustomTag) - The effective custom tags associated with the project\n"},"defaultEndpointSettings":{"$ref":"#/types/databricks:index/getPostgresProjectsProjectStatusDefaultEndpointSettings:getPostgresProjectsProjectStatusDefaultEndpointSettings","description":"(ProjectDefaultEndpointSettings) - The effective default endpoint settings\n"},"displayName":{"type":"string","description":"(string) - The effective human-readable project name\n"},"historyRetentionDuration":{"type":"string","description":"(string) - The effective number of seconds to retain the shared history for point in time recovery\n"},"owner":{"type":"string","description":"(string) - The email of the project owner\n"},"pgVersion":{"type":"integer","description":"(integer) - The effective major Postgres version number\n"},"syntheticStorageSizeBytes":{"type":"integer","description":"(integer) - The current space occupied by the project in storage\n"}},"type":"object","required":["branchLogicalSizeLimitBytes","budgetPolicyId","customTags","defaultEndpointSettings","displayName","historyRetentionDuration","owner","pgVersion","syntheticStorageSizeBytes"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getPostgresProjectsProjectStatusCustomTag:getPostgresProjectsProjectStatusCustomTag":{"properties":{"key":{"type":"string","description":"(string) - The key of the custom tag\n"},"value":{"type":"string","description":"(string) - The value of the custom tag\n"}},"type":"object"},"databricks:index/getPostgresProjectsProjectStatusDefaultEndpointSettings:getPostgresProjectsProjectStatusDefaultEndpointSettings":{"properties":{"autoscalingLimitMaxCu":{"type":"number","description":"(number) - The maximum number of Compute Units. Minimum value is 0.5\n"},"autoscalingLimitMinCu":{"type":"number","description":"(number) - The minimum number of Compute Units. 
Minimum value is 0.5\n"},"noSuspension":{"type":"boolean","description":"(boolean) - When set to true, explicitly disables automatic suspension (never suspend).\nShould be set to true when provided\n"},"pgSettings":{"type":"object","additionalProperties":{"type":"string"},"description":"(object) - A raw representation of Postgres settings\n"},"suspendTimeoutDuration":{"type":"string","description":"(string) - Duration of inactivity after which the compute endpoint is automatically suspended.\nIf specified should be between 60s and 604800s (1 minute to 1 week)\n"}},"type":"object"},"databricks:index/getPostgresProjectsProviderConfig:getPostgresProjectsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getQualityMonitorV2AnomalyDetectionConfig:getQualityMonitorV2AnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of fully qualified table names to exclude from anomaly detection\n"},"lastRunId":{"type":"string","description":"(string) - Run id of the last run of the workflow\n"},"latestRunStatus":{"type":"string","description":"(string) - The status of the last run of the workflow. Possible values are: `ANOMALY_DETECTION_RUN_STATUS_CANCELED`, `ANOMALY_DETECTION_RUN_STATUS_FAILED`, `ANOMALY_DETECTION_RUN_STATUS_JOB_DELETED`, `ANOMALY_DETECTION_RUN_STATUS_PENDING`, `ANOMALY_DETECTION_RUN_STATUS_RUNNING`, `ANOMALY_DETECTION_RUN_STATUS_SUCCESS`, `ANOMALY_DETECTION_RUN_STATUS_UNKNOWN`, `ANOMALY_DETECTION_RUN_STATUS_WORKSPACE_MISMATCH_ERROR`\n"}},"type":"object","required":["lastRunId","latestRunStatus"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getQualityMonitorV2ProviderConfig:getQualityMonitorV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getQualityMonitorV2ValidityCheckConfiguration:getQualityMonitorV2ValidityCheckConfiguration":{"properties":{"name":{"type":"string","description":"(string) - Can be set by system. 
Does not need to be user facing\n"},"percentNullValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck:getQualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck","description":"(PercentNullValidityCheck)\n"},"rangeValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorV2ValidityCheckConfigurationRangeValidityCheck:getQualityMonitorV2ValidityCheckConfigurationRangeValidityCheck","description":"(RangeValidityCheck)\n"},"uniquenessValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck:getQualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck","description":"(UniquenessValidityCheck)\n"}},"type":"object"},"databricks:index/getQualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck:getQualityMonitorV2ValidityCheckConfigurationPercentNullValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check for uniqueness\n"},"upperBound":{"type":"number","description":"(number) - Upper bound for the range\n"}},"type":"object"},"databricks:index/getQualityMonitorV2ValidityCheckConfigurationRangeValidityCheck:getQualityMonitorV2ValidityCheckConfigurationRangeValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check for uniqueness\n"},"lowerBound":{"type":"number","description":"(number) - Lower bound for the range\n"},"upperBound":{"type":"number","description":"(number) - Upper bound for the range\n"}},"type":"object"},"databricks:index/getQualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck:getQualityMonitorV2ValidityCheckConfigurationUniquenessValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check for uniqueness\n"}},"type":"object"},"databricks:index/getQualityMonitorsV2ProviderConfig:getQualityMonitorsV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getQualityMonitorsV2QualityMonitor:getQualityMonitorsV2QualityMonitor":{"properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorAnomalyDetectionConfig:getQualityMonitorsV2QualityMonitorAnomalyDetectionConfig","description":"(AnomalyDetectionConfig)\n"},"objectId":{"type":"string","description":"(string) - The uuid of the request object. For example, schema id\n"},"objectType":{"type":"string","description":"(string) - The type of the monitored object. 
Can be one of the following: schema\n"},"providerConfig":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorProviderConfig:getQualityMonitorsV2QualityMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"},"validityCheckConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfiguration:getQualityMonitorsV2QualityMonitorValidityCheckConfiguration"},"description":"(list of ValidityCheckConfiguration) - Validity check configurations for anomaly detection\n"}},"type":"object","required":["anomalyDetectionConfig","objectId","objectType","validityCheckConfigurations"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getQualityMonitorsV2QualityMonitorAnomalyDetectionConfig:getQualityMonitorsV2QualityMonitorAnomalyDetectionConfig":{"properties":{"excludedTableFullNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of fully qualified table names to exclude from anomaly detection\n"},"lastRunId":{"type":"string","description":"(string) - Run id of the last run of the workflow\n"},"latestRunStatus":{"type":"string","description":"(string) - The status of the last run of the workflow. Possible values are: `ANOMALY_DETECTION_RUN_STATUS_CANCELED`, `ANOMALY_DETECTION_RUN_STATUS_FAILED`, `ANOMALY_DETECTION_RUN_STATUS_JOB_DELETED`, `ANOMALY_DETECTION_RUN_STATUS_PENDING`, `ANOMALY_DETECTION_RUN_STATUS_RUNNING`, `ANOMALY_DETECTION_RUN_STATUS_SUCCESS`, `ANOMALY_DETECTION_RUN_STATUS_UNKNOWN`, `ANOMALY_DETECTION_RUN_STATUS_WORKSPACE_MISMATCH_ERROR`\n"}},"type":"object","required":["lastRunId","latestRunStatus"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getQualityMonitorsV2QualityMonitorProviderConfig:getQualityMonitorsV2QualityMonitorProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfiguration:getQualityMonitorsV2QualityMonitorValidityCheckConfiguration":{"properties":{"name":{"type":"string","description":"(string) - Can be set by system. 
Does not need to be user facing\n"},"percentNullValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationPercentNullValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationPercentNullValidityCheck","description":"(PercentNullValidityCheck)\n"},"rangeValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationRangeValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationRangeValidityCheck","description":"(RangeValidityCheck)\n"},"uniquenessValidityCheck":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationUniquenessValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationUniquenessValidityCheck","description":"(UniquenessValidityCheck)\n"}},"type":"object"},"databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationPercentNullValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationPercentNullValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check for percentage of null values\n"},"upperBound":{"type":"number","description":"(number) - Upper bound for the percentage of null values\n"}},"type":"object"},"databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationRangeValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationRangeValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check against the range\n"},"lowerBound":{"type":"number","description":"(number) - Lower bound for the range\n"},"upperBound":{"type":"number","description":"(number) - Upper bound for the range\n"}},"type":"object"},"databricks:index/getQualityMonitorsV2QualityMonitorValidityCheckConfigurationUniquenessValidityCheck:getQualityMonitorsV2QualityMonitorValidityCheckConfigurationUniquenessValidityCheck":{"properties":{"columnNames":{"type":"array","items":{"type":"string"},"description":"(list of string) - List of column names to check for uniqueness\n"}},"type":"object"},"databricks:index/getRegisteredModelModelInfo:getRegisteredModelModelInfo":{"properties":{"aliases":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelModelInfoAlias:getRegisteredModelModelInfoAlias"},"description":"the list of aliases associated with this model. 
Each item is object consisting of following attributes:\n"},"browseOnly":{"type":"boolean"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside.\n"},"comment":{"type":"string","description":"The comment attached to the registered model.\n"},"createdAt":{"type":"integer","description":"the Unix timestamp at the model's creation\n"},"createdBy":{"type":"string","description":"the identifier of the user who created the model\n"},"fullName":{"type":"string","description":"The fully-qualified name of the registered model (`catalog_name.schema_name.name`).\n"},"metastoreId":{"type":"string","description":"the unique identifier of the metastore\n"},"name":{"type":"string","description":"The name of the registered model.\n"},"owner":{"type":"string","description":"Name of the registered model owner.\n"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides.\n"},"storageLocation":{"type":"string","description":"The storage location under which model version data files are stored.\n"},"updatedAt":{"type":"integer","description":"the timestamp of the last time changes were made to the model\n"},"updatedBy":{"type":"string","description":"the identifier of the user who updated the model last time\n"}},"type":"object"},"databricks:index/getRegisteredModelModelInfoAlias:getRegisteredModelModelInfoAlias":{"properties":{"aliasName":{"type":"string","description":"string with the name of alias\n"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside.\n"},"id":{"type":"string"},"modelName":{"type":"string"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides.\n"},"versionNum":{"type":"integer","description":"associated model version\n"}},"type":"object"},"databricks:index/getRegisteredModelProviderConfig:getRegisteredModelProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getRegisteredModelVersionsModelVersion:getRegisteredModelVersionsModelVersion":{"properties":{"aliases":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionAlias:getRegisteredModelVersionsModelVersionAlias"},"description":"the list of aliases associated with this model. Each item is object consisting of following attributes:\n"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside.\n"},"comment":{"type":"string","description":"The comment attached to the registered model.\n"},"createdAt":{"type":"integer","description":"the Unix timestamp at the model's creation\n"},"createdBy":{"type":"string","description":"the identifier of the user who created the model\n"},"id":{"type":"string","description":"The unique identifier of the model version\n"},"metastoreId":{"type":"string","description":"the unique identifier of the metastore\n"},"modelName":{"type":"string"},"modelVersionDependencies":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependency:getRegisteredModelVersionsModelVersionModelVersionDependency"},"description":"block describing model version dependencies, for feature-store packaged models. 
Consists of following attributes:\n"},"runId":{"type":"string","description":"MLflow run ID used when creating the model version, if \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e was generated by an experiment run stored in an MLflow tracking server\n"},"runWorkspaceId":{"type":"integer","description":"ID of the Databricks workspace containing the MLflow run that generated this model version, if applicable\n"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides.\n"},"source":{"type":"string","description":"URI indicating the location of the source artifacts (files) for the model version.\n"},"status":{"type":"string","description":"Current status of the model version.\n"},"storageLocation":{"type":"string","description":"The storage location under which model version data files are stored.\n"},"updatedAt":{"type":"integer","description":"the timestamp of the last time changes were made to the model\n"},"updatedBy":{"type":"string","description":"the identifier of the user who updated the model last time\n"},"version":{"type":"integer","description":"Integer model version number, used to reference the model version in API requests.\n"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionAlias:getRegisteredModelVersionsModelVersionAlias":{"properties":{"aliasName":{"type":"string","description":"string with the name of alias\n"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside.\n"},"id":{"type":"string","description":"The unique identifier of the model version\n"},"modelName":{"type":"string"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides.\n"},"versionNum":{"type":"integer","description":"associated model version\n"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependency:getRegisteredModelVersionsModelVersionModelVersionDependency":{"properties":{"dependencies":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependency:getRegisteredModelVersionsModelVersionModelVersionDependencyDependency"},"description":"list of dependencies consisting of following attributes:\n"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependency:getRegisteredModelVersionsModelVersionModelVersionDependencyDependency":{"properties":{"connections":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyConnection:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyConnection"}},"credentials":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyCredential:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyCredential"}},"functions":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyFunction:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyFunction"},"description":"A function that is dependent on a SQL 
object:\n"},"tables":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyTable:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyTable"},"description":"A table that is dependent on a SQL object\n"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyConnection:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyConnection":{"properties":{"connectionName":{"type":"string"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyCredential:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyCredential":{"properties":{"credentialName":{"type":"string"}},"type":"object"},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyFunction:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyFunction":{"properties":{"functionFullName":{"type":"string","description":"Full name of the dependent function\n"}},"type":"object","required":["functionFullName"]},"databricks:index/getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyTable:getRegisteredModelVersionsModelVersionModelVersionDependencyDependencyTable":{"properties":{"tableFullName":{"type":"string","description":"Full name of the dependent table\n"}},"type":"object","required":["tableFullName"]},"databricks:index/getRegisteredModelVersionsProviderConfig:getRegisteredModelVersionsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getRfaAccessRequestDestinationsDestination:getRfaAccessRequestDestinationsDestination":{"properties":{"destinationId":{"type":"string","description":"(string) - The identifier for the destination. This is the email address for EMAIL destinations, the URL for URL destinations,\nor the unique Databricks notification destination ID for all other external destinations\n"},"destinationType":{"type":"string","description":"(string) - The type of the destination. Possible values are: `EMAIL`, `GENERIC_WEBHOOK`, `MICROSOFT_TEAMS`, `SLACK`, `URL`\n"},"specialDestination":{"type":"string","description":"(string) - This field is used to denote whether the destination is the email of the owner of the securable object.\nThe special destination cannot be assigned to a securable and only represents the default destination of the securable.\nThe securable types that support default special destinations are: \"catalog\", \u003cspan pulumi-lang-nodejs=\"\"externalLocation\"\" pulumi-lang-dotnet=\"\"ExternalLocation\"\" pulumi-lang-go=\"\"externalLocation\"\" pulumi-lang-python=\"\"external_location\"\" pulumi-lang-yaml=\"\"externalLocation\"\" pulumi-lang-java=\"\"externalLocation\"\"\u003e\"external_location\"\u003c/span\u003e, \"connection\", \"credential\", and \"metastore\".\nThe **destination_type** of a **special_destination** is always EMAIL. Possible values are: `SPECIAL_DESTINATION_CATALOG_OWNER`, `SPECIAL_DESTINATION_CONNECTION_OWNER`, `SPECIAL_DESTINATION_CREDENTIAL_OWNER`, `SPECIAL_DESTINATION_EXTERNAL_LOCATION_OWNER`, `SPECIAL_DESTINATION_METASTORE_OWNER`\n"}},"type":"object"},"databricks:index/getRfaAccessRequestDestinationsDestinationSourceSecurable:getRfaAccessRequestDestinationsDestinationSourceSecurable":{"properties":{"fullName":{"type":"string","description":"The full name of the securable. 
Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerShare":{"type":"string","description":"(string) - Optional. The name of the Share object that contains the securable when the securable is\ngetting shared in D2D Delta Sharing\n"},"type":{"type":"string","description":"(string) - Required. The type of securable (catalog/schema/table).\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"}},"type":"object"},"databricks:index/getRfaAccessRequestDestinationsProviderConfig:getRfaAccessRequestDestinationsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getRfaAccessRequestDestinationsSecurable:getRfaAccessRequestDestinationsSecurable":{"properties":{"fullName":{"type":"string","description":"The full name of the securable. Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerShare":{"type":"string","description":"(string) - Optional. The name of the Share object that contains the securable when the securable is\ngetting shared in D2D Delta Sharing\n"},"type":{"type":"string","description":"(string) - Required. The type of securable (catalog/schema/table).\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"}},"type":"object"},"databricks:index/getSchemaProviderConfig:getSchemaProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getSchemaSchemaInfo:getSchemaSchemaInfo":{"properties":{"browseOnly":{"type":"boolean","description":"indicates whether the principal is limited to retrieving metadata for the schema through the BROWSE privilege.\n"},"catalogName":{"type":"string","description":"the name of the catalog where the schema is.\n"},"catalogType":{"type":"string","description":"the type of the parent catalog.\n"},"comment":{"type":"string","description":"the comment attached to the volume\n"},"createdAt":{"type":"integer","description":"time at which this schema was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"username of schema creator.\n"},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/getSchemaSchemaInfoEffectivePredictiveOptimizationFlag:getSchemaSchemaInfoEffectivePredictiveOptimizationFlag","description":"information about actual state of predictive optimization.\n"},"enablePredictiveOptimization":{"type":"string","description":"whether predictive optimization should be enabled for this object and objects under it.\n"},"fullName":{"type":"string","description":"the two-level (fully qualified) name of the schema\n"},"metastoreId":{"type":"string","description":"the unique identifier of the metastore\n"},"name":{"type":"string","description":"a fully qualified name of databricks_schema: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e*\n"},"owner":{"type":"string","description":"the identifier of the user who owns the schema\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"map of properties set on the schema\n"},"schemaId":{"type":"string","description":"the unique identifier of the schema\n"},"storageLocation":{"type":"string","description":"the storage location on the cloud.\n"},"storageRoot":{"type":"string","description":"storage root URL for managed tables within schema.\n"},"updatedAt":{"type":"integer","description":"the timestamp of the last time changes were made to the schema\n"},"updatedBy":{"type":"string","description":"the identifier of the user who updated the schema last time\n"}},"type":"object"},"databricks:index/getSchemaSchemaInfoEffectivePredictiveOptimizationFlag:getSchemaSchemaInfoEffectivePredictiveOptimizationFlag":{"properties":{"inheritedFromName":{"type":"string"},"inheritedFromType":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["value"]},"databricks:index/getSchemasProviderConfig:getSchemasProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getServicePrincipalFederationPoliciesPolicy:getServicePrincipalFederationPoliciesPolicy":{"properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"(string) - Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/getServicePrincipalFederationPoliciesPolicyOidcPolicy:getServicePrincipalFederationPoliciesPolicyOidcPolicy","description":"(OidcFederationPolicy)\n"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"The service principal id for the federation policy\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"type":"object","required":["createTime","description","name","oidcPolicy","policyId","servicePrincipalId","uid","updateTime"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getServicePrincipalFederationPoliciesPolicyOidcPolicy:getServicePrincipalFederationPoliciesPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"(list of string) - The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"(string) - The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"(string) - The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. 
Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"(string) - URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"(string) - The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"(string) - The claim that contains the subject of the token. If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/getServicePrincipalFederationPolicyOidcPolicy:getServicePrincipalFederationPolicyOidcPolicy":{"properties":{"audiences":{"type":"array","items":{"type":"string"},"description":"(list of string) - The allowed token audiences, as specified in the 'aud' claim of federated tokens.\nThe audience identifier is intended to represent the recipient of the token.\nCan be any non-empty string value. As long as the audience in the token matches\nat least one audience in the policy, the token is considered a match. If audiences\nis unspecified, defaults to your Databricks account id\n"},"issuer":{"type":"string","description":"(string) - The required token issuer, as specified in the 'iss' claim of federated tokens\n"},"jwksJson":{"type":"string","description":"(string) - The public keys used to validate the signature of federated tokens, in JWKS format.\nMost use cases should not need to specify this field. If\u003cspan pulumi-lang-nodejs=\" jwksUri \" pulumi-lang-dotnet=\" JwksUri \" pulumi-lang-go=\" jwksUri \" pulumi-lang-python=\" jwks_uri \" pulumi-lang-yaml=\" jwksUri \" pulumi-lang-java=\" jwksUri \"\u003e jwks_uri \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson\n\" pulumi-lang-dotnet=\" JwksJson\n\" pulumi-lang-go=\" jwksJson\n\" pulumi-lang-python=\" jwks_json\n\" pulumi-lang-yaml=\" jwksJson\n\" pulumi-lang-java=\" jwksJson\n\"\u003e jwks_json\n\u003c/span\u003eare both unspecified (recommended), Databricks automatically fetches the public\nkeys from your issuer’s well known endpoint. Databricks strongly recommends\nrelying on your issuer’s well known endpoint for discovering public keys\n"},"jwksUri":{"type":"string","description":"(string) - URL of the public keys used to validate the signature of federated tokens, in\nJWKS format. Most use cases should not need to specify this field. 
If\u003cspan pulumi-lang-nodejs=\" jwksUri\n\" pulumi-lang-dotnet=\" JwksUri\n\" pulumi-lang-go=\" jwksUri\n\" pulumi-lang-python=\" jwks_uri\n\" pulumi-lang-yaml=\" jwksUri\n\" pulumi-lang-java=\" jwksUri\n\"\u003e jwks_uri\n\u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" jwksJson \" pulumi-lang-dotnet=\" JwksJson \" pulumi-lang-go=\" jwksJson \" pulumi-lang-python=\" jwks_json \" pulumi-lang-yaml=\" jwksJson \" pulumi-lang-java=\" jwksJson \"\u003e jwks_json \u003c/span\u003eare both unspecified (recommended), Databricks automatically\nfetches the public keys from your issuer’s well known endpoint. Databricks\nstrongly recommends relying on your issuer’s well known endpoint for discovering\npublic keys\n"},"subject":{"type":"string","description":"(string) - The required token subject, as specified in the subject claim of federated tokens.\nMust be specified for service principal federation policies. Must not be specified\nfor account federation policies\n"},"subjectClaim":{"type":"string","description":"(string) - The claim that contains the subject of the token. If unspecified, the default value\nis 'sub'\n"}},"type":"object"},"databricks:index/getServicePrincipalProviderConfig:getServicePrincipalProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getServicePrincipalsProviderConfig:getServicePrincipalsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getServicePrincipalsServicePrincipal:getServicePrincipalsServicePrincipal":{"properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n"},"active":{"type":"boolean","description":"Whether service principal is active or not.\n"},"applicationId":{"type":"string","description":"Application ID of the service principal.\n"},"displayName":{"type":"string","description":"Display name of the service principal, e.g. `Foo SPN`.\n"},"externalId":{"type":"string","description":"ID of the service principal in an external identity provider.\n"},"home":{"type":"string","description":"Home folder of the service principal, e.g. `/Users/11111111-2222-3333-4444-555666777888`.\n"},"id":{"type":"string","description":"The id of the service principal (SCIM ID).\n"},"repos":{"type":"string","description":"Repos location of the service principal, e.g. 
`/Repos/11111111-2222-3333-4444-555666777888`.\n"},"scimId":{"type":"string","description":"same as \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e.\n"},"spId":{"type":"string"}},"type":"object","required":["aclPrincipalId","active","applicationId","displayName","externalId","home","id","repos","scimId","spId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getServingEndpointsEndpoint:getServingEndpointsEndpoint":{"properties":{"aiGateways":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGateway:getServingEndpointsEndpointAiGateway"},"description":"A block with AI Gateway configuration for the serving endpoint.\n"},"budgetPolicyId":{"type":"string"},"configs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfig:getServingEndpointsEndpointConfig"},"description":"The model serving endpoint configuration.\n"},"creationTimestamp":{"type":"integer"},"creator":{"type":"string"},"description":{"type":"string"},"id":{"type":"string"},"lastUpdatedTimestamp":{"type":"integer"},"name":{"type":"string","description":"The name of the model serving endpoint.\n"},"states":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointState:getServingEndpointsEndpointState"}},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointTag:getServingEndpointsEndpointTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"},"task":{"type":"string"},"usagePolicyId":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGateway:getServingEndpointsEndpointAiGateway":{"properties":{"fallbackConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayFallbackConfig:getServingEndpointsEndpointAiGatewayFallbackConfig"}},"guardrails":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayGuardrail:getServingEndpointsEndpointAiGatewayGuardrail"}},"inferenceTableConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayInferenceTableConfig:getServingEndpointsEndpointAiGatewayInferenceTableConfig"}},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayRateLimit:getServingEndpointsEndpointAiGatewayRateLimit"},"description":"A list of rate limit blocks to be applied to the serving 
endpoint.\n"},"usageTrackingConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayUsageTrackingConfig:getServingEndpointsEndpointAiGatewayUsageTrackingConfig"}}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayFallbackConfig:getServingEndpointsEndpointAiGatewayFallbackConfig":{"properties":{"enabled":{"type":"boolean"}},"type":"object","required":["enabled"]},"databricks:index/getServingEndpointsEndpointAiGatewayGuardrail:getServingEndpointsEndpointAiGatewayGuardrail":{"properties":{"inputProperties":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayGuardrailInputProperty:getServingEndpointsEndpointAiGatewayGuardrailInputProperty"}},"outputs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayGuardrailOutput:getServingEndpointsEndpointAiGatewayGuardrailOutput"}}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayGuardrailInputProperty:getServingEndpointsEndpointAiGatewayGuardrailInputProperty":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"}},"piis":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayGuardrailInputPropertyPii:getServingEndpointsEndpointAiGatewayGuardrailInputPropertyPii"}},"safety":{"type":"boolean"},"validTopics":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayGuardrailInputPropertyPii:getServingEndpointsEndpointAiGatewayGuardrailInputPropertyPii":{"properties":{"behavior":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayGuardrailOutput:getServingEndpointsEndpointAiGatewayGuardrailOutput":{"properties":{"invalidKeywords":{"type":"array","items":{"type":"string"}},"piis":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointAiGatewayGuardrailOutputPii:getServingEndpointsEndpointAiGatewayGuardrailOutputPii"}},"safety":{"type":"boolean"},"validTopics":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayGuardrailOutputPii:getServingEndpointsEndpointAiGatewayGuardrailOutputPii":{"properties":{"behavior":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayInferenceTableConfig:getServingEndpointsEndpointAiGatewayInferenceTableConfig":{"properties":{"catalogName":{"type":"string"},"enabled":{"type":"boolean"},"schemaName":{"type":"string"},"tableNamePrefix":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointAiGatewayRateLimit:getServingEndpointsEndpointAiGatewayRateLimit":{"properties":{"calls":{"type":"integer"},"key":{"type":"string"},"principal":{"type":"string"},"renewalPeriod":{"type":"string"},"tokens":{"type":"integer"}},"type":"object","required":["renewalPeriod"]},"databricks:index/getServingEndpointsEndpointAiGatewayUsageTrackingConfig:getServingEndpointsEndpointAiGatewayUsageTrackingConfig":{"properties":{"enabled":{"type":"boolean"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfig:getServingEndpointsEndpointConfig":{"properties":{"servedEntities":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntity:getServingEndpointsEndpointConfigServedEntity"}},"servedModels":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedModel:getServingEndpointsEndpointC
onfigServedModel"}}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntity:getServingEndpointsEndpointConfigServedEntity":{"properties":{"entityName":{"type":"string"},"entityVersion":{"type":"string"},"externalModels":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModel:getServingEndpointsEndpointConfigServedEntityExternalModel"}},"foundationModels":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityFoundationModel:getServingEndpointsEndpointConfigServedEntityFoundationModel"}},"name":{"type":"string","description":"The name of the model serving endpoint.\n"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModel:getServingEndpointsEndpointConfigServedEntityExternalModel":{"properties":{"ai21labsConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAi21labsConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAi21labsConfig"}},"amazonBedrockConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAmazonBedrockConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAmazonBedrockConfig"}},"anthropicConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAnthropicConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAnthropicConfig"}},"cohereConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCohereConfig:getServingEndpointsEndpointConfigServedEntityExternalModelCohereConfig"}},"customProviderConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfig:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfig"}},"databricksModelServingConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelDatabricksModelServingConfig:getServingEndpointsEndpointConfigServedEntityExternalModelDatabricksModelServingConfig"}},"googleCloudVertexAiConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelGoogleCloudVertexAiConfig:getServingEndpointsEndpointConfigServedEntityExternalModelGoogleCloudVertexAiConfig"}},"name":{"type":"string","description":"The name of the model serving 
endpoint.\n"},"openaiConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelOpenaiConfig:getServingEndpointsEndpointConfigServedEntityExternalModelOpenaiConfig"}},"palmConfigs":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelPalmConfig:getServingEndpointsEndpointConfigServedEntityExternalModelPalmConfig"}},"provider":{"type":"string"},"task":{"type":"string"}},"type":"object","required":["name","provider","task"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAi21labsConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAi21labsConfig":{"properties":{"ai21labsApiKey":{"type":"string"},"ai21labsApiKeyPlaintext":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAmazonBedrockConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAmazonBedrockConfig":{"properties":{"awsAccessKeyId":{"type":"string"},"awsAccessKeyIdPlaintext":{"type":"string"},"awsRegion":{"type":"string"},"awsSecretAccessKey":{"type":"string"},"awsSecretAccessKeyPlaintext":{"type":"string"},"bedrockProvider":{"type":"string"},"instanceProfileArn":{"type":"string"}},"type":"object","required":["awsRegion","bedrockProvider"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelAnthropicConfig:getServingEndpointsEndpointConfigServedEntityExternalModelAnthropicConfig":{"properties":{"anthropicApiKey":{"type":"string"},"anthropicApiKeyPlaintext":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCohereConfig:getServingEndpointsEndpointConfigServedEntityExternalModelCohereConfig":{"properties":{"cohereApiBase":{"type":"string"},"cohereApiKey":{"type":"string"},"cohereApiKeyPlaintext":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfig:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfig":{"properties":{"apiKeyAuths":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth"}},"bearerTokenAuths":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth"}},"customProviderUrl":{"type":"string"}},"type":"object","required":["customProviderUrl"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigApiKeyAuth":{"properties":{"key":{"type":"string"},"value":{"type":"string"},"valuePlaintext":{"type":"string"}},"type":"object","required":["key"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth:getServingEndpointsEndpointConfigServedEntityExternalModelCustomProviderConfigBearerTokenAuth":{"properties":{"token":{"type":"string"},"tokenPlaintext":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelDatabricksModelServingConfig:getServingEndpointsEndpointConfigServedEntityExternalModelDatabricksModelS
ervingConfig":{"properties":{"databricksApiToken":{"type":"string"},"databricksApiTokenPlaintext":{"type":"string"},"databricksWorkspaceUrl":{"type":"string"}},"type":"object","required":["databricksWorkspaceUrl"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelGoogleCloudVertexAiConfig:getServingEndpointsEndpointConfigServedEntityExternalModelGoogleCloudVertexAiConfig":{"properties":{"privateKey":{"type":"string"},"privateKeyPlaintext":{"type":"string"},"projectId":{"type":"string"},"region":{"type":"string"}},"type":"object","required":["projectId","region"]},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelOpenaiConfig:getServingEndpointsEndpointConfigServedEntityExternalModelOpenaiConfig":{"properties":{"microsoftEntraClientId":{"type":"string"},"microsoftEntraClientSecret":{"type":"string"},"microsoftEntraClientSecretPlaintext":{"type":"string"},"microsoftEntraTenantId":{"type":"string"},"openaiApiBase":{"type":"string"},"openaiApiKey":{"type":"string"},"openaiApiKeyPlaintext":{"type":"string"},"openaiApiType":{"type":"string"},"openaiApiVersion":{"type":"string"},"openaiDeploymentName":{"type":"string"},"openaiOrganization":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityExternalModelPalmConfig:getServingEndpointsEndpointConfigServedEntityExternalModelPalmConfig":{"properties":{"palmApiKey":{"type":"string"},"palmApiKeyPlaintext":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedEntityFoundationModel:getServingEndpointsEndpointConfigServedEntityFoundationModel":{"properties":{"description":{"type":"string"},"displayName":{"type":"string"},"docs":{"type":"string"},"name":{"type":"string","description":"The name of the model serving endpoint.\n"}},"type":"object"},"databricks:index/getServingEndpointsEndpointConfigServedModel:getServingEndpointsEndpointConfigServedModel":{"properties":{"modelName":{"type":"string"},"modelVersion":{"type":"string"},"name":{"type":"string","description":"The name of the model serving endpoint.\n"}},"type":"object"},"databricks:index/getServingEndpointsEndpointState:getServingEndpointsEndpointState":{"properties":{"configUpdate":{"type":"string"},"ready":{"type":"string"}},"type":"object"},"databricks:index/getServingEndpointsEndpointTag:getServingEndpointsEndpointTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["key"]},"databricks:index/getServingEndpointsProviderConfig:getServingEndpointsProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getShareObject:getShareObject":{"properties":{"addedAt":{"type":"integer"},"addedBy":{"type":"string"},"cdfEnabled":{"type":"boolean"},"comment":{"type":"string","description":"Description about the object.\n"},"content":{"type":"string"},"dataObjectType":{"type":"string","description":"Type of the object.\n"},"effectiveCdfEnabled":{"type":"boolean"},"effectiveHistoryDataSharingStatus":{"type":"string"},"effectiveSharedAs":{"type":"string"},"effectiveStartVersion":{"type":"integer"},"effectiveStringSharedAs":{"type":"string"},"historyDataSharingStatus":{"type":"string"},"name":{"type":"string","description":"The name of the 
share\n"},"partitions":{"type":"array","items":{"$ref":"#/types/databricks:index/getShareObjectPartition:getShareObjectPartition"}},"sharedAs":{"type":"string"},"startVersion":{"type":"integer"},"status":{"type":"string"},"stringSharedAs":{"type":"string"}},"type":"object","required":["addedAt","addedBy","effectiveCdfEnabled","effectiveHistoryDataSharingStatus","effectiveSharedAs","effectiveStartVersion","effectiveStringSharedAs","name","status"],"language":{"nodejs":{"requiredInputs":["name"]}}},"databricks:index/getShareObjectPartition:getShareObjectPartition":{"properties":{"values":{"type":"array","items":{"$ref":"#/types/databricks:index/getShareObjectPartitionValue:getShareObjectPartitionValue"}}},"type":"object"},"databricks:index/getShareObjectPartitionValue:getShareObjectPartitionValue":{"properties":{"name":{"type":"string","description":"The name of the share\n"},"op":{"type":"string"},"recipientPropertyKey":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getShareProviderConfig:getShareProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getSharesProviderConfig:getSharesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getSparkVersionProviderConfig:getSparkVersionProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getSqlWarehouseChannel:getSqlWarehouseChannel":{"properties":{"dbsqlVersion":{"type":"string"},"name":{"type":"string","description":"Name of the SQL warehouse to search (case-sensitive).\n"}},"type":"object"},"databricks:index/getSqlWarehouseHealth:getSqlWarehouseHealth":{"properties":{"details":{"type":"string"},"failureReason":{"$ref":"#/types/databricks:index/getSqlWarehouseHealthFailureReason:getSqlWarehouseHealthFailureReason"},"message":{"type":"string"},"status":{"type":"string"},"summary":{"type":"string"}},"type":"object"},"databricks:index/getSqlWarehouseHealthFailureReason:getSqlWarehouseHealthFailureReason":{"properties":{"code":{"type":"string"},"parameters":{"type":"object","additionalProperties":{"type":"string"}},"type":{"type":"string"}},"type":"object"},"databricks:index/getSqlWarehouseOdbcParams:getSqlWarehouseOdbcParams":{"properties":{"hostname":{"type":"string"},"path":{"type":"string"},"port":{"type":"integer"},"protocol":{"type":"string"}},"type":"object"},"databricks:index/getSqlWarehouseProviderConfig:getSqlWarehouseProviderConfig":{"properties":{"workspaceId":{"type":"string"}},"type":"object","required":["workspaceId"]},"databricks:index/getSqlWarehouseTags:getSqlWarehouseTags":{"properties":{"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/getSqlWarehouseTagsCustomTag:getSqlWarehouseTagsCustomTag"}}},"type":"object"},"databricks:index/getSqlWarehouseTagsCustomTag:getSqlWarehouseTagsCustomTag":{"properties":{"key":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getSqlWarehousesProviderConfig:getSqlWarehousesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getStorageCredentialProviderConfig:getStorageCredentialProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getStorageCredentialStorageCredentialInfo:getStorageCredentialStorageCredentialInfo":{"properties":{"awsIamRole":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfoAwsIamRole:getStorageCredentialStorageCredentialInfoAwsIamRole","description":"credential details for AWS:\n"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfoAzureManagedIdentity:getStorageCredentialStorageCredentialInfoAzureManagedIdentity","description":"managed identity credential details for Azure\n"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfoAzureServicePrincipal:getStorageCredentialStorageCredentialInfoAzureServicePrincipal","description":"service principal credential details for Azure:\n"},"cloudflareApiToken":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfoCloudflareApiToken:getStorageCredentialStorageCredentialInfoCloudflareApiToken"},"comment":{"type":"string"},"createdAt":{"type":"integer","description":"Time at which this catalog was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of catalog creator.\n"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfoDatabricksGcpServiceAccount:getStorageCredentialStorageCredentialInfoDatabricksGcpServiceAccount","description":"credential details for GCP:\n"},"fullName":{"type":"string"},"id":{"type":"string","description":"Unique ID of storage credential.\n"},"isolationMode":{"type":"string"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore.\n"},"name":{"type":"string","description":"The name of the storage credential\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the storage credential owner.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the storage credential is only usable for read operations.\n"},"updatedAt":{"type":"integer","description":"Time at which this catalog was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified catalog.\n"},"usedForManagedStorage":{"type":"boolean"}},"type":"object"},"databricks:index/getStorageCredentialStorageCredentialInfoAwsIamRole:getStorageCredentialStorageCredentialInfoAwsIamRole":{"properties":{"externalId":{"type":"string","description":"(output only) - The external ID used in role assumption to prevent confused deputy problem.\n"},"roleArn":{"type":"string","description":"The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access, of the form `arn:aws:iam::1234567890:role/MyRole-AJJHDSKSDF`\n"},"unityCatalogIamArn":{"type":"string","description":"(output only) - The Amazon Resource Name (ARN) of the AWS IAM user managed by Databricks. 
This is the identity that is going to assume the AWS IAM role.\n"}},"type":"object","required":["roleArn"]},"databricks:index/getStorageCredentialStorageCredentialInfoAzureManagedIdentity:getStorageCredentialStorageCredentialInfoAzureManagedIdentity":{"properties":{"accessConnectorId":{"type":"string","description":"The Resource ID of the Azure Databricks Access Connector resource, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.Databricks/accessConnectors/connector-name`.\n"},"credentialId":{"type":"string"},"managedIdentityId":{"type":"string","description":"The Resource ID of the Azure User Assigned Managed Identity associated with Azure Databricks Access Connector, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/user-managed-identity-name`.\n"}},"type":"object","required":["accessConnectorId"]},"databricks:index/getStorageCredentialStorageCredentialInfoAzureServicePrincipal:getStorageCredentialStorageCredentialInfoAzureServicePrincipal":{"properties":{"applicationId":{"type":"string","description":"The application ID of the application registration within the referenced AAD tenant\n"},"clientSecret":{"type":"string"},"directoryId":{"type":"string","description":"The directory ID corresponding to the Azure Active Directory (AAD) tenant of the application\n"}},"type":"object","required":["applicationId","clientSecret","directoryId"]},"databricks:index/getStorageCredentialStorageCredentialInfoCloudflareApiToken:getStorageCredentialStorageCredentialInfoCloudflareApiToken":{"properties":{"accessKeyId":{"type":"string"},"accountId":{"type":"string"},"secretAccessKey":{"type":"string"}},"type":"object","required":["accessKeyId","accountId","secretAccessKey"]},"databricks:index/getStorageCredentialStorageCredentialInfoDatabricksGcpServiceAccount:getStorageCredentialStorageCredentialInfoDatabricksGcpServiceAccount":{"properties":{"credentialId":{"type":"string"},"email":{"type":"string","description":"The email of the GCP service account created, to be granted access to relevant buckets.\n"}},"type":"object"},"databricks:index/getStorageCredentialsProviderConfig:getStorageCredentialsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getTableProviderConfig:getTableProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getTableTableInfo:getTableTableInfo":{"properties":{"accessPoint":{"type":"string"},"browseOnly":{"type":"boolean"},"catalogName":{"type":"string","description":"Name of parent catalog.\n"},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoColumn:getTableTableInfoColumn"},"description":"Array of ColumnInfo objects of the table's columns\n"},"comment":{"type":"string","description":"Free-form text description\n"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"dataAccessConfigurationId":{"type":"string"},"dataSourceFormat":{"type":"string","description":"Table format, e.g. 
DELTA, CSV, JSON\n"},"deletedAt":{"type":"integer"},"deltaRuntimePropertiesKvpairs":{"$ref":"#/types/databricks:index/getTableTableInfoDeltaRuntimePropertiesKvpairs:getTableTableInfoDeltaRuntimePropertiesKvpairs"},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/getTableTableInfoEffectivePredictiveOptimizationFlag:getTableTableInfoEffectivePredictiveOptimizationFlag"},"enablePredictiveOptimization":{"type":"string"},"encryptionDetails":{"$ref":"#/types/databricks:index/getTableTableInfoEncryptionDetails:getTableTableInfoEncryptionDetails"},"fullName":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"},"owner":{"type":"string","description":"Current owner of the table\n"},"pipelineId":{"type":"string"},"properties":{"type":"object","additionalProperties":{"type":"string"}},"rowFilter":{"$ref":"#/types/databricks:index/getTableTableInfoRowFilter:getTableTableInfoRowFilter"},"schemaName":{"type":"string","description":"Name of parent schema relative to its parent catalog.\n"},"securableKindManifest":{"$ref":"#/types/databricks:index/getTableTableInfoSecurableKindManifest:getTableTableInfoSecurableKindManifest"},"sqlPath":{"type":"string"},"storageCredentialName":{"type":"string"},"storageLocation":{"type":"string"},"tableConstraints":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoTableConstraint:getTableTableInfoTableConstraint"}},"tableId":{"type":"string","description":"The unique identifier of the table.\n"},"tableType":{"type":"string","description":"Table type, e.g. 
MANAGED, EXTERNAL, VIEW\n"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"},"viewDefinition":{"type":"string","description":"View definition SQL (when \u003cspan pulumi-lang-nodejs=\"`tableType`\" pulumi-lang-dotnet=\"`TableType`\" pulumi-lang-go=\"`tableType`\" pulumi-lang-python=\"`table_type`\" pulumi-lang-yaml=\"`tableType`\" pulumi-lang-java=\"`tableType`\"\u003e`table_type`\u003c/span\u003e is VIEW, MATERIALIZED_VIEW, or STREAMING_TABLE)\n"},"viewDependencies":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependencies:getTableTableInfoViewDependencies","description":"View dependencies (when \u003cspan pulumi-lang-nodejs=\"`tableType`\" pulumi-lang-dotnet=\"`TableType`\" pulumi-lang-go=\"`tableType`\" pulumi-lang-python=\"`table_type`\" pulumi-lang-yaml=\"`tableType`\" pulumi-lang-java=\"`tableType`\"\u003e`table_type`\u003c/span\u003e is VIEW or MATERIALIZED_VIEW, STREAMING_TABLE)\n"}},"type":"object"},"databricks:index/getTableTableInfoColumn:getTableTableInfoColumn":{"properties":{"comment":{"type":"string","description":"Free-form text description\n"},"mask":{"$ref":"#/types/databricks:index/getTableTableInfoColumnMask:getTableTableInfoColumnMask"},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" 
pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"},"nullable":{"type":"boolean"},"partitionIndex":{"type":"integer"},"position":{"type":"integer"},"typeIntervalType":{"type":"string"},"typeJson":{"type":"string"},"typeName":{"type":"string"},"typePrecision":{"type":"integer"},"typeScale":{"type":"integer"},"typeText":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoColumnMask:getTableTableInfoColumnMask":{"properties":{"functionName":{"type":"string"},"usingArguments":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoColumnMaskUsingArgument:getTableTableInfoColumnMaskUsingArgument"}},"usingColumnNames":{"type":"array","items":{"type":"string"}}},"type":"object"},"databricks:index/getTableTableInfoColumnMaskUsingArgument:getTableTableInfoColumnMaskUsingArgument":{"properties":{"column":{"type":"string"},"constant":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoDeltaRuntimePropertiesKvpairs:getTableTableInfoDeltaRuntimePropertiesKvpairs":{"properties":{"deltaRuntimeProperties":{"type":"object","additionalProperties":{"type":"string"}}},"type":"object","required":["deltaRuntimeProperties"]},"databricks:index/getTableTableInfoEffectivePredictiveOptimizationFlag:getTableTableInfoEffectivePredictiveOptimizationFlag":{"properties":{"inheritedFromName":{"type":"string"},"inheritedFromType":{"type":"string"},"value":{"type":"string"}},"type":"object","required":["value"]},"databricks:index/getTableTableInfoEncryptionDetails:getTableTableInfoEncryptionDetails":{"properties":{"sseEncryptionDetails":{"$ref":"#/types/databricks:index/getTableTableInfoEncryptionDetailsSseEncryptionDetails:getTableTableInfoEncryptionDetailsSseEncryptionDetails"}},"type":"object"},"databricks:index/getTableTableInfoEncryptionDetailsSseEncryptionDetails:getTableTableInfoEncryptionDetailsSseEncryptionDetails":{"properties":{"algorithm":{"type":"string"},"awsKmsKeyArn":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoRowFilter:getTableTableInfoRowFilter":{"properties":{"functionName":{"type":"string"},"inputArguments":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoRowFilterInputArgument:getTableTableInfoRowFilterInputArgument"}},"inputColumnNames":{"type":"array","items":{"type":"string"}}},"type":"object","required":["functionName","inputColumnNames"]},"databricks:index/getTableTableInfoRowFilterInputArgument:getTableTableInfoRowFilterInputArgument":{"properties":{"column":{"type":"string"},"constant":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoSecurableKindManifest:getTableTableInfoSecurableKindManifest":{"properties":{"assignablePrivileges":{"type":"array","items":{"type":"string"}},"capabilities":{"type":"array","items":{"type":"string"}},"options":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoSecurableKindManifestOption:getTableTableInfoSecurableKindManifestOption"}},"securableKind":{"type":"string"},"securableType":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoSecurableKindManifestOption:getTableTableInfoSecurableKindManifestOption":{"properties":{"allowedValues":{"type":"array","items":{"type":"string"}},"defaultValue":{"type":"string"},"description":{"type":"string"},"hint":{"type":"string"},"isCopiable":{"type":"boolean"},"isCreatable":{"type":"boolean"},"isHidden":{"type":"boolean"},"isLoggable":{"type":"boolean"},"isRequired":{"type":"boolean"},"isSecret":{"type":"boolean"},"isUpdatable":{"type":"boolean
"},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"},"oauthStage":{"type":"string"},"type":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoTableConstraint:getTableTableInfoTableConstraint":{"properties":{"foreignKeyConstraint":{"$ref":"#/types/databricks:index/getTableTableInfoTableConstraintForeignKeyConstraint:getTableTableInfoTableConstraintForeignKeyConstraint"},"namedTableConstraint":{"$ref":"#/types/databricks:index/getTableTableInfoTableConstraintNamedTableConstraint:getTableTableInfoTableConstraintNamedTableConstraint"},"primaryKeyConstraint":{"$ref":"#/types/databricks:index/getTableTableInfoTableConstraintPrimaryKeyConstraint:getTableTableInfoTableConstraintPrimaryKeyConstraint"}},"type":"object"},"databricks:index/getTableTableInfoTableConstraintForeignKeyConstraint:getTableTableInfoTableConstraintForeignKeyConstraint":{"properties":{"childColumns":{"type":"array","items":{"type":"string"}},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"},"parentColumns":{"type":"array","items":{"type":"string"}},"parentTable":{"type":"string"},"rely":{"type":"boolean"}},"type":"object","required":["childColumns","name","parentColumns","parentTable"]},"databricks:index/getTableTableInfoTableConstraintNamedTableConstraint:getTableTableInfoTableConstraintNamedTableConstraint":{"properties":{"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" 
pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"}},"type":"object","required":["name"]},"databricks:index/getTableTableInfoTableConstraintPrimaryKeyConstraint:getTableTableInfoTableConstraintPrimaryKeyConstraint":{"properties":{"childColumns":{"type":"array","items":{"type":"string"}},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n"},"rely":{"type":"boolean"},"timeseriesColumns":{"type":"array","items":{"type":"string"}}},"type":"object","required":["childColumns","name"]},"databricks:index/getTableTableInfoViewDependencies:getTableTableInfoViewDependencies":{"properties":{"dependencies":{"type":"array","items":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependenciesDependency:getTableTableInfoViewDependenciesDependency"}}},"type":"object"},"databricks:index/getTableTableInfoViewDependenciesDependency:getTableTableInfoViewDependenciesDependency":{"properties":{"connection":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependenciesDependencyConnection:getTableTableInfoViewDependenciesDependencyConnection"},"credential":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependenciesDependencyCredential:getTableTableInfoViewDependenciesDependencyCredential"},"function":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependenciesDependencyFunction:getTableTableInfoViewDependenciesDependencyFunction"},"table":{"$ref":"#/types/databricks:index/getTableTableInfoViewDependenciesDependencyTable:getTableTableInfoViewDependenciesDependencyTable"}},"type":"object"},"databricks:index/getTableTableInfoViewDependenciesDependencyConnection:getTableTableInfoViewDependenciesDependencyConnection":{"properties":{"connectionName":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoViewDependenciesDependencyCredential:getTableTableInfoViewDependenciesDependencyCredential":{"properties":{"credentialName":{"type":"string"}},"type":"object"},"databricks:index/getTableTableInfoViewDependenciesDependencyFunction:getTableTableInfoViewDependenciesDependencyFunction":{"properties":{"functionFullName":{"type":"string"}},"type":"object","required":["functionFullName"]},"databricks:index/getTableTableInfoViewDependenciesDependencyTable:getTableTableInfoViewDependenciesDependencyTable":{"properties":{"tableFullName":{"type":"string"}},"type":"object","required":["tableFullName"]},"databricks:index/getTablesProviderConfig:getTablesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getTagPoliciesProviderConfig:getTagPoliciesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getTagPoliciesTagPolicy:getTagPoliciesTagPolicy":{"properties":{"createTime":{"type":"string","description":"(string) - Timestamp when the tag policy was created\n"},"description":{"type":"string","description":"(string)\n"},"id":{"type":"string","description":"(string)\n"},"providerConfig":{"$ref":"#/types/databricks:index/getTagPoliciesTagPolicyProviderConfig:getTagPoliciesTagPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"(string)\n"},"updateTime":{"type":"string","description":"(string) - Timestamp when the tag policy was last updated\n"},"values":{"type":"array","items":{"$ref":"#/types/databricks:index/getTagPoliciesTagPolicyValue:getTagPoliciesTagPolicyValue"},"description":"(list of Value)\n"}},"type":"object","required":["createTime","description","id","tagKey","updateTime","values"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getTagPoliciesTagPolicyProviderConfig:getTagPoliciesTagPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getTagPoliciesTagPolicyValue:getTagPoliciesTagPolicyValue":{"properties":{"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getTagPolicyProviderConfig:getTagPolicyProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getTagPolicyValue:getTagPolicyValue":{"properties":{"name":{"type":"string","description":"(string)\n"}},"type":"object","required":["name"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getUserProviderConfig:getUserProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getUsersUser:getUsersUser":{"properties":{"active":{"type":"boolean","description":"Boolean that represents if this user is active.\n"},"displayName":{"type":"string"},"emails":{"type":"array","items":{"$ref":"#/types/databricks:index/getUsersUserEmail:getUsersUserEmail"},"description":"All the emails associated with the Databricks user.\n"},"entitlements":{"type":"array","items":{"$ref":"#/types/databricks:index/getUsersUserEntitlement:getUsersUserEntitlement"},"description":"Entitlements assigned to the user.\n"},"externalId":{"type":"string"},"groups":{"type":"array","items":{"$ref":"#/types/databricks:index/getUsersUserGroup:getUsersUserGroup"},"description":"Indicates if the user is part of any groups.\n"},"id":{"type":"string","description":"The ID of the user.\n- `userName` - The username of the user.\n"},"name":{"$ref":"#/types/databricks:index/getUsersUserName:getUsersUserName","description":"- `givenName` - Given name of the Databricks user.\n- `familyName` - Family name of the Databricks user.\n- `displayName` - The display name of the user.\n"},"roles":{"type":"array","items":{"$ref":"#/types/databricks:index/getUsersUserRole:getUsersUserRole"},"description":"Indicates if the user has any associated roles.\n"},"schemas":{"type":"array","items":{"type":"string"},"description":"The schema of the user.\n- `externalId` - Reserved for future use.\n"},"userName":{"type":"string"}},"type":"object"},"databricks:index/getUsersUserEmail:getUsersUserEmail":{"properties":{"display":{"type":"string"},"primary":{"type":"boolean"},"ref":{"type":"string"},"type":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getUsersUserEntitlement:getUsersUserEntitlement":{"properties":{"display":{"type":"string"},"primary":{"type":"boolean"},"ref":{"type":"string"},"type":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getUsersUserGroup:getUsersUserGroup":{"properties":{"display":{"type":"string"},"primary":{"type":"boolean"},"ref":{"type":"string"},"type":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getUsersUserName:getUsersUserName":{"properties":{"familyName":{"type":"string"},"givenName":{"type":"string"}},"type":"object"},"databricks:index/getUsersUserRole:getUsersUserRole":{"properties":{"display":{"type":"string"},"primary":{"type":"boolean"},"ref":{"type":"string"},"type":{"type":"string"},"value":{"type":"string"}},"type":"object"},"databricks:index/getViewsProviderConfig:getViewsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getVolumeProviderConfig:getVolumeProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getVolumeVolumeInfo:getVolumeVolumeInfo":{"properties":{"accessPoint":{"type":"string","description":"the AWS access point to use when accessing s3 bucket for this volume's external location\n"},"browseOnly":{"type":"boolean","description":"indicates whether the principal is limited to retrieving metadata for the volume through the BROWSE privilege when\u003cspan pulumi-lang-nodejs=\" includeBrowse \" pulumi-lang-dotnet=\" IncludeBrowse \" pulumi-lang-go=\" includeBrowse \" pulumi-lang-python=\" include_browse \" pulumi-lang-yaml=\" includeBrowse \" pulumi-lang-java=\" includeBrowse \"\u003e include_browse \u003c/span\u003eis enabled in the request.\n"},"catalogName":{"type":"string","description":"the name of the catalog where the schema and the volume are\n"},"comment":{"type":"string","description":"the comment attached to the volume\n"},"createdAt":{"type":"integer","description":"the Unix timestamp at the volume's creation\n"},"createdBy":{"type":"string","description":"the identifier of the user who created the volume\n"},"encryptionDetails":{"$ref":"#/types/databricks:index/getVolumeVolumeInfoEncryptionDetails:getVolumeVolumeInfoEncryptionDetails","description":"encryption options that apply to clients connecting to cloud storage\n"},"fullName":{"type":"string","description":"the three-level (fully qualified) name of the volume\n"},"metastoreId":{"type":"string","description":"the unique identifier of the metastore\n"},"name":{"type":"string","description":"a fully qualified name of databricks_volume: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`volume`\" pulumi-lang-dotnet=\"`Volume`\" pulumi-lang-go=\"`volume`\" pulumi-lang-python=\"`volume`\" pulumi-lang-yaml=\"`volume`\" pulumi-lang-java=\"`volume`\"\u003e`volume`\u003c/span\u003e*\n"},"owner":{"type":"string","description":"the identifier of the user who owns the volume\n"},"schemaName":{"type":"string","description":"the name of the schema where the volume is\n"},"storageLocation":{"type":"string","description":"the storage location on the cloud\n"},"updatedAt":{"type":"integer","description":"the timestamp of the last time changes were made to the volume\n"},"updatedBy":{"type":"string","description":"the identifier of the user who updated the volume last time\n"},"volumeId":{"type":"string","description":"the unique identifier of the volume\n"},"volumeType":{"type":"string","description":"whether the volume is `MANAGED` or 
`EXTERNAL`\n"}},"type":"object"},"databricks:index/getVolumeVolumeInfoEncryptionDetails:getVolumeVolumeInfoEncryptionDetails":{"properties":{"sseEncryptionDetails":{"$ref":"#/types/databricks:index/getVolumeVolumeInfoEncryptionDetailsSseEncryptionDetails:getVolumeVolumeInfoEncryptionDetailsSseEncryptionDetails"}},"type":"object"},"databricks:index/getVolumeVolumeInfoEncryptionDetailsSseEncryptionDetails:getVolumeVolumeInfoEncryptionDetailsSseEncryptionDetails":{"properties":{"algorithm":{"type":"string"},"awsKmsKeyArn":{"type":"string"}},"type":"object"},"databricks:index/getVolumesProviderConfig:getVolumesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWarehousesDefaultWarehouseOverrideProviderConfig:getWarehousesDefaultWarehouseOverrideProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverride:getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverride":{"properties":{"defaultWarehouseOverrideId":{"type":"string","description":"(string) - The ID component of the resource name (user ID)\n"},"name":{"type":"string","description":"(string) - The resource name of the default warehouse override.\nFormat: default-warehouse-overrides/{default_warehouse_override_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverrideProviderConfig:getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverrideProviderConfig","description":"Configure the provider for management through account provider.\n"},"type":{"type":"string","description":"(string) - The type of override behavior. Possible values are: `CUSTOM`, `LAST_SELECTED`\n"},"warehouseId":{"type":"string","description":"(string) - The specific warehouse ID when type is CUSTOM.\nNot set for LAST_SELECTED type\n"}},"type":"object","required":["defaultWarehouseOverrideId","name","type","warehouseId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverrideProviderConfig:getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverrideProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWarehousesDefaultWarehouseOverridesProviderConfig:getWarehousesDefaultWarehouseOverridesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWorkspaceEntityTagAssignmentProviderConfig:getWorkspaceEntityTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. 
This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWorkspaceEntityTagAssignmentsProviderConfig:getWorkspaceEntityTagAssignmentsProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWorkspaceEntityTagAssignmentsTagAssignment:getWorkspaceEntityTagAssignmentsTagAssignment":{"properties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentsTagAssignmentProviderConfig:getWorkspaceEntityTagAssignmentsTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"(string) - The key of the tag. The characters , . : / - = and leading/trailing spaces are not allowed\n"},"tagValue":{"type":"string","description":"(string) - The value of the tag\n"}},"type":"object","required":["entityId","entityType","tagKey","tagValue"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceEntityTagAssignmentsTagAssignmentProviderConfig:getWorkspaceEntityTagAssignmentsTagAssignmentProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:getWorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:getWorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"},"description":"(list of 
string)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspace:getWorkspaceSettingV2AutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean","description":"(boolean)\n"},"enabled":{"type":"boolean","description":"(boolean)\n"},"enablementDetails":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails","description":"(ClusterAutoRestartMessageEnablementDetails)\n"},"maintenanceWindow":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow","description":"(ClusterAutoRestartMessageMaintenanceWindow)\n"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean","description":"(boolean)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"(boolean) - The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"(boolean) - The feature is unavailable if the corresponding entitlement is disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"(boolean) - The feature is unavailable if the customer doesn't have the enterprise tier\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule","description":"(ClusterAutoRestartMessageMaintenanceWindowWeekDayBasedSchedule)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"(string) - Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"(string) - Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, 
`THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime","description":"(ClusterAutoRestartMessageMaintenanceWindowWindowStartTime)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getWorkspaceSettingV2AutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer","description":"(integer)\n"},"minutes":{"type":"integer","description":"(integer)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2BooleanVal:getWorkspaceSettingV2BooleanVal":{"properties":{"value":{"type":"boolean","description":"(boolean) - Represents a generic boolean value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy":{"properties":{"accessPolicyType":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL_DOMAINS`, `ALLOW_APPROVED_DOMAINS`, `DENY_ALL_DOMAINS`\n"}},"type":"object","required":["accessPolicyType"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains":{"properties":{"approvedDomains":{"type":"array","items":{"type":"string"},"description":"(list of string)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace":{"properties":{"canToggle":{"type":"boolean","description":"(boolean)\n"},"enabled":{"type":"boolean","description":"(boolean)\n"},"enablementDetails":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails","description":"(ClusterAutoRestartMessageEnablementDetails)\n"},"maintenanceWindow":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow","description":"(ClusterAutoRestartMessageMaintenanceWindow)\n"},"restartEvenIfNoUpdatesAvailable":{"type":"boolean","description":"(boolean)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceEnablementDetails":{"properties":{"forcedForComplianceMode":{"type":"boolean","description":"(boolean) - The feature is force enabled if compliance mode is active\n"},"unavailableForDisabledEntitlement":{"type":"boolean","description":"(boolean) - The feature is unavailable if the corresponding entitlement is disabled (see getShieldEntitlementEnable)\n"},"unavailableForNonEnterpriseTier":{"type":"boolean","description":"(boolean) - The feature is unavailable if the customer doesn't have the enterprise 
tier\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindow":{"properties":{"weekDayBasedSchedule":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule","description":"(ClusterAutoRestartMessageMaintenanceWindowWeekDayBasedSchedule)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule":{"properties":{"dayOfWeek":{"type":"string","description":"(string) - Possible values are: `FRIDAY`, `MONDAY`, `SATURDAY`, `SUNDAY`, `THURSDAY`, `TUESDAY`, `WEDNESDAY`\n"},"frequency":{"type":"string","description":"(string) - Possible values are: `EVERY_WEEK`, `FIRST_AND_THIRD_OF_MONTH`, `FIRST_OF_MONTH`, `FOURTH_OF_MONTH`, `SECOND_AND_FOURTH_OF_MONTH`, `SECOND_OF_MONTH`, `THIRD_OF_MONTH`\n"},"windowStartTime":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime","description":"(ClusterAutoRestartMessageMaintenanceWindowWindowStartTime)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime":{"properties":{"hours":{"type":"integer","description":"(integer)\n"},"minutes":{"type":"integer","description":"(integer)\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveBooleanVal:getWorkspaceSettingV2EffectiveBooleanVal":{"properties":{"value":{"type":"boolean","description":"(boolean) - Represents a generic boolean value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveIntegerVal:getWorkspaceSettingV2EffectiveIntegerVal":{"properties":{"value":{"type":"integer","description":"(integer) - Represents a generic integer value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectivePersonalCompute:getWorkspaceSettingV2EffectivePersonalCompute":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:getWorkspaceSettingV2EffectiveRestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceSettingV2EffectiveStringVal:getWorkspaceSettingV2EffectiveStringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2IntegerVal:getWorkspaceSettingV2IntegerVal":{"properties":{"value":{"type":"integer","description":"(integer) - Represents a generic integer 
value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2PersonalCompute:getWorkspaceSettingV2PersonalCompute":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getWorkspaceSettingV2ProviderConfig:getWorkspaceSettingV2ProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]},"databricks:index/getWorkspaceSettingV2RestrictWorkspaceAdmins:getWorkspaceSettingV2RestrictWorkspaceAdmins":{"properties":{"status":{"type":"string","description":"(string) - Possible values are: `ALLOW_ALL`, `RESTRICT_TOKENS_AND_JOB_RUN_AS`\n"}},"type":"object","required":["status"],"language":{"nodejs":{"requiredInputs":[]}}},"databricks:index/getWorkspaceSettingV2StringVal:getWorkspaceSettingV2StringVal":{"properties":{"value":{"type":"string","description":"(string) - Represents a generic string value\n"}},"type":"object"},"databricks:index/getZonesProviderConfig:getZonesProviderConfig":{"properties":{"workspaceId":{"type":"string","description":"Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n"}},"type":"object","required":["workspaceId"]}},"provider":{"description":"The provider type for the databricks package. By default, resources use package-wide configuration\nsettings, however an explicit `Provider` instance may be created and passed during resource\nconstruction to achieve fine-grained programmatic control over provider settings. See the\n[documentation](https://www.pulumi.com/docs/reference/programming-model/#providers) for more 
information.\n","properties":{"accountId":{"type":"string"},"actionsIdTokenRequestToken":{"type":"string"},"actionsIdTokenRequestUrl":{"type":"string"},"audience":{"type":"string"},"authType":{"type":"string"},"azureClientId":{"type":"string"},"azureClientSecret":{"type":"string","secret":true},"azureEnvironment":{"type":"string"},"azureLoginAppId":{"type":"string"},"azureTenantId":{"type":"string"},"azureUseMsi":{"type":"boolean"},"azureWorkspaceResourceId":{"type":"string"},"clientId":{"type":"string"},"clientSecret":{"type":"string","secret":true},"clusterId":{"type":"string"},"configFile":{"type":"string"},"databricksCliPath":{"type":"string"},"databricksIdTokenFilepath":{"type":"string"},"debugHeaders":{"type":"boolean"},"debugTruncateBytes":{"type":"integer"},"disableOauthRefreshToken":{"type":"boolean"},"experimentalIsUnifiedHost":{"type":"boolean"},"googleCredentials":{"type":"string","secret":true},"googleServiceAccount":{"type":"string"},"host":{"type":"string"},"httpTimeoutSeconds":{"type":"integer"},"metadataServiceUrl":{"type":"string","secret":true},"oauthCallbackPort":{"type":"integer"},"oidcTokenEnv":{"type":"string"},"password":{"type":"string","secret":true},"profile":{"type":"string"},"rateLimit":{"type":"integer"},"retryTimeoutSeconds":{"type":"integer"},"scopes":{"type":"array","items":{"type":"string"}},"serverlessComputeId":{"type":"string"},"skipVerify":{"type":"boolean"},"token":{"type":"string","secret":true},"username":{"type":"string"},"warehouseId":{"type":"string"},"workspaceId":{"type":"string"}},"inputProperties":{"accountId":{"type":"string"},"actionsIdTokenRequestToken":{"type":"string"},"actionsIdTokenRequestUrl":{"type":"string"},"audience":{"type":"string"},"authType":{"type":"string"},"azureClientId":{"type":"string"},"azureClientSecret":{"type":"string","secret":true},"azureEnvironment":{"type":"string"},"azureLoginAppId":{"type":"string"},"azureTenantId":{"type":"string"},"azureUseMsi":{"type":"boolean"},"azureWorkspaceResourceId":{"type":"string"},"clientId":{"type":"string"},"clientSecret":{"type":"string","secret":true},"clusterId":{"type":"string"},"configFile":{"type":"string"},"databricksCliPath":{"type":"string"},"databricksIdTokenFilepath":{"type":"string"},"debugHeaders":{"type":"boolean"},"debugTruncateBytes":{"type":"integer"},"disableOauthRefreshToken":{"type":"boolean"},"experimentalIsUnifiedHost":{"type":"boolean"},"googleCredentials":{"type":"string","secret":true},"googleServiceAccount":{"type":"string"},"host":{"type":"string"},"httpTimeoutSeconds":{"type":"integer"},"metadataServiceUrl":{"type":"string","secret":true},"oauthCallbackPort":{"type":"integer"},"oidcTokenEnv":{"type":"string"},"password":{"type":"string","secret":true},"profile":{"type":"string"},"rateLimit":{"type":"integer"},"retryTimeoutSeconds":{"type":"integer"},"scopes":{"type":"array","items":{"type":"string"}},"serverlessComputeId":{"type":"string"},"skipVerify":{"type":"boolean"},"token":{"type":"string","secret":true},"username":{"type":"string"},"warehouseId":{"type":"string"},"workspaceId":{"type":"string"}},"methods":{"terraformConfig":"pulumi:providers:databricks/terraformConfig"}},"resources":{"databricks:index/accessControlRuleSet:AccessControlRuleSet":{"description":"This resource allows you to manage access rules on Databricks account level resources. 
For convenience we allow accessing this resource through the Databricks account and workspace.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n\u003e Currently, we only support managing access rules on specific object resources (service principal, group, budget policies and account) through \u003cspan pulumi-lang-nodejs=\"`databricks.AccessControlRuleSet`\" pulumi-lang-dotnet=\"`databricks.AccessControlRuleSet`\" pulumi-lang-go=\"`AccessControlRuleSet`\" pulumi-lang-python=\"`AccessControlRuleSet`\" pulumi-lang-yaml=\"`databricks.AccessControlRuleSet`\" pulumi-lang-java=\"`databricks.AccessControlRuleSet`\"\u003e`databricks.AccessControlRuleSet`\u003c/span\u003e.\n\n!\u003e \u003cspan pulumi-lang-nodejs=\"`databricks.AccessControlRuleSet`\" pulumi-lang-dotnet=\"`databricks.AccessControlRuleSet`\" pulumi-lang-go=\"`AccessControlRuleSet`\" pulumi-lang-python=\"`AccessControlRuleSet`\" pulumi-lang-yaml=\"`databricks.AccessControlRuleSet`\" pulumi-lang-java=\"`databricks.AccessControlRuleSet`\"\u003e`databricks.AccessControlRuleSet`\u003c/span\u003e cannot be used to manage access rules for resources supported by databricks_permissions. Refer to its documentation for more information.\n\n\u003e This resource is _authoritative_ for permissions on objects. Configuring this resource for an object will **OVERWRITE** any existing permissions of the same type unless imported, and changes made outside of Pulumi will be reset.\n\n## Service principal rule set usage\n\nThrough a Databricks workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group\nconst ds = databricks.getGroup({\n    displayName: \"Data Science\",\n});\nconst automationSp = new databricks.ServicePrincipal(\"automation_sp\", {displayName: \"SP_FOR_AUTOMATION\"});\nconst automationSpRuleSet = new databricks.AccessControlRuleSet(\"automation_sp_rule_set\", {\n    name: pulumi.interpolate`accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default`,\n    grantRules: [{\n        principals: [ds.then(ds =\u003e ds.aclPrincipalId)],\n        role: \"roles/servicePrincipal.user\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group\nds = databricks.get_group(display_name=\"Data Science\")\nautomation_sp = databricks.ServicePrincipal(\"automation_sp\", display_name=\"SP_FOR_AUTOMATION\")\nautomation_sp_rule_set = databricks.AccessControlRuleSet(\"automation_sp_rule_set\",\n    name=automation_sp.application_id.apply(lambda application_id: f\"accounts/{account_id}/servicePrincipals/{application_id}/ruleSets/default\"),\n    grant_rules=[{\n        \"principals\": [ds.acl_principal_id],\n        \"role\": \"roles/servicePrincipal.user\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group\n    var ds = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var automationSp = new Databricks.ServicePrincipal(\"automation_sp\", new()\n    {\n        DisplayName = \"SP_FOR_AUTOMATION\",\n    });\n\n    var 
automationSpRuleSet = new Databricks.AccessControlRuleSet(\"automation_sp_rule_set\", new()\n    {\n        Name = automationSp.ApplicationId.Apply(applicationId =\u003e $\"accounts/{accountId}/servicePrincipals/{applicationId}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.Apply(getGroupResult =\u003e getGroupResult.AclPrincipalId),\n                },\n                Role = \"roles/servicePrincipal.user\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group\n\t\tds, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Data Science\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tautomationSp, err := databricks.NewServicePrincipal(ctx, \"automation_sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"SP_FOR_AUTOMATION\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"automation_sp_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: automationSp.ApplicationId.ApplyT(func(applicationId string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/servicePrincipals/%v/ruleSets/default\", accountId, applicationId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(ds.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/servicePrincipal.user\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group\n        final var ds = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        var automationSp = new ServicePrincipal(\"automationSp\", ServicePrincipalArgs.builder()\n            .displayName(\"SP_FOR_AUTOMATION\")\n            .build());\n\n        var automationSpRuleSet = new AccessControlRuleSet(\"automationSpRuleSet\", AccessControlRuleSetArgs.builder()\n            
.name(automationSp.applicationId().applyValue(_applicationId -\u003e String.format(\"accounts/%s/servicePrincipals/%s/ruleSets/default\", accountId,_applicationId)))\n            .grantRules(AccessControlRuleSetGrantRuleArgs.builder()\n                .principals(ds.aclPrincipalId())\n                .role(\"roles/servicePrincipal.user\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  automationSp:\n    type: databricks:ServicePrincipal\n    name: automation_sp\n    properties:\n      displayName: SP_FOR_AUTOMATION\n  automationSpRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: automation_sp_rule_set\n    properties:\n      name: accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/servicePrincipal.user\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n  # account level group\n  ds:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: Data Science\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThrough AWS Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group creation\nconst ds = new databricks.Group(\"ds\", {displayName: \"Data Science\"});\nconst automationSp = new databricks.ServicePrincipal(\"automation_sp\", {displayName: \"SP_FOR_AUTOMATION\"});\nconst automationSpRuleSet = new databricks.AccessControlRuleSet(\"automation_sp_rule_set\", {\n    name: pulumi.interpolate`accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default`,\n    grantRules: [{\n        principals: [ds.aclPrincipalId],\n        role: \"roles/servicePrincipal.user\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group creation\nds = databricks.Group(\"ds\", display_name=\"Data Science\")\nautomation_sp = databricks.ServicePrincipal(\"automation_sp\", display_name=\"SP_FOR_AUTOMATION\")\nautomation_sp_rule_set = databricks.AccessControlRuleSet(\"automation_sp_rule_set\",\n    name=automation_sp.application_id.apply(lambda application_id: f\"accounts/{account_id}/servicePrincipals/{application_id}/ruleSets/default\"),\n    grant_rules=[{\n        \"principals\": [ds.acl_principal_id],\n        \"role\": \"roles/servicePrincipal.user\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group creation\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var automationSp = new Databricks.ServicePrincipal(\"automation_sp\", new()\n    {\n        DisplayName = \"SP_FOR_AUTOMATION\",\n    });\n\n    var automationSpRuleSet = new Databricks.AccessControlRuleSet(\"automation_sp_rule_set\", new()\n    {\n        Name = automationSp.ApplicationId.Apply(applicationId =\u003e $\"accounts/{accountId}/servicePrincipals/{applicationId}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n  
          {\n                Principals = new[]\n                {\n                    ds.AclPrincipalId,\n                },\n                Role = \"roles/servicePrincipal.user\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group creation\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Science\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tautomationSp, err := databricks.NewServicePrincipal(ctx, \"automation_sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"SP_FOR_AUTOMATION\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"automation_sp_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: automationSp.ApplicationId.ApplyT(func(applicationId string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/servicePrincipals/%v/ruleSets/default\", accountId, applicationId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tds.AclPrincipalId,\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/servicePrincipal.user\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group creation\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        var automationSp = new ServicePrincipal(\"automationSp\", ServicePrincipalArgs.builder()\n            .displayName(\"SP_FOR_AUTOMATION\")\n            .build());\n\n        var automationSpRuleSet = new AccessControlRuleSet(\"automationSpRuleSet\", AccessControlRuleSetArgs.builder()\n            .name(automationSp.applicationId().applyValue(_applicationId -\u003e String.format(\"accounts/%s/servicePrincipals/%s/ruleSets/default\", accountId,_applicationId)))\n            .grantRules(AccessControlRuleSetGrantRuleArgs.builder()\n                .principals(ds.aclPrincipalId())\n                .role(\"roles/servicePrincipal.user\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  # account level group creation\n  ds:\n    type: 
databricks:Group\n    properties:\n      displayName: Data Science\n  automationSp:\n    type: databricks:ServicePrincipal\n    name: automation_sp\n    properties:\n      displayName: SP_FOR_AUTOMATION\n  automationSpRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: automation_sp_rule_set\n    properties:\n      name: accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/servicePrincipal.user\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThrough Azure Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group creation\nconst ds = new databricks.Group(\"ds\", {displayName: \"Data Science\"});\nconst automationSp = new databricks.ServicePrincipal(\"automation_sp\", {\n    applicationId: \"00000000-0000-0000-0000-000000000000\",\n    displayName: \"SP_FOR_AUTOMATION\",\n});\nconst automationSpRuleSet = new databricks.AccessControlRuleSet(\"automation_sp_rule_set\", {\n    name: pulumi.interpolate`accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default`,\n    grantRules: [{\n        principals: [ds.aclPrincipalId],\n        role: \"roles/servicePrincipal.user\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group creation\nds = databricks.Group(\"ds\", display_name=\"Data Science\")\nautomation_sp = databricks.ServicePrincipal(\"automation_sp\",\n    application_id=\"00000000-0000-0000-0000-000000000000\",\n    display_name=\"SP_FOR_AUTOMATION\")\nautomation_sp_rule_set = databricks.AccessControlRuleSet(\"automation_sp_rule_set\",\n    name=automation_sp.application_id.apply(lambda application_id: f\"accounts/{account_id}/servicePrincipals/{application_id}/ruleSets/default\"),\n    grant_rules=[{\n        \"principals\": [ds.acl_principal_id],\n        \"role\": \"roles/servicePrincipal.user\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group creation\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var automationSp = new Databricks.ServicePrincipal(\"automation_sp\", new()\n    {\n        ApplicationId = \"00000000-0000-0000-0000-000000000000\",\n        DisplayName = \"SP_FOR_AUTOMATION\",\n    });\n\n    var automationSpRuleSet = new Databricks.AccessControlRuleSet(\"automation_sp_rule_set\", new()\n    {\n        Name = automationSp.ApplicationId.Apply(applicationId =\u003e $\"accounts/{accountId}/servicePrincipals/{applicationId}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.AclPrincipalId,\n                },\n                Role = \"roles/servicePrincipal.user\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group creation\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Science\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tautomationSp, err := databricks.NewServicePrincipal(ctx, \"automation_sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tApplicationId: pulumi.String(\"00000000-0000-0000-0000-000000000000\"),\n\t\t\tDisplayName:   pulumi.String(\"SP_FOR_AUTOMATION\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"automation_sp_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: automationSp.ApplicationId.ApplyT(func(applicationId string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/servicePrincipals/%v/ruleSets/default\", accountId, applicationId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tds.AclPrincipalId,\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/servicePrincipal.user\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group creation\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        var automationSp = new ServicePrincipal(\"automationSp\", ServicePrincipalArgs.builder()\n            .applicationId(\"00000000-0000-0000-0000-000000000000\")\n            .displayName(\"SP_FOR_AUTOMATION\")\n            .build());\n\n        var automationSpRuleSet = new AccessControlRuleSet(\"automationSpRuleSet\", AccessControlRuleSetArgs.builder()\n            .name(automationSp.applicationId().applyValue(_applicationId -\u003e String.format(\"accounts/%s/servicePrincipals/%s/ruleSets/default\", accountId,_applicationId)))\n            .grantRules(AccessControlRuleSetGrantRuleArgs.builder()\n                .principals(ds.aclPrincipalId())\n                .role(\"roles/servicePrincipal.user\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  # account level group creation\n  ds:\n    type: databricks:Group\n    properties:\n      displayName: Data Science\n  automationSp:\n    type: 
databricks:ServicePrincipal\n    name: automation_sp\n    properties:\n      applicationId: 00000000-0000-0000-0000-000000000000\n      displayName: SP_FOR_AUTOMATION\n  automationSpRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: automation_sp_rule_set\n    properties:\n      name: accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/servicePrincipal.user\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThrough GCP Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group creation\nconst ds = new databricks.Group(\"ds\", {displayName: \"Data Science\"});\nconst automationSp = new databricks.ServicePrincipal(\"automation_sp\", {displayName: \"SP_FOR_AUTOMATION\"});\nconst automationSpRuleSet = new databricks.AccessControlRuleSet(\"automation_sp_rule_set\", {\n    name: pulumi.interpolate`accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default`,\n    grantRules: [{\n        principals: [ds.aclPrincipalId],\n        role: \"roles/servicePrincipal.user\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group creation\nds = databricks.Group(\"ds\", display_name=\"Data Science\")\nautomation_sp = databricks.ServicePrincipal(\"automation_sp\", display_name=\"SP_FOR_AUTOMATION\")\nautomation_sp_rule_set = databricks.AccessControlRuleSet(\"automation_sp_rule_set\",\n    name=automation_sp.application_id.apply(lambda application_id: f\"accounts/{account_id}/servicePrincipals/{application_id}/ruleSets/default\"),\n    grant_rules=[{\n        \"principals\": [ds.acl_principal_id],\n        \"role\": \"roles/servicePrincipal.user\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group creation\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var automationSp = new Databricks.ServicePrincipal(\"automation_sp\", new()\n    {\n        DisplayName = \"SP_FOR_AUTOMATION\",\n    });\n\n    var automationSpRuleSet = new Databricks.AccessControlRuleSet(\"automation_sp_rule_set\", new()\n    {\n        Name = automationSp.ApplicationId.Apply(applicationId =\u003e $\"accounts/{accountId}/servicePrincipals/{applicationId}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.AclPrincipalId,\n                },\n                Role = \"roles/servicePrincipal.user\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level 
group creation\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Science\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tautomationSp, err := databricks.NewServicePrincipal(ctx, \"automation_sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"SP_FOR_AUTOMATION\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"automation_sp_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: automationSp.ApplicationId.ApplyT(func(applicationId string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/servicePrincipals/%v/ruleSets/default\", accountId, applicationId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tds.AclPrincipalId,\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/servicePrincipal.user\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group creation\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        var automationSp = new ServicePrincipal(\"automationSp\", ServicePrincipalArgs.builder()\n            .displayName(\"SP_FOR_AUTOMATION\")\n            .build());\n\n        var automationSpRuleSet = new AccessControlRuleSet(\"automationSpRuleSet\", AccessControlRuleSetArgs.builder()\n            .name(automationSp.applicationId().applyValue(_applicationId -\u003e String.format(\"accounts/%s/servicePrincipals/%s/ruleSets/default\", accountId,_applicationId)))\n            .grantRules(AccessControlRuleSetGrantRuleArgs.builder()\n                .principals(ds.aclPrincipalId())\n                .role(\"roles/servicePrincipal.user\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  # account level group creation\n  ds:\n    type: databricks:Group\n    properties:\n      displayName: Data Science\n  automationSp:\n    type: databricks:ServicePrincipal\n    name: automation_sp\n    properties:\n      displayName: SP_FOR_AUTOMATION\n  automationSpRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: automation_sp_rule_set\n    properties:\n      name: accounts/${accountId}/servicePrincipals/${automationSp.applicationId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: 
roles/servicePrincipal.user\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Group rule set usage\n\nRefer to the appropriate provider configuration as shown in the examples for service principal rule set.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group\nconst ds = databricks.getGroup({\n    displayName: \"Data Science\",\n});\nconst john = databricks.getUser({\n    userName: \"john.doe@example.com\",\n});\nconst dsGroupRuleSet = new databricks.AccessControlRuleSet(\"ds_group_rule_set\", {\n    name: ds.then(ds =\u003e `accounts/${accountId}/groups/${ds.id}/ruleSets/default`),\n    grantRules: [{\n        principals: [john.then(john =\u003e john.aclPrincipalId)],\n        role: \"roles/group.manager\",\n    }],\n});\n```\n
```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group\nds = databricks.get_group(display_name=\"Data Science\")\njohn = databricks.get_user(user_name=\"john.doe@example.com\")\nds_group_rule_set = databricks.AccessControlRuleSet(\"ds_group_rule_set\",\n    name=f\"accounts/{account_id}/groups/{ds.id}/ruleSets/default\",\n    grant_rules=[{\n        \"principals\": [john.acl_principal_id],\n        \"role\": \"roles/group.manager\",\n    }])\n```\n
```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group\n    var ds = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var john = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"john.doe@example.com\",\n    });\n\n    var dsGroupRuleSet = new Databricks.AccessControlRuleSet(\"ds_group_rule_set\", new()\n    {\n        Name = ds.Apply(getGroupResult =\u003e $\"accounts/{accountId}/groups/{getGroupResult.Id}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    john.Apply(getUserResult =\u003e getUserResult.AclPrincipalId),\n                },\n                Role = \"roles/group.manager\",\n            },\n        },\n    });\n\n});\n```\n
```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group\n\t\tds, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Data Science\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjohn, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"john.doe@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"ds_group_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: pulumi.Sprintf(\"accounts/%v/groups/%v/ruleSets/default\", accountId, ds.Id),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(john.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/group.manager\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n
```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group\n        final var ds = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        final var john = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"john.doe@example.com\")\n            .build());\n\n        var dsGroupRuleSet = new AccessControlRuleSet(\"dsGroupRuleSet\", AccessControlRuleSetArgs.builder()\n            .name(ds.applyValue(getGroupResult -\u003e String.format(\"accounts/%s/groups/%s/ruleSets/default\", accountId,getGroupResult.id())))\n            .grantRules(AccessControlRuleSetGrantRuleArgs.builder()\n                .principals(john.aclPrincipalId())\n                .role(\"roles/group.manager\")\n                .build())\n            .build());\n\n    }\n}\n```\n
```yaml\nresources:\n  dsGroupRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: ds_group_rule_set\n    properties:\n      name: accounts/${accountId}/groups/${ds.id}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${john.aclPrincipalId}\n          role: roles/group.manager\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n  # account level group\n  ds:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: Data Science\n  john:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: john.doe@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Account rule set usage\n\nRefer to the appropriate provider configuration as shown in the examples for service principal rule set.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group\nconst ds = databricks.getGroup({\n    displayName: \"Data Science\",\n});\n// account level group\nconst marketplaceAdmins = databricks.getGroup({\n    displayName: \"Marketplace Admins\",\n});\nconst john = databricks.getUser({\n    userName: \"john.doe@example.com\",\n});\nconst accountRuleSet = new databricks.AccessControlRuleSet(\"account_rule_set\", {\n    name: `accounts/${accountId}/ruleSets/default`,\n    grantRules: [\n        {\n            
principals: [john.then(john =\u003e john.aclPrincipalId)],\n            role: \"roles/group.manager\",\n        },\n        {\n            principals: [ds.then(ds =\u003e ds.aclPrincipalId)],\n            role: \"roles/servicePrincipal.manager\",\n        },\n        {\n            principals: [marketplaceAdmins.then(marketplaceAdmins =\u003e marketplaceAdmins.aclPrincipalId)],\n            role: \"roles/marketplace.admin\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group\nds = databricks.get_group(display_name=\"Data Science\")\n# account level group\nmarketplace_admins = databricks.get_group(display_name=\"Marketplace Admins\")\njohn = databricks.get_user(user_name=\"john.doe@example.com\")\naccount_rule_set = databricks.AccessControlRuleSet(\"account_rule_set\",\n    name=f\"accounts/{account_id}/ruleSets/default\",\n    grant_rules=[\n        {\n            \"principals\": [john.acl_principal_id],\n            \"role\": \"roles/group.manager\",\n        },\n        {\n            \"principals\": [ds.acl_principal_id],\n            \"role\": \"roles/servicePrincipal.manager\",\n        },\n        {\n            \"principals\": [marketplace_admins.acl_principal_id],\n            \"role\": \"roles/marketplace.admin\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group\n    var ds = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    // account level group\n    var marketplaceAdmins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Marketplace Admins\",\n    });\n\n    var john = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"john.doe@example.com\",\n    });\n\n    var accountRuleSet = new Databricks.AccessControlRuleSet(\"account_rule_set\", new()\n    {\n        Name = $\"accounts/{accountId}/ruleSets/default\",\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    john.Apply(getUserResult =\u003e getUserResult.AclPrincipalId),\n                },\n                Role = \"roles/group.manager\",\n            },\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.Apply(getGroupResult =\u003e getGroupResult.AclPrincipalId),\n                },\n                Role = \"roles/servicePrincipal.manager\",\n            },\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    marketplaceAdmins.Apply(getGroupResult =\u003e getGroupResult.AclPrincipalId),\n                },\n                Role = \"roles/marketplace.admin\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group\n\t\tds, err := 
databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Data Science\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// account level group\n\t\tmarketplaceAdmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Marketplace Admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjohn, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"john.doe@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"account_rule_set\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: pulumi.Sprintf(\"accounts/%v/ruleSets/default\", accountId),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(john.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/group.manager\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(ds.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/servicePrincipal.manager\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(marketplaceAdmins.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/marketplace.admin\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group\n        final var ds = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        // account level group\n        final var marketplaceAdmins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Marketplace Admins\")\n            .build());\n\n        final var john = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"john.doe@example.com\")\n            .build());\n\n        var accountRuleSet = new AccessControlRuleSet(\"accountRuleSet\", AccessControlRuleSetArgs.builder()\n            .name(String.format(\"accounts/%s/ruleSets/default\", accountId))\n            .grantRules(            \n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(john.aclPrincipalId())\n                    .role(\"roles/group.manager\")\n                    .build(),\n                AccessControlRuleSetGrantRuleArgs.builder()\n                    
.principals(ds.aclPrincipalId())\n                    .role(\"roles/servicePrincipal.manager\")\n                    .build(),\n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(marketplaceAdmins.aclPrincipalId())\n                    .role(\"roles/marketplace.admin\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  accountRuleSet:\n    type: databricks:AccessControlRuleSet\n    name: account_rule_set\n    properties:\n      name: accounts/${accountId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${john.aclPrincipalId}\n          role: roles/group.manager\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/servicePrincipal.manager\n        - principals:\n            - ${marketplaceAdmins.aclPrincipalId}\n          role: roles/marketplace.admin\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n  # account level group\n  ds:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: Data Science\n  # account level group\n  marketplaceAdmins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: Marketplace Admins\n  john:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: john.doe@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Budget policy usage\n\nAccess to budget policies could be controlled with this resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group\nconst ds = databricks.getGroup({\n    displayName: \"Data Science\",\n});\nconst john = databricks.getUser({\n    userName: \"john.doe@example.com\",\n});\nconst _this = new databricks.BudgetPolicy(\"this\", {\n    policyName: \"data-science-budget-policy\",\n    customTags: [{\n        key: \"mykey\",\n        value: \"myvalue\",\n    }],\n});\nconst budgetPolicyUsage = new databricks.AccessControlRuleSet(\"budget_policy_usage\", {\n    name: pulumi.interpolate`accounts/${accountId}/budgetPolicies/${_this.policyId}/ruleSets/default`,\n    grantRules: [\n        {\n            principals: [john.then(john =\u003e john.aclPrincipalId)],\n            role: \"roles/budgetPolicy.manager\",\n        },\n        {\n            principals: [ds.then(ds =\u003e ds.aclPrincipalId)],\n            role: \"roles/budgetPolicy.user\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group\nds = databricks.get_group(display_name=\"Data Science\")\njohn = databricks.get_user(user_name=\"john.doe@example.com\")\nthis = databricks.BudgetPolicy(\"this\",\n    policy_name=\"data-science-budget-policy\",\n    custom_tags=[{\n        \"key\": \"mykey\",\n        \"value\": \"myvalue\",\n    }])\nbudget_policy_usage = databricks.AccessControlRuleSet(\"budget_policy_usage\",\n    name=this.policy_id.apply(lambda policy_id: f\"accounts/{account_id}/budgetPolicies/{policy_id}/ruleSets/default\"),\n    grant_rules=[\n        {\n            \"principals\": [john.acl_principal_id],\n            \"role\": \"roles/budgetPolicy.manager\",\n        },\n        {\n            \"principals\": [ds.acl_principal_id],\n            \"role\": \"roles/budgetPolicy.user\",\n  
      },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group\n    var ds = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var john = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"john.doe@example.com\",\n    });\n\n    var @this = new Databricks.BudgetPolicy(\"this\", new()\n    {\n        PolicyName = \"data-science-budget-policy\",\n        CustomTags = new[]\n        {\n            new Databricks.Inputs.BudgetPolicyCustomTagArgs\n            {\n                Key = \"mykey\",\n                Value = \"myvalue\",\n            },\n        },\n    });\n\n    var budgetPolicyUsage = new Databricks.AccessControlRuleSet(\"budget_policy_usage\", new()\n    {\n        Name = @this.PolicyId.Apply(policyId =\u003e $\"accounts/{accountId}/budgetPolicies/{policyId}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    john.Apply(getUserResult =\u003e getUserResult.AclPrincipalId),\n                },\n                Role = \"roles/budgetPolicy.manager\",\n            },\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.Apply(getGroupResult =\u003e getGroupResult.AclPrincipalId),\n                },\n                Role = \"roles/budgetPolicy.user\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group\n\t\tds, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Data Science\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjohn, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"john.doe@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewBudgetPolicy(ctx, \"this\", \u0026databricks.BudgetPolicyArgs{\n\t\t\tPolicyName: pulumi.String(\"data-science-budget-policy\"),\n\t\t\tCustomTags: databricks.BudgetPolicyCustomTagArray{\n\t\t\t\t\u0026databricks.BudgetPolicyCustomTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"mykey\"),\n\t\t\t\t\tValue: pulumi.String(\"myvalue\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"budget_policy_usage\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: this.PolicyId.ApplyT(func(policyId string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/budgetPolicies/%v/ruleSets/default\", accountId, policyId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(john.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: 
pulumi.String(\"roles/budgetPolicy.manager\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(ds.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/budgetPolicy.user\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.BudgetPolicy;\nimport com.pulumi.databricks.BudgetPolicyArgs;\nimport com.pulumi.databricks.inputs.BudgetPolicyCustomTagArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group\n        final var ds = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        final var john = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"john.doe@example.com\")\n            .build());\n\n        var this_ = new BudgetPolicy(\"this\", BudgetPolicyArgs.builder()\n            .policyName(\"data-science-budget-policy\")\n            .customTags(BudgetPolicyCustomTagArgs.builder()\n                .key(\"mykey\")\n                .value(\"myvalue\")\n                .build())\n            .build());\n\n        var budgetPolicyUsage = new AccessControlRuleSet(\"budgetPolicyUsage\", AccessControlRuleSetArgs.builder()\n            .name(this_.policyId().applyValue(_policyId -\u003e String.format(\"accounts/%s/budgetPolicies/%s/ruleSets/default\", accountId,_policyId)))\n            .grantRules(            \n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(john.aclPrincipalId())\n                    .role(\"roles/budgetPolicy.manager\")\n                    .build(),\n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(ds.aclPrincipalId())\n                    .role(\"roles/budgetPolicy.user\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:BudgetPolicy\n    properties:\n      policyName: data-science-budget-policy\n      customTags:\n        - key: mykey\n          value: myvalue\n  budgetPolicyUsage:\n    type: databricks:AccessControlRuleSet\n    name: budget_policy_usage\n    properties:\n      name: accounts/${accountId}/budgetPolicies/${this.policyId}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${john.aclPrincipalId}\n          role: roles/budgetPolicy.manager\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/budgetPolicy.user\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n  # account level group\n  ds:\n    fn::invoke:\n      function: 
databricks:getGroup\n      arguments:\n        displayName: Data Science\n  john:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: john.doe@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Tag policy usage\n\nAccess to tag policies could be controlled with this resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountId = \"00000000-0000-0000-0000-000000000000\";\n// account level group\nconst ds = databricks.getGroup({\n    displayName: \"Data Science\",\n});\nconst john = databricks.getUser({\n    userName: \"john.doe@example.com\",\n});\nconst _this = new databricks.TagPolicy(\"this\", {\n    tagKey: \"example_tag_key\",\n    description: \"Example description.\",\n    values: [\n        {\n            name: \"example_value_2\",\n        },\n        {\n            name: \"example_value_3\",\n        },\n    ],\n});\nconst tagPolicyUsage = new databricks.AccessControlRuleSet(\"tag_policy_usage\", {\n    name: pulumi.interpolate`accounts/${accountId}/tagPolicies/${_this.id}/ruleSets/default`,\n    grantRules: [\n        {\n            principals: [john.then(john =\u003e john.aclPrincipalId)],\n            role: \"roles/tagPolicy.manager\",\n        },\n        {\n            principals: [ds.then(ds =\u003e ds.aclPrincipalId)],\n            role: \"roles/tagPolicy.assigner\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_id = \"00000000-0000-0000-0000-000000000000\"\n# account level group\nds = databricks.get_group(display_name=\"Data Science\")\njohn = databricks.get_user(user_name=\"john.doe@example.com\")\nthis = databricks.TagPolicy(\"this\",\n    tag_key=\"example_tag_key\",\n    description=\"Example description.\",\n    values=[\n        {\n            \"name\": \"example_value_2\",\n        },\n        {\n            \"name\": \"example_value_3\",\n        },\n    ])\ntag_policy_usage = databricks.AccessControlRuleSet(\"tag_policy_usage\",\n    name=this.id.apply(lambda id: f\"accounts/{account_id}/tagPolicies/{id}/ruleSets/default\"),\n    grant_rules=[\n        {\n            \"principals\": [john.acl_principal_id],\n            \"role\": \"roles/tagPolicy.manager\",\n        },\n        {\n            \"principals\": [ds.acl_principal_id],\n            \"role\": \"roles/tagPolicy.assigner\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n    // account level group\n    var ds = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var john = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"john.doe@example.com\",\n    });\n\n    var @this = new Databricks.TagPolicy(\"this\", new()\n    {\n        TagKey = \"example_tag_key\",\n        Description = \"Example description.\",\n        Values = new[]\n        {\n            new Databricks.Inputs.TagPolicyValueArgs\n            {\n                Name = \"example_value_2\",\n            },\n            new Databricks.Inputs.TagPolicyValueArgs\n            {\n                Name = \"example_value_3\",\n            },\n        },\n    });\n\n    var tagPolicyUsage = new Databricks.AccessControlRuleSet(\"tag_policy_usage\", 
new()\n    {\n        Name = @this.Id.Apply(id =\u003e $\"accounts/{accountId}/tagPolicies/{id}/ruleSets/default\"),\n        GrantRules = new[]\n        {\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    john.Apply(getUserResult =\u003e getUserResult.AclPrincipalId),\n                },\n                Role = \"roles/tagPolicy.manager\",\n            },\n            new Databricks.Inputs.AccessControlRuleSetGrantRuleArgs\n            {\n                Principals = new[]\n                {\n                    ds.Apply(getGroupResult =\u003e getGroupResult.AclPrincipalId),\n                },\n                Role = \"roles/tagPolicy.assigner\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\taccountId := \"00000000-0000-0000-0000-000000000000\"\n\t\t// account level group\n\t\tds, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"Data Science\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjohn, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"john.doe@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewTagPolicy(ctx, \"this\", \u0026databricks.TagPolicyArgs{\n\t\t\tTagKey:      pulumi.String(\"example_tag_key\"),\n\t\t\tDescription: pulumi.String(\"Example description.\"),\n\t\t\tValues: databricks.TagPolicyValueArray{\n\t\t\t\t\u0026databricks.TagPolicyValueArgs{\n\t\t\t\t\tName: pulumi.String(\"example_value_2\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.TagPolicyValueArgs{\n\t\t\t\t\tName: pulumi.String(\"example_value_3\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAccessControlRuleSet(ctx, \"tag_policy_usage\", \u0026databricks.AccessControlRuleSetArgs{\n\t\t\tName: this.ID().ApplyT(func(id string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"accounts/%v/tagPolicies/%v/ruleSets/default\", accountId, id), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tGrantRules: databricks.AccessControlRuleSetGrantRuleArray{\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(john.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/tagPolicy.manager\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AccessControlRuleSetGrantRuleArgs{\n\t\t\t\t\tPrincipals: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(ds.AclPrincipalId),\n\t\t\t\t\t},\n\t\t\t\t\tRole: pulumi.String(\"roles/tagPolicy.assigner\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.TagPolicy;\nimport com.pulumi.databricks.TagPolicyArgs;\nimport com.pulumi.databricks.inputs.TagPolicyValueArgs;\nimport com.pulumi.databricks.AccessControlRuleSet;\nimport com.pulumi.databricks.AccessControlRuleSetArgs;\nimport 
com.pulumi.databricks.inputs.AccessControlRuleSetGrantRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var accountId = \"00000000-0000-0000-0000-000000000000\";\n\n        // account level group\n        final var ds = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        final var john = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"john.doe@example.com\")\n            .build());\n\n        var this_ = new TagPolicy(\"this\", TagPolicyArgs.builder()\n            .tagKey(\"example_tag_key\")\n            .description(\"Example description.\")\n            .values(            \n                TagPolicyValueArgs.builder()\n                    .name(\"example_value_2\")\n                    .build(),\n                TagPolicyValueArgs.builder()\n                    .name(\"example_value_3\")\n                    .build())\n            .build());\n\n        var tagPolicyUsage = new AccessControlRuleSet(\"tagPolicyUsage\", AccessControlRuleSetArgs.builder()\n            .name(this_.id().applyValue(_id -\u003e String.format(\"accounts/%s/tagPolicies/%s/ruleSets/default\", accountId,_id)))\n            .grantRules(            \n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(john.aclPrincipalId())\n                    .role(\"roles/tagPolicy.manager\")\n                    .build(),\n                AccessControlRuleSetGrantRuleArgs.builder()\n                    .principals(ds.aclPrincipalId())\n                    .role(\"roles/tagPolicy.assigner\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:TagPolicy\n    properties:\n      tagKey: example_tag_key\n      description: Example description.\n      values:\n        - name: example_value_2\n        - name: example_value_3\n  tagPolicyUsage:\n    type: databricks:AccessControlRuleSet\n    name: tag_policy_usage\n    properties:\n      name: accounts/${accountId}/tagPolicies/${this.id}/ruleSets/default\n      grantRules:\n        - principals:\n            - ${john.aclPrincipalId}\n          role: roles/tagPolicy.manager\n        - principals:\n            - ${ds.aclPrincipalId}\n          role: roles/tagPolicy.assigner\nvariables:\n  accountId: 00000000-0000-0000-0000-000000000000\n  # account level group\n  ds:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: Data Science\n  john:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: john.doe@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group\n\" pulumi-lang-dotnet=\" databricks.Group\n\" pulumi-lang-go=\" Group\n\" pulumi-lang-python=\" Group\n\" pulumi-lang-yaml=\" databricks.Group\n\" pulumi-lang-java=\" databricks.Group\n\"\u003e databricks.Group\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.User\n\" pulumi-lang-dotnet=\" databricks.User\n\" pulumi-lang-go=\" User\n\" pulumi-lang-python=\" User\n\" pulumi-lang-yaml=\" databricks.User\n\" 
pulumi-lang-java=\" databricks.User\n\"\u003e databricks.User\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal\n\" pulumi-lang-dotnet=\" databricks.ServicePrincipal\n\" pulumi-lang-go=\" ServicePrincipal\n\" pulumi-lang-python=\" ServicePrincipal\n\" pulumi-lang-yaml=\" databricks.ServicePrincipal\n\" pulumi-lang-java=\" databricks.ServicePrincipal\n\"\u003e databricks.ServicePrincipal\n\u003c/span\u003e\n","properties":{"etag":{"type":"string"},"grantRules":{"type":"array","items":{"$ref":"#/types/databricks:index/AccessControlRuleSetGrantRule:AccessControlRuleSetGrantRule"},"description":"The access control rules to be granted by this rule set, consisting of a set of principals and roles to be granted to them.\n\n!\u003e Name uniquely identifies a rule set resource. Ensure all the\u003cspan pulumi-lang-nodejs=\" grantRules \" pulumi-lang-dotnet=\" GrantRules \" pulumi-lang-go=\" grantRules \" pulumi-lang-python=\" grant_rules \" pulumi-lang-yaml=\" grantRules \" pulumi-lang-java=\" grantRules \"\u003e grant_rules \u003c/span\u003eblocks for a rule set name are present in one \u003cspan pulumi-lang-nodejs=\"`databricks.AccessControlRuleSet`\" pulumi-lang-dotnet=\"`databricks.AccessControlRuleSet`\" pulumi-lang-go=\"`AccessControlRuleSet`\" pulumi-lang-python=\"`AccessControlRuleSet`\" pulumi-lang-yaml=\"`databricks.AccessControlRuleSet`\" pulumi-lang-java=\"`databricks.AccessControlRuleSet`\"\u003e`databricks.AccessControlRuleSet`\u003c/span\u003e resource block. Otherwise, after applying changes, users might lose their role assignment even if that was not intended.\n"},"name":{"type":"string","description":"Unique identifier of a rule set. The name determines the resource to which the rule set applies. **Changing the name recreates the resource!**. Currently, only default rule sets are supported. The following rule set formats are supported:\n* `accounts/{account_id}/ruleSets/default` - account-level access control.\n* `accounts/{account_id}/servicePrincipals/{service_principal_application_id}/ruleSets/default` - access control for a specific service principal.\n* `accounts/{account_id}/groups/{group_id}/ruleSets/default` - access control for a specific group.\n* `accounts/{account_id}/budgetPolicies/{budget_policy_id}/ruleSets/default` - access control for a specific budget policy.\n* `accounts/{account_id}/tagPolicies/{tag_policy_id}/ruleSets/default` - access control for a specific tag policy.\n"}},"required":["etag","name"],"inputProperties":{"grantRules":{"type":"array","items":{"$ref":"#/types/databricks:index/AccessControlRuleSetGrantRule:AccessControlRuleSetGrantRule"},"description":"The access control rules to be granted by this rule set, consisting of a set of principals and roles to be granted to them.\n\n!\u003e Name uniquely identifies a rule set resource. 
Ensure all the\u003cspan pulumi-lang-nodejs=\" grantRules \" pulumi-lang-dotnet=\" GrantRules \" pulumi-lang-go=\" grantRules \" pulumi-lang-python=\" grant_rules \" pulumi-lang-yaml=\" grantRules \" pulumi-lang-java=\" grantRules \"\u003e grant_rules \u003c/span\u003eblocks for a rule set name are present in one \u003cspan pulumi-lang-nodejs=\"`databricks.AccessControlRuleSet`\" pulumi-lang-dotnet=\"`databricks.AccessControlRuleSet`\" pulumi-lang-go=\"`AccessControlRuleSet`\" pulumi-lang-python=\"`AccessControlRuleSet`\" pulumi-lang-yaml=\"`databricks.AccessControlRuleSet`\" pulumi-lang-java=\"`databricks.AccessControlRuleSet`\"\u003e`databricks.AccessControlRuleSet`\u003c/span\u003e resource block. Otherwise, after applying changes, users might lose their role assignment even if that was not intended.\n"},"name":{"type":"string","description":"Unique identifier of a rule set. The name determines the resource to which the rule set applies. **Changing the name recreates the resource!**. Currently, only default rule sets are supported. The following rule set formats are supported:\n* `accounts/{account_id}/ruleSets/default` - account-level access control.\n* `accounts/{account_id}/servicePrincipals/{service_principal_application_id}/ruleSets/default` - access control for a specific service principal.\n* `accounts/{account_id}/groups/{group_id}/ruleSets/default` - access control for a specific group.\n* `accounts/{account_id}/budgetPolicies/{budget_policy_id}/ruleSets/default` - access control for a specific budget policy.\n* `accounts/{account_id}/tagPolicies/{tag_policy_id}/ruleSets/default` - access control for a specific tag policy.\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering AccessControlRuleSet resources.\n","properties":{"etag":{"type":"string"},"grantRules":{"type":"array","items":{"$ref":"#/types/databricks:index/AccessControlRuleSetGrantRule:AccessControlRuleSetGrantRule"},"description":"The access control rules to be granted by this rule set, consisting of a set of principals and roles to be granted to them.\n\n!\u003e Name uniquely identifies a rule set resource. Ensure all the\u003cspan pulumi-lang-nodejs=\" grantRules \" pulumi-lang-dotnet=\" GrantRules \" pulumi-lang-go=\" grantRules \" pulumi-lang-python=\" grant_rules \" pulumi-lang-yaml=\" grantRules \" pulumi-lang-java=\" grantRules \"\u003e grant_rules \u003c/span\u003eblocks for a rule set name are present in one \u003cspan pulumi-lang-nodejs=\"`databricks.AccessControlRuleSet`\" pulumi-lang-dotnet=\"`databricks.AccessControlRuleSet`\" pulumi-lang-go=\"`AccessControlRuleSet`\" pulumi-lang-python=\"`AccessControlRuleSet`\" pulumi-lang-yaml=\"`databricks.AccessControlRuleSet`\" pulumi-lang-java=\"`databricks.AccessControlRuleSet`\"\u003e`databricks.AccessControlRuleSet`\u003c/span\u003e resource block. Otherwise, after applying changes, users might lose their role assignment even if that was not intended.\n"},"name":{"type":"string","description":"Unique identifier of a rule set. The name determines the resource to which the rule set applies. **Changing the name recreates the resource!**. Currently, only default rule sets are supported. 
The following rule set formats are supported:\n* `accounts/{account_id}/ruleSets/default` - account-level access control.\n* `accounts/{account_id}/servicePrincipals/{service_principal_application_id}/ruleSets/default` - access control for a specific service principal.\n* `accounts/{account_id}/groups/{group_id}/ruleSets/default` - access control for a specific group.\n* `accounts/{account_id}/budgetPolicies/{budget_policy_id}/ruleSets/default` - access control for a specific budget policy.\n* `accounts/{account_id}/tagPolicies/{tag_policy_id}/ruleSets/default` - access control for a specific tag policy.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/accountFederationPolicy:AccountFederationPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nAccount federation policies allow users and service principals in your Databricks account to securely access Databricks APIs using tokens from your trusted identity providers (IdPs).\n\nToken federation policies eliminate the need to manage Databricks secrets, and allow you to centralize management of token issuance policies in your IdP. Databricks token federation policies are typically used in combination with [SCIM](https://www.terraform.io/admin/users-groups/scim/index.html), so users in your IdP are synchronized into your Databricks account.\n\nAn account federation policy specifies:\n* which IdP, or issuer, your Databricks account should accept tokens from\n* how to determine which Databricks user, or subject, a token is issued for\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.AccountFederationPolicy(\"this\", {\n    policyId: \"my-policy\",\n    oidcPolicy: {\n        issuer: \"https://myidp.example.com\",\n        subjectClaim: \"sub\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.AccountFederationPolicy(\"this\",\n    policy_id=\"my-policy\",\n    oidc_policy={\n        \"issuer\": \"https://myidp.example.com\",\n        \"subject_claim\": \"sub\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.AccountFederationPolicy(\"this\", new()\n    {\n        PolicyId = \"my-policy\",\n        OidcPolicy = new Databricks.Inputs.AccountFederationPolicyOidcPolicyArgs\n        {\n            Issuer = \"https://myidp.example.com\",\n            SubjectClaim = \"sub\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAccountFederationPolicy(ctx, \"this\", \u0026databricks.AccountFederationPolicyArgs{\n\t\t\tPolicyId: pulumi.String(\"my-policy\"),\n\t\t\tOidcPolicy: \u0026databricks.AccountFederationPolicyOidcPolicyArgs{\n\t\t\t\tIssuer:       pulumi.String(\"https://myidp.example.com\"),\n\t\t\t\tSubjectClaim: pulumi.String(\"sub\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport 
com.pulumi.core.Output;\nimport com.pulumi.databricks.AccountFederationPolicy;\nimport com.pulumi.databricks.AccountFederationPolicyArgs;\nimport com.pulumi.databricks.inputs.AccountFederationPolicyOidcPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new AccountFederationPolicy(\"this\", AccountFederationPolicyArgs.builder()\n            .policyId(\"my-policy\")\n            .oidcPolicy(AccountFederationPolicyOidcPolicyArgs.builder()\n                .issuer(\"https://myidp.example.com\")\n                .subjectClaim(\"sub\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:AccountFederationPolicy\n    properties:\n      policyId: my-policy\n      oidcPolicy:\n        issuer: https://myidp.example.com\n        subjectClaim: sub\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/AccountFederationPolicyOidcPolicy:AccountFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"required":["createTime","name","policyId","servicePrincipalId","uid","updateTime"],"inputProperties":{"description":{"type":"string","description":"Description of the federation policy\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/AccountFederationPolicyOidcPolicy:AccountFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. 
Only set for service principal federation policies\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering AccountFederationPolicy resources.\n","properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/AccountFederationPolicyOidcPolicy:AccountFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"type":"object"}},"databricks:index/accountNetworkPolicy:AccountNetworkPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nNetwork policies control which network destinations can be accessed from the Databricks environment. \n\nEach Databricks account includes a default policy named `default-policy`. 
This policy is:\n\n- Associated with any workspace lacking an explicit network policy assignment\n- Automatically associated with each newly created workspace\n- Reserved and cannot be deleted, but can be updated to customize the default network access rules for your account\n\nThe `default-policy` provides a baseline security configuration that ensures all workspaces have network access controls in place.\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst exampleNetworkPolicy = new databricks.AccountNetworkPolicy(\"example_network_policy\", {\n    networkPolicyId: \"example-network-policy\",\n    egress: {\n        networkAccess: {\n            restrictionMode: \"RESTRICTED_ACCESS\",\n            allowedInternetDestinations: [{\n                destination: \"example.com\",\n                internetDestinationType: \"DNS_NAME\",\n            }],\n            allowedStorageDestinations: [{\n                bucketName: \"example-aws-cloud-storage\",\n                region: \"us-west-1\",\n                storageDestinationType: \"AWS_S3\",\n            }],\n            policyEnforcement: {\n                enforcementMode: \"ENFORCED\",\n            },\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexample_network_policy = databricks.AccountNetworkPolicy(\"example_network_policy\",\n    network_policy_id=\"example-network-policy\",\n    egress={\n        \"network_access\": {\n            \"restriction_mode\": \"RESTRICTED_ACCESS\",\n            \"allowed_internet_destinations\": [{\n                \"destination\": \"example.com\",\n                \"internet_destination_type\": \"DNS_NAME\",\n            }],\n            \"allowed_storage_destinations\": [{\n                \"bucket_name\": \"example-aws-cloud-storage\",\n                \"region\": \"us-west-1\",\n                \"storage_destination_type\": \"AWS_S3\",\n            }],\n            \"policy_enforcement\": {\n                \"enforcement_mode\": \"ENFORCED\",\n            },\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var exampleNetworkPolicy = new Databricks.AccountNetworkPolicy(\"example_network_policy\", new()\n    {\n        NetworkPolicyId = \"example-network-policy\",\n        Egress = new Databricks.Inputs.AccountNetworkPolicyEgressArgs\n        {\n            NetworkAccess = new Databricks.Inputs.AccountNetworkPolicyEgressNetworkAccessArgs\n            {\n                RestrictionMode = \"RESTRICTED_ACCESS\",\n                AllowedInternetDestinations = new[]\n                {\n                    new Databricks.Inputs.AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestinationArgs\n                    {\n                        Destination = \"example.com\",\n                        InternetDestinationType = \"DNS_NAME\",\n                    },\n                },\n                AllowedStorageDestinations = new[]\n                {\n                    new Databricks.Inputs.AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestinationArgs\n                    {\n                        BucketName = \"example-aws-cloud-storage\",\n                        Region = 
\"us-west-1\",\n                        StorageDestinationType = \"AWS_S3\",\n                    },\n                },\n                PolicyEnforcement = new Databricks.Inputs.AccountNetworkPolicyEgressNetworkAccessPolicyEnforcementArgs\n                {\n                    EnforcementMode = \"ENFORCED\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAccountNetworkPolicy(ctx, \"example_network_policy\", \u0026databricks.AccountNetworkPolicyArgs{\n\t\t\tNetworkPolicyId: pulumi.String(\"example-network-policy\"),\n\t\t\tEgress: \u0026databricks.AccountNetworkPolicyEgressArgs{\n\t\t\t\tNetworkAccess: \u0026databricks.AccountNetworkPolicyEgressNetworkAccessArgs{\n\t\t\t\t\tRestrictionMode: pulumi.String(\"RESTRICTED_ACCESS\"),\n\t\t\t\t\tAllowedInternetDestinations: databricks.AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestinationArray{\n\t\t\t\t\t\t\u0026databricks.AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestinationArgs{\n\t\t\t\t\t\t\tDestination:             pulumi.String(\"example.com\"),\n\t\t\t\t\t\t\tInternetDestinationType: pulumi.String(\"DNS_NAME\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tAllowedStorageDestinations: databricks.AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestinationArray{\n\t\t\t\t\t\t\u0026databricks.AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestinationArgs{\n\t\t\t\t\t\t\tBucketName:             pulumi.String(\"example-aws-cloud-storage\"),\n\t\t\t\t\t\t\tRegion:                 pulumi.String(\"us-west-1\"),\n\t\t\t\t\t\t\tStorageDestinationType: pulumi.String(\"AWS_S3\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tPolicyEnforcement: \u0026databricks.AccountNetworkPolicyEgressNetworkAccessPolicyEnforcementArgs{\n\t\t\t\t\t\tEnforcementMode: pulumi.String(\"ENFORCED\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AccountNetworkPolicy;\nimport com.pulumi.databricks.AccountNetworkPolicyArgs;\nimport com.pulumi.databricks.inputs.AccountNetworkPolicyEgressArgs;\nimport com.pulumi.databricks.inputs.AccountNetworkPolicyEgressNetworkAccessArgs;\nimport com.pulumi.databricks.inputs.AccountNetworkPolicyEgressNetworkAccessPolicyEnforcementArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var exampleNetworkPolicy = new AccountNetworkPolicy(\"exampleNetworkPolicy\", AccountNetworkPolicyArgs.builder()\n            .networkPolicyId(\"example-network-policy\")\n            .egress(AccountNetworkPolicyEgressArgs.builder()\n                .networkAccess(AccountNetworkPolicyEgressNetworkAccessArgs.builder()\n                    .restrictionMode(\"RESTRICTED_ACCESS\")\n                    .allowedInternetDestinations(AccountNetworkPolicyEgressNetworkAccessAllowedInternetDestinationArgs.builder()\n                        .destination(\"example.com\")\n                        
.internetDestinationType(\"DNS_NAME\")\n                        .build())\n                    .allowedStorageDestinations(AccountNetworkPolicyEgressNetworkAccessAllowedStorageDestinationArgs.builder()\n                        .bucketName(\"example-aws-cloud-storage\")\n                        .region(\"us-west-1\")\n                        .storageDestinationType(\"AWS_S3\")\n                        .build())\n                    .policyEnforcement(AccountNetworkPolicyEgressNetworkAccessPolicyEnforcementArgs.builder()\n                        .enforcementMode(\"ENFORCED\")\n                        .build())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  exampleNetworkPolicy:\n    type: databricks:AccountNetworkPolicy\n    name: example_network_policy\n    properties:\n      networkPolicyId: example-network-policy\n      egress:\n        networkAccess:\n          restrictionMode: RESTRICTED_ACCESS\n          allowedInternetDestinations:\n            - destination: example.com\n              internetDestinationType: DNS_NAME\n          allowedStorageDestinations:\n            - bucketName: example-aws-cloud-storage\n              region: us-west-1\n              storageDestinationType: AWS_S3\n          policyEnforcement:\n            enforcementMode: ENFORCED\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"accountId":{"type":"string","description":"The associated account ID for this Network Policy object\n"},"egress":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgress:AccountNetworkPolicyEgress","description":"The network policies applying for egress traffic\n"},"networkPolicyId":{"type":"string","description":"The unique identifier for the network policy\n"}},"inputProperties":{"accountId":{"type":"string","description":"The associated account ID for this Network Policy object\n"},"egress":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgress:AccountNetworkPolicyEgress","description":"The network policies applying for egress traffic\n"},"networkPolicyId":{"type":"string","description":"The unique identifier for the network policy\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering AccountNetworkPolicy resources.\n","properties":{"accountId":{"type":"string","description":"The associated account ID for this Network Policy object\n"},"egress":{"$ref":"#/types/databricks:index/AccountNetworkPolicyEgress:AccountNetworkPolicyEgress","description":"The network policies applying for egress traffic\n"},"networkPolicyId":{"type":"string","description":"The unique identifier for the network policy\n"}},"type":"object"}},"databricks:index/accountSettingUserPreferenceV2:AccountSettingUserPreferenceV2":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nUser preference is a configurable value that determines how a feature or behavior works for a specific user within the Databricks platform.\n\nSee user settings-metadata API for list of user preferences that can be modified using this resource.\n\n\n## Example Usage\n\nSetting an account user preference:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst themeSetting = new databricks.index.AccountUserSettingV2(\"theme_setting\", {\n    userId: \"\u003cuser-id\u003e\",\n    
name: \"enableDarkMode\",\n    stringVal: {\n        value: \"dark\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntheme_setting = databricks.index.AccountUserSettingV2(\"theme_setting\",\n    user_id=\u003cuser-id\u003e,\n    name=enableDarkMode,\n    string_val={\n        value: dark,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var themeSetting = new Databricks.Index.AccountUserSettingV2(\"theme_setting\", new()\n    {\n        UserId = \"\u003cuser-id\u003e\",\n        Name = \"enableDarkMode\",\n        StringVal = \n        {\n            { \"value\", \"dark\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAccountUserSettingV2(ctx, \"theme_setting\", \u0026databricks.AccountUserSettingV2Args{\n\t\t\tUserId: \"\u003cuser-id\u003e\",\n\t\t\tName:   \"enableDarkMode\",\n\t\t\tStringVal: map[string]interface{}{\n\t\t\t\t\"value\": \"dark\",\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AccountUserSettingV2;\nimport com.pulumi.databricks.AccountUserSettingV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var themeSetting = new AccountUserSettingV2(\"themeSetting\", AccountUserSettingV2Args.builder()\n            .userId(\"\u003cuser-id\u003e\")\n            .name(\"enableDarkMode\")\n            .stringVal(Map.of(\"value\", \"dark\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  themeSetting:\n    type: databricks:AccountUserSettingV2\n    name: theme_setting\n    properties:\n      userId: \u003cuser-id\u003e\n      name: enableDarkMode\n      stringVal:\n        value: dark\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nSetting a boolean user preference:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst enableLineNumbers = new databricks.index.AccountUserSettingV2(\"enable_line_numbers\", {\n    userId: \"\u003cuser-id\u003e\",\n    name: \"enableLineNumbers\",\n    booleanVal: {\n        value: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nenable_line_numbers = databricks.index.AccountUserSettingV2(\"enable_line_numbers\",\n    user_id=\u003cuser-id\u003e,\n    name=enableLineNumbers,\n    boolean_val={\n        value: True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var enableLineNumbers = new Databricks.Index.AccountUserSettingV2(\"enable_line_numbers\", new()\n    {\n        UserId = \"\u003cuser-id\u003e\",\n        Name = \"enableLineNumbers\",\n        BooleanVal = \n        
{\n            { \"value\", true },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAccountUserSettingV2(ctx, \"enable_line_numbers\", \u0026databricks.AccountUserSettingV2Args{\n\t\t\tUserId: \"\u003cuser-id\u003e\",\n\t\t\tName:   \"enableLineNumbers\",\n\t\t\tBooleanVal: map[string]interface{}{\n\t\t\t\t\"value\": true,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AccountUserSettingV2;\nimport com.pulumi.databricks.AccountUserSettingV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var enableLineNumbers = new AccountUserSettingV2(\"enableLineNumbers\", AccountUserSettingV2Args.builder()\n            .userId(\"\u003cuser-id\u003e\")\n            .name(\"enableLineNumbers\")\n            .booleanVal(Map.of(\"value\", true))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  enableLineNumbers:\n    type: databricks:AccountUserSettingV2\n    name: enable_line_numbers\n    properties:\n      userId: \u003cuser-id\u003e\n      name: enableLineNumbers\n      booleanVal:\n        value: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2BooleanVal:AccountSettingUserPreferenceV2BooleanVal"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2EffectiveBooleanVal:AccountSettingUserPreferenceV2EffectiveBooleanVal","description":"(BooleanMessage)\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2EffectiveStringVal:AccountSettingUserPreferenceV2EffectiveStringVal","description":"(StringMessage)\n"},"name":{"type":"string","description":"Name of the setting\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2StringVal:AccountSettingUserPreferenceV2StringVal"},"userId":{"type":"string","description":"User ID of the user\n"}},"required":["effectiveBooleanVal","effectiveStringVal","name"],"inputProperties":{"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2BooleanVal:AccountSettingUserPreferenceV2BooleanVal"},"name":{"type":"string","description":"Name of the setting\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2StringVal:AccountSettingUserPreferenceV2StringVal"},"userId":{"type":"string","description":"User ID of the user\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering AccountSettingUserPreferenceV2 
resources.\n","properties":{"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2BooleanVal:AccountSettingUserPreferenceV2BooleanVal"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2EffectiveBooleanVal:AccountSettingUserPreferenceV2EffectiveBooleanVal","description":"(BooleanMessage)\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2EffectiveStringVal:AccountSettingUserPreferenceV2EffectiveStringVal","description":"(StringMessage)\n"},"name":{"type":"string","description":"Name of the setting\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingUserPreferenceV2StringVal:AccountSettingUserPreferenceV2StringVal"},"userId":{"type":"string","description":"User ID of the user\n"}},"type":"object"}},"databricks:index/accountSettingV2:AccountSettingV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nSetting is a configurable value or control that determines how a feature or behavior works within the databricks platform.\n\n[//]: # (todo: add public link to metadata api after production doc link available)\nSee settings-metadata api for list of settings that can be modified using this resource. \n\n## Example Usage\n\nGetting an account level setting:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.AccountSettingV2(\"this\", {\n    name: \"llm_proxy_partner_powered\",\n    booleanVal: {\n        value: false,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.AccountSettingV2(\"this\",\n    name=\"llm_proxy_partner_powered\",\n    boolean_val={\n        \"value\": False,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.AccountSettingV2(\"this\", new()\n    {\n        Name = \"llm_proxy_partner_powered\",\n        BooleanVal = new Databricks.Inputs.AccountSettingV2BooleanValArgs\n        {\n            Value = false,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAccountSettingV2(ctx, \"this\", \u0026databricks.AccountSettingV2Args{\n\t\t\tName: pulumi.String(\"llm_proxy_partner_powered\"),\n\t\t\tBooleanVal: \u0026databricks.AccountSettingV2BooleanValArgs{\n\t\t\t\tValue: pulumi.Bool(false),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AccountSettingV2;\nimport com.pulumi.databricks.AccountSettingV2Args;\nimport com.pulumi.databricks.inputs.AccountSettingV2BooleanValArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n       
 var this_ = new AccountSettingV2(\"this\", AccountSettingV2Args.builder()\n            .name(\"llm_proxy_partner_powered\")\n            .booleanVal(AccountSettingV2BooleanValArgs.builder()\n                .value(false)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:AccountSettingV2\n    properties:\n      name: llm_proxy_partner_powered\n      booleanVal:\n        value: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingAccessPolicy:AccountSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingApprovedDomains:AccountSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspace:AccountSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingV2BooleanVal:AccountSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveBooleanVal:AccountSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveIntegerVal:AccountSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2EffectivePersonalCompute:AccountSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveRestrictWorkspaceAdmins:AccountSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveStringVal:AccountSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"integerVal":{"$ref":"#/types/databricks:index/AccountSettingV2IntegerVal:AccountSettingV2IntegerVal","description":"Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2PersonalCompute:AccountSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2RestrictWorkspaceAdmins:AccountSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingV2StringVal:AccountSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"required":["effectiveBooleanVal","effectiveIntegerVal","effectiveStringVal","name"],"inputProperties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingAccessPolicy:AccountSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingApprovedDomains:AccountSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspace:AccountSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingV2BooleanVal:AccountSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2EffectivePersonalCompute:AccountSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveRestrictWorkspaceAdmins:AccountSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. To set a value use restrict_workspace_admins\n"},"integerVal":{"$ref":"#/types/databricks:index/AccountSettingV2IntegerVal:AccountSettingV2IntegerVal","description":"Setting value for integer type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2PersonalCompute:AccountSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2RestrictWorkspaceAdmins:AccountSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingV2StringVal:AccountSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering AccountSettingV2 resources.\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingAccessPolicy:AccountSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2AibiDashboardEmbeddingApprovedDomains:AccountSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2AutomaticClusterUpdateWorkspace:AccountSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/AccountSettingV2BooleanVal:AccountSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:AccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:AccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:AccountSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveBooleanVal:AccountSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveIntegerVal:AccountSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2EffectivePersonalCompute:AccountSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveRestrictWorkspaceAdmins:AccountSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/AccountSettingV2EffectiveStringVal:AccountSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"integerVal":{"$ref":"#/types/databricks:index/AccountSettingV2IntegerVal:AccountSettingV2IntegerVal","description":"Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/AccountSettingV2PersonalCompute:AccountSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/AccountSettingV2RestrictWorkspaceAdmins:AccountSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/AccountSettingV2StringVal:AccountSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"type":"object"}},"databricks:index/aibiDashboardEmbeddingAccessPolicySetting:AibiDashboardEmbeddingAccessPolicySetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.AibiDashboardEmbeddingAccessPolicySetting`\" pulumi-lang-dotnet=\"`databricks.AibiDashboardEmbeddingAccessPolicySetting`\" pulumi-lang-go=\"`AibiDashboardEmbeddingAccessPolicySetting`\" pulumi-lang-python=\"`AibiDashboardEmbeddingAccessPolicySetting`\" pulumi-lang-yaml=\"`databricks.AibiDashboardEmbeddingAccessPolicySetting`\" pulumi-lang-java=\"`databricks.AibiDashboardEmbeddingAccessPolicySetting`\"\u003e`databricks.AibiDashboardEmbeddingAccessPolicySetting`\u003c/span\u003e resource allows you to control [embedding of AI/BI Dashboards](https://learn.microsoft.com/en-us/azure/databricks/dashboards/admin/#manage-dashboard-embedding) into other sites.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", {aibiDashboardEmbeddingAccessPolicy: {\n    accessPolicyType: \"ALLOW_APPROVED_DOMAINS\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", aibi_dashboard_embedding_access_policy={\n    \"access_policy_type\": \"ALLOW_APPROVED_DOMAINS\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", new()\n    {\n        AibiDashboardEmbeddingAccessPolicy = new Databricks.Inputs.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs\n        {\n            AccessPolicyType = \"ALLOW_APPROVED_DOMAINS\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAibiDashboardEmbeddingAccessPolicySetting(ctx, \"this\", \u0026databricks.AibiDashboardEmbeddingAccessPolicySettingArgs{\n\t\t\tAibiDashboardEmbeddingAccessPolicy: \u0026databricks.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs{\n\t\t\t\tAccessPolicyType: pulumi.String(\"ALLOW_APPROVED_DOMAINS\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AibiDashboardEmbeddingAccessPolicySetting;\nimport com.pulumi.databricks.AibiDashboardEmbeddingAccessPolicySettingArgs;\nimport com.pulumi.databricks.inputs.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new AibiDashboardEmbeddingAccessPolicySetting(\"this\", AibiDashboardEmbeddingAccessPolicySettingArgs.builder()\n            .aibiDashboardEmbeddingAccessPolicy(AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs.builder()\n                .accessPolicyType(\"ALLOW_APPROVED_DOMAINS\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:AibiDashboardEmbeddingAccessPolicySetting\n    properties:\n      aibiDashboardEmbeddingAccessPolicy:\n        accessPolicyType: ALLOW_APPROVED_DOMAINS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n-\u003cspan pulumi-lang-nodejs=\" databricks.AibiDashboardEmbeddingApprovedDomainsSetting \" pulumi-lang-dotnet=\" databricks.AibiDashboardEmbeddingApprovedDomainsSetting \" pulumi-lang-go=\" AibiDashboardEmbeddingApprovedDomainsSetting \" pulumi-lang-python=\" AibiDashboardEmbeddingApprovedDomainsSetting \" pulumi-lang-yaml=\" databricks.AibiDashboardEmbeddingApprovedDomainsSetting \" pulumi-lang-java=\" databricks.AibiDashboardEmbeddingApprovedDomainsSetting \"\u003e databricks.AibiDashboardEmbeddingApprovedDomainsSetting \u003c/span\u003eis used to control approved domains.\n\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy:AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingProviderConfig:AibiDashboardEmbeddingAccessPolicySettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"required":["aibiDashboardEmbeddingAccessPolicy","etag","settingName"],"inputProperties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy:AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingProviderConfig:AibiDashboardEmbeddingAccessPolicySettingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"settingName":{"type":"string"}},"requiredInputs":["aibiDashboardEmbeddingAccessPolicy"],"stateInputs":{"description":"Input properties used for looking up and filtering AibiDashboardEmbeddingAccessPolicySetting resources.\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy:AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingAccessPolicySettingProviderConfig:AibiDashboardEmbeddingAccessPolicySettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/aibiDashboardEmbeddingApprovedDomainsSetting:AibiDashboardEmbeddingApprovedDomainsSetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.AibiDashboardEmbeddingApprovedDomainsSetting`\" pulumi-lang-dotnet=\"`databricks.AibiDashboardEmbeddingApprovedDomainsSetting`\" pulumi-lang-go=\"`AibiDashboardEmbeddingApprovedDomainsSetting`\" pulumi-lang-python=\"`AibiDashboardEmbeddingApprovedDomainsSetting`\" pulumi-lang-yaml=\"`databricks.AibiDashboardEmbeddingApprovedDomainsSetting`\" pulumi-lang-java=\"`databricks.AibiDashboardEmbeddingApprovedDomainsSetting`\"\u003e`databricks.AibiDashboardEmbeddingApprovedDomainsSetting`\u003c/span\u003e resource allows you to specify the list of domains allowed for  [embedding of AI/BI Dashboards](https://learn.microsoft.com/en-us/azure/databricks/dashboards/admin/#manage-dashboard-embedding) into other sites.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", {aibiDashboardEmbeddingAccessPolicy: {\n    accessPolicyType: \"ALLOW_APPROVED_DOMAINS\",\n}});\nconst thisAibiDashboardEmbeddingApprovedDomainsSetting = new databricks.AibiDashboardEmbeddingApprovedDomainsSetting(\"this\", {aibiDashboardEmbeddingApprovedDomains: {\n    approvedDomains: [\"test.com\"],\n}}, {\n    dependsOn: [_this],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", aibi_dashboard_embedding_access_policy={\n    \"access_policy_type\": \"ALLOW_APPROVED_DOMAINS\",\n})\nthis_aibi_dashboard_embedding_approved_domains_setting = databricks.AibiDashboardEmbeddingApprovedDomainsSetting(\"this\", aibi_dashboard_embedding_approved_domains={\n    \"approved_domains\": [\"test.com\"],\n},\nopts = pulumi.ResourceOptions(depends_on=[this]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.AibiDashboardEmbeddingAccessPolicySetting(\"this\", new()\n    {\n        AibiDashboardEmbeddingAccessPolicy = new Databricks.Inputs.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs\n        {\n            AccessPolicyType = \"ALLOW_APPROVED_DOMAINS\",\n        },\n    });\n\n    var 
thisAibiDashboardEmbeddingApprovedDomainsSetting = new Databricks.AibiDashboardEmbeddingApprovedDomainsSetting(\"this\", new()\n    {\n        AibiDashboardEmbeddingApprovedDomains = new Databricks.Inputs.AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomainsArgs\n        {\n            ApprovedDomains = new[]\n            {\n                \"test.com\",\n            },\n        },\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            @this,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewAibiDashboardEmbeddingAccessPolicySetting(ctx, \"this\", \u0026databricks.AibiDashboardEmbeddingAccessPolicySettingArgs{\n\t\t\tAibiDashboardEmbeddingAccessPolicy: \u0026databricks.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs{\n\t\t\t\tAccessPolicyType: pulumi.String(\"ALLOW_APPROVED_DOMAINS\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAibiDashboardEmbeddingApprovedDomainsSetting(ctx, \"this\", \u0026databricks.AibiDashboardEmbeddingApprovedDomainsSettingArgs{\n\t\t\tAibiDashboardEmbeddingApprovedDomains: \u0026databricks.AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomainsArgs{\n\t\t\t\tApprovedDomains: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"test.com\"),\n\t\t\t\t},\n\t\t\t},\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthis,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AibiDashboardEmbeddingAccessPolicySetting;\nimport com.pulumi.databricks.AibiDashboardEmbeddingAccessPolicySettingArgs;\nimport com.pulumi.databricks.inputs.AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs;\nimport com.pulumi.databricks.AibiDashboardEmbeddingApprovedDomainsSetting;\nimport com.pulumi.databricks.AibiDashboardEmbeddingApprovedDomainsSettingArgs;\nimport com.pulumi.databricks.inputs.AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomainsArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new AibiDashboardEmbeddingAccessPolicySetting(\"this\", AibiDashboardEmbeddingAccessPolicySettingArgs.builder()\n            .aibiDashboardEmbeddingAccessPolicy(AibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicyArgs.builder()\n                .accessPolicyType(\"ALLOW_APPROVED_DOMAINS\")\n                .build())\n            .build());\n\n        var thisAibiDashboardEmbeddingApprovedDomainsSetting = new AibiDashboardEmbeddingApprovedDomainsSetting(\"thisAibiDashboardEmbeddingApprovedDomainsSetting\", AibiDashboardEmbeddingApprovedDomainsSettingArgs.builder()\n            .aibiDashboardEmbeddingApprovedDomains(AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomainsArgs.builder()\n                
.approvedDomains(\"test.com\")\n                .build())\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(this_)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:AibiDashboardEmbeddingAccessPolicySetting\n    properties:\n      aibiDashboardEmbeddingAccessPolicy:\n        accessPolicyType: ALLOW_APPROVED_DOMAINS\n  thisAibiDashboardEmbeddingApprovedDomainsSetting:\n    type: databricks:AibiDashboardEmbeddingApprovedDomainsSetting\n    name: this\n    properties:\n      aibiDashboardEmbeddingApprovedDomains:\n        approvedDomains:\n          - test.com\n    options:\n      dependsOn:\n        - ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n-\u003cspan pulumi-lang-nodejs=\" databricks.AibiDashboardEmbeddingAccessPolicySetting \" pulumi-lang-dotnet=\" databricks.AibiDashboardEmbeddingAccessPolicySetting \" pulumi-lang-go=\" AibiDashboardEmbeddingAccessPolicySetting \" pulumi-lang-python=\" AibiDashboardEmbeddingAccessPolicySetting \" pulumi-lang-yaml=\" databricks.AibiDashboardEmbeddingAccessPolicySetting \" pulumi-lang-java=\" databricks.AibiDashboardEmbeddingAccessPolicySetting \"\u003e databricks.AibiDashboardEmbeddingAccessPolicySetting \u003c/span\u003eis used to control embedding policy.\n\n","properties":{"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains:AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig:AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"required":["aibiDashboardEmbeddingApprovedDomains","etag","settingName"],"inputProperties":{"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains:AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig:AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"settingName":{"type":"string"}},"requiredInputs":["aibiDashboardEmbeddingApprovedDomains"],"stateInputs":{"description":"Input properties used for looking up and filtering AibiDashboardEmbeddingApprovedDomainsSetting resources.\n","properties":{"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains:AibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig:AibiDashboardEmbeddingApprovedDomainsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/alert:Alert":{"description":"This resource allows you to manage [Databricks SQL Alerts](https://docs.databricks.com/en/sql/user/alerts/index.html).  It supersedes\u003cspan pulumi-lang-nodejs=\" databricks.SqlAlert \" pulumi-lang-dotnet=\" databricks.SqlAlert \" pulumi-lang-go=\" SqlAlert \" pulumi-lang-python=\" SqlAlert \" pulumi-lang-yaml=\" databricks.SqlAlert \" pulumi-lang-java=\" databricks.SqlAlert \"\u003e databricks.SqlAlert \u003c/span\u003eresource - see migration guide below for more details.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sharedDir = new databricks.Directory(\"shared_dir\", {path: \"/Shared/Queries\"});\n// This will be replaced with new databricks_query resource\nconst _this = new databricks.Query(\"this\", {\n    warehouseId: example.id,\n    displayName: \"My Query Name\",\n    queryText: \"SELECT 42 as value\",\n    parentPath: sharedDir.path,\n});\nconst alert = new databricks.Alert(\"alert\", {\n    queryId: _this.id,\n    displayName: \"TF new alert\",\n    parentPath: sharedDir.path,\n    condition: {\n        op: \"GREATER_THAN\",\n        operand: {\n            column: {\n                name: \"value\",\n            },\n        },\n        threshold: {\n            value: {\n                doubleValue: 42,\n            },\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nshared_dir = databricks.Directory(\"shared_dir\", path=\"/Shared/Queries\")\n# This will be replaced with new databricks_query resource\nthis = databricks.Query(\"this\",\n    warehouse_id=example[\"id\"],\n    display_name=\"My Query Name\",\n    query_text=\"SELECT 42 as value\",\n    parent_path=shared_dir.path)\nalert = databricks.Alert(\"alert\",\n    query_id=this.id,\n    display_name=\"TF new alert\",\n    parent_path=shared_dir.path,\n    condition={\n        \"op\": \"GREATER_THAN\",\n        \"operand\": {\n            \"column\": {\n                \"name\": \"value\",\n            },\n        },\n        \"threshold\": {\n            \"value\": {\n                \"double_value\": 42,\n            },\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sharedDir = new 
Databricks.Directory(\"shared_dir\", new()\n    {\n        Path = \"/Shared/Queries\",\n    });\n\n    // This will be replaced with new databricks_query resource\n    var @this = new Databricks.Query(\"this\", new()\n    {\n        WarehouseId = example.Id,\n        DisplayName = \"My Query Name\",\n        QueryText = \"SELECT 42 as value\",\n        ParentPath = sharedDir.Path,\n    });\n\n    var alert = new Databricks.Alert(\"alert\", new()\n    {\n        QueryId = @this.Id,\n        DisplayName = \"TF new alert\",\n        ParentPath = sharedDir.Path,\n        Condition = new Databricks.Inputs.AlertConditionArgs\n        {\n            Op = \"GREATER_THAN\",\n            Operand = new Databricks.Inputs.AlertConditionOperandArgs\n            {\n                Column = new Databricks.Inputs.AlertConditionOperandColumnArgs\n                {\n                    Name = \"value\",\n                },\n            },\n            Threshold = new Databricks.Inputs.AlertConditionThresholdArgs\n            {\n                Value = new Databricks.Inputs.AlertConditionThresholdValueArgs\n                {\n                    DoubleValue = 42,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsharedDir, err := databricks.NewDirectory(ctx, \"shared_dir\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Shared/Queries\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// This will be replaced with new databricks_query resource\n\t\tthis, err := databricks.NewQuery(ctx, \"this\", \u0026databricks.QueryArgs{\n\t\t\tWarehouseId: pulumi.Any(example.Id),\n\t\t\tDisplayName: pulumi.String(\"My Query Name\"),\n\t\t\tQueryText:   pulumi.String(\"SELECT 42 as value\"),\n\t\t\tParentPath:  sharedDir.Path,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewAlert(ctx, \"alert\", \u0026databricks.AlertArgs{\n\t\t\tQueryId:     this.ID(),\n\t\t\tDisplayName: pulumi.String(\"TF new alert\"),\n\t\t\tParentPath:  sharedDir.Path,\n\t\t\tCondition: \u0026databricks.AlertConditionArgs{\n\t\t\t\tOp: pulumi.String(\"GREATER_THAN\"),\n\t\t\t\tOperand: \u0026databricks.AlertConditionOperandArgs{\n\t\t\t\t\tColumn: \u0026databricks.AlertConditionOperandColumnArgs{\n\t\t\t\t\t\tName: pulumi.String(\"value\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tThreshold: \u0026databricks.AlertConditionThresholdArgs{\n\t\t\t\t\tValue: \u0026databricks.AlertConditionThresholdValueArgs{\n\t\t\t\t\t\tDoubleValue: pulumi.Float64(42),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.Query;\nimport com.pulumi.databricks.QueryArgs;\nimport com.pulumi.databricks.Alert;\nimport com.pulumi.databricks.AlertArgs;\nimport com.pulumi.databricks.inputs.AlertConditionArgs;\nimport com.pulumi.databricks.inputs.AlertConditionOperandArgs;\nimport com.pulumi.databricks.inputs.AlertConditionOperandColumnArgs;\nimport com.pulumi.databricks.inputs.AlertConditionThresholdArgs;\nimport com.pulumi.databricks.inputs.AlertConditionThresholdValueArgs;\nimport 
java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sharedDir = new Directory(\"sharedDir\", DirectoryArgs.builder()\n            .path(\"/Shared/Queries\")\n            .build());\n\n        // This will be replaced with new databricks_query resource\n        var this_ = new Query(\"this\", QueryArgs.builder()\n            .warehouseId(example.id())\n            .displayName(\"My Query Name\")\n            .queryText(\"SELECT 42 as value\")\n            .parentPath(sharedDir.path())\n            .build());\n\n        var alert = new Alert(\"alert\", AlertArgs.builder()\n            .queryId(this_.id())\n            .displayName(\"TF new alert\")\n            .parentPath(sharedDir.path())\n            .condition(AlertConditionArgs.builder()\n                .op(\"GREATER_THAN\")\n                .operand(AlertConditionOperandArgs.builder()\n                    .column(AlertConditionOperandColumnArgs.builder()\n                        .name(\"value\")\n                        .build())\n                    .build())\n                .threshold(AlertConditionThresholdArgs.builder()\n                    .value(AlertConditionThresholdValueArgs.builder()\n                        .doubleValue(42.0)\n                        .build())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedDir:\n    type: databricks:Directory\n    name: shared_dir\n    properties:\n      path: /Shared/Queries\n  # This will be replaced with new databricks_query resource\n  this:\n    type: databricks:Query\n    properties:\n      warehouseId: ${example.id}\n      displayName: My Query Name\n      queryText: SELECT 42 as value\n      parentPath: ${sharedDir.path}\n  alert:\n    type: databricks:Alert\n    properties:\n      queryId: ${this.id}\n      displayName: TF new alert\n      parentPath: ${sharedDir.path}\n      condition:\n        op: GREATER_THAN\n        operand:\n          column:\n            name: value\n        threshold:\n          value:\n            doubleValue: 42\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Migrating from \u003cspan pulumi-lang-nodejs=\"`databricks.SqlAlert`\" pulumi-lang-dotnet=\"`databricks.SqlAlert`\" pulumi-lang-go=\"`SqlAlert`\" pulumi-lang-python=\"`SqlAlert`\" pulumi-lang-yaml=\"`databricks.SqlAlert`\" pulumi-lang-java=\"`databricks.SqlAlert`\"\u003e`databricks.SqlAlert`\u003c/span\u003e resource\n\nUnder the hood, the new resource uses the same data as the \u003cspan pulumi-lang-nodejs=\"`databricks.SqlAlert`\" pulumi-lang-dotnet=\"`databricks.SqlAlert`\" pulumi-lang-go=\"`SqlAlert`\" pulumi-lang-python=\"`SqlAlert`\" pulumi-lang-yaml=\"`databricks.SqlAlert`\" pulumi-lang-java=\"`databricks.SqlAlert`\"\u003e`databricks.SqlAlert`\u003c/span\u003e, but is exposed via a different API. This means that we can migrate existing alerts without recreating them.  
\n\n\u003e It's also recommended to migrate to the \u003cspan pulumi-lang-nodejs=\"`databricks.Query`\" pulumi-lang-dotnet=\"`databricks.Query`\" pulumi-lang-go=\"`Query`\" pulumi-lang-python=\"`Query`\" pulumi-lang-yaml=\"`databricks.Query`\" pulumi-lang-java=\"`databricks.Query`\"\u003e`databricks.Query`\u003c/span\u003e resource - see\u003cspan pulumi-lang-nodejs=\" databricks.Query \" pulumi-lang-dotnet=\" databricks.Query \" pulumi-lang-go=\" Query \" pulumi-lang-python=\" Query \" pulumi-lang-yaml=\" databricks.Query \" pulumi-lang-java=\" databricks.Query \"\u003e databricks.Query \u003c/span\u003efor more details.\n\nThis operation is done in few steps:\n\n* Record the ID of existing \u003cspan pulumi-lang-nodejs=\"`databricks.SqlAlert`\" pulumi-lang-dotnet=\"`databricks.SqlAlert`\" pulumi-lang-go=\"`SqlAlert`\" pulumi-lang-python=\"`SqlAlert`\" pulumi-lang-yaml=\"`databricks.SqlAlert`\" pulumi-lang-java=\"`databricks.SqlAlert`\"\u003e`databricks.SqlAlert`\u003c/span\u003e, for example, by executing the `terraform state show databricks_sql_alert.alert` command.\n* Create the code for the new implementation by performing the following changes:\n  * the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e attribute is now named \u003cspan pulumi-lang-nodejs=\"`displayName`\" pulumi-lang-dotnet=\"`DisplayName`\" pulumi-lang-go=\"`displayName`\" pulumi-lang-python=\"`display_name`\" pulumi-lang-yaml=\"`displayName`\" pulumi-lang-java=\"`displayName`\"\u003e`display_name`\u003c/span\u003e\n  * the \u003cspan pulumi-lang-nodejs=\"`parent`\" pulumi-lang-dotnet=\"`Parent`\" pulumi-lang-go=\"`parent`\" pulumi-lang-python=\"`parent`\" pulumi-lang-yaml=\"`parent`\" pulumi-lang-java=\"`parent`\"\u003e`parent`\u003c/span\u003e (if exists) is renamed to \u003cspan pulumi-lang-nodejs=\"`parentPath`\" pulumi-lang-dotnet=\"`ParentPath`\" pulumi-lang-go=\"`parentPath`\" pulumi-lang-python=\"`parent_path`\" pulumi-lang-yaml=\"`parentPath`\" pulumi-lang-java=\"`parentPath`\"\u003e`parent_path`\u003c/span\u003e attribute and should be converted from `folders/object_id` to the actual path.\n  * the \u003cspan pulumi-lang-nodejs=\"`options`\" pulumi-lang-dotnet=\"`Options`\" pulumi-lang-go=\"`options`\" pulumi-lang-python=\"`options`\" pulumi-lang-yaml=\"`options`\" pulumi-lang-java=\"`options`\"\u003e`options`\u003c/span\u003e block is converted into the \u003cspan pulumi-lang-nodejs=\"`condition`\" pulumi-lang-dotnet=\"`Condition`\" pulumi-lang-go=\"`condition`\" pulumi-lang-python=\"`condition`\" pulumi-lang-yaml=\"`condition`\" pulumi-lang-java=\"`condition`\"\u003e`condition`\u003c/span\u003e block with the following changes:\n    * the value of the \u003cspan pulumi-lang-nodejs=\"`op`\" pulumi-lang-dotnet=\"`Op`\" pulumi-lang-go=\"`op`\" pulumi-lang-python=\"`op`\" pulumi-lang-yaml=\"`op`\" pulumi-lang-java=\"`op`\"\u003e`op`\u003c/span\u003e attribute should be converted from a mathematical operator into a string name, like, `\u003e` is becoming `GREATER_THAN`, `==` is becoming `EQUAL`, etc.\n    * the \u003cspan pulumi-lang-nodejs=\"`column`\" pulumi-lang-dotnet=\"`Column`\" pulumi-lang-go=\"`column`\" pulumi-lang-python=\"`column`\" pulumi-lang-yaml=\"`column`\" pulumi-lang-java=\"`column`\"\u003e`column`\u003c/span\u003e attribute is becoming the \u003cspan pulumi-lang-nodejs=\"`operand`\" pulumi-lang-dotnet=\"`Operand`\" 
pulumi-lang-go=\"`operand`\" pulumi-lang-python=\"`operand`\" pulumi-lang-yaml=\"`operand`\" pulumi-lang-java=\"`operand`\"\u003e`operand`\u003c/span\u003e block\n    * the \u003cspan pulumi-lang-nodejs=\"`value`\" pulumi-lang-dotnet=\"`Value`\" pulumi-lang-go=\"`value`\" pulumi-lang-python=\"`value`\" pulumi-lang-yaml=\"`value`\" pulumi-lang-java=\"`value`\"\u003e`value`\u003c/span\u003e attribute is becoming the \u003cspan pulumi-lang-nodejs=\"`threshold`\" pulumi-lang-dotnet=\"`Threshold`\" pulumi-lang-go=\"`threshold`\" pulumi-lang-python=\"`threshold`\" pulumi-lang-yaml=\"`threshold`\" pulumi-lang-java=\"`threshold`\"\u003e`threshold`\u003c/span\u003e block.  **Please note that the old implementation always used strings so you may have changes after import if you use \u003cspan pulumi-lang-nodejs=\"`doubleValue`\" pulumi-lang-dotnet=\"`DoubleValue`\" pulumi-lang-go=\"`doubleValue`\" pulumi-lang-python=\"`double_value`\" pulumi-lang-yaml=\"`doubleValue`\" pulumi-lang-java=\"`doubleValue`\"\u003e`double_value`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`boolValue`\" pulumi-lang-dotnet=\"`BoolValue`\" pulumi-lang-go=\"`boolValue`\" pulumi-lang-python=\"`bool_value`\" pulumi-lang-yaml=\"`boolValue`\" pulumi-lang-java=\"`boolValue`\"\u003e`bool_value`\u003c/span\u003e inside the block.**\n  * the \u003cspan pulumi-lang-nodejs=\"`rearm`\" pulumi-lang-dotnet=\"`Rearm`\" pulumi-lang-go=\"`rearm`\" pulumi-lang-python=\"`rearm`\" pulumi-lang-yaml=\"`rearm`\" pulumi-lang-java=\"`rearm`\"\u003e`rearm`\u003c/span\u003e attribute is renamed to \u003cspan pulumi-lang-nodejs=\"`secondsToRetrigger`\" pulumi-lang-dotnet=\"`SecondsToRetrigger`\" pulumi-lang-go=\"`secondsToRetrigger`\" pulumi-lang-python=\"`seconds_to_retrigger`\" pulumi-lang-yaml=\"`secondsToRetrigger`\" pulumi-lang-java=\"`secondsToRetrigger`\"\u003e`seconds_to_retrigger`\u003c/span\u003e.\n  \nFor example, if we have the original \u003cspan pulumi-lang-nodejs=\"`databricks.SqlAlert`\" pulumi-lang-dotnet=\"`databricks.SqlAlert`\" pulumi-lang-go=\"`SqlAlert`\" pulumi-lang-python=\"`SqlAlert`\" pulumi-lang-yaml=\"`databricks.SqlAlert`\" pulumi-lang-java=\"`databricks.SqlAlert`\"\u003e`databricks.SqlAlert`\u003c/span\u003e defined as:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst alert = new databricks.SqlAlert(\"alert\", {\n    queryId: _this.id,\n    name: \"My Alert\",\n    parent: `folders/${sharedDir.objectId}`,\n    options: {\n        column: \"value\",\n        op: \"\u003e\",\n        value: \"42\",\n        muted: false,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nalert = databricks.SqlAlert(\"alert\",\n    query_id=this[\"id\"],\n    name=\"My Alert\",\n    parent=f\"folders/{shared_dir['objectId']}\",\n    options={\n        \"column\": \"value\",\n        \"op\": \"\u003e\",\n        \"value\": \"42\",\n        \"muted\": False,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var alert = new Databricks.SqlAlert(\"alert\", new()\n    {\n        QueryId = @this.Id,\n        Name = \"My Alert\",\n        Parent = $\"folders/{sharedDir.ObjectId}\",\n        Options = new Databricks.Inputs.SqlAlertOptionsArgs\n        {\n            Column = \"value\",\n            Op = \"\u003e\",\n            Value = \"42\",\n            
Muted = false,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlAlert(ctx, \"alert\", \u0026databricks.SqlAlertArgs{\n\t\t\tQueryId: pulumi.Any(this.Id),\n\t\t\tName:    pulumi.String(\"My Alert\"),\n\t\t\tParent:  pulumi.Sprintf(\"folders/%v\", sharedDir.ObjectId),\n\t\t\tOptions: \u0026databricks.SqlAlertOptionsArgs{\n\t\t\t\tColumn: pulumi.String(\"value\"),\n\t\t\t\tOp:     pulumi.String(\"\u003e\"),\n\t\t\t\tValue:  pulumi.String(\"42\"),\n\t\t\t\tMuted:  pulumi.Bool(false),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlAlert;\nimport com.pulumi.databricks.SqlAlertArgs;\nimport com.pulumi.databricks.inputs.SqlAlertOptionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var alert = new SqlAlert(\"alert\", SqlAlertArgs.builder()\n            .queryId(this_.id())\n            .name(\"My Alert\")\n            .parent(String.format(\"folders/%s\", sharedDir.objectId()))\n            .options(SqlAlertOptionsArgs.builder()\n                .column(\"value\")\n                .op(\"\u003e\")\n                .value(\"42\")\n                .muted(false)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  alert:\n    type: databricks:SqlAlert\n    properties:\n      queryId: ${this.id}\n      name: My Alert\n      parent: folders/${sharedDir.objectId}\n      options:\n        column: value\n        op: '\u003e'\n        value: '42'\n        muted: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nwe'll have a new resource defined as:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst alert = new databricks.Alert(\"alert\", {\n    queryId: _this.id,\n    displayName: \"My Alert\",\n    parentPath: sharedDir.path,\n    condition: {\n        op: \"GREATER_THAN\",\n        operand: {\n            column: {\n                name: \"value\",\n            },\n        },\n        threshold: {\n            value: {\n                doubleValue: 42,\n            },\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nalert = databricks.Alert(\"alert\",\n    query_id=this[\"id\"],\n    display_name=\"My Alert\",\n    parent_path=shared_dir[\"path\"],\n    condition={\n        \"op\": \"GREATER_THAN\",\n        \"operand\": {\n            \"column\": {\n                \"name\": \"value\",\n            },\n        },\n        \"threshold\": {\n            \"value\": {\n                \"double_value\": 42,\n            },\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var alert = new Databricks.Alert(\"alert\", new()\n    {\n        QueryId = @this.Id,\n 
       DisplayName = \"My Alert\",\n        ParentPath = sharedDir.Path,\n        Condition = new Databricks.Inputs.AlertConditionArgs\n        {\n            Op = \"GREATER_THAN\",\n            Operand = new Databricks.Inputs.AlertConditionOperandArgs\n            {\n                Column = new Databricks.Inputs.AlertConditionOperandColumnArgs\n                {\n                    Name = \"value\",\n                },\n            },\n            Threshold = new Databricks.Inputs.AlertConditionThresholdArgs\n            {\n                Value = new Databricks.Inputs.AlertConditionThresholdValueArgs\n                {\n                    DoubleValue = 42,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAlert(ctx, \"alert\", \u0026databricks.AlertArgs{\n\t\t\tQueryId:     pulumi.Any(this.Id),\n\t\t\tDisplayName: pulumi.String(\"My Alert\"),\n\t\t\tParentPath:  pulumi.Any(sharedDir.Path),\n\t\t\tCondition: \u0026databricks.AlertConditionArgs{\n\t\t\t\tOp: pulumi.String(\"GREATER_THAN\"),\n\t\t\t\tOperand: \u0026databricks.AlertConditionOperandArgs{\n\t\t\t\t\tColumn: \u0026databricks.AlertConditionOperandColumnArgs{\n\t\t\t\t\t\tName: pulumi.String(\"value\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tThreshold: \u0026databricks.AlertConditionThresholdArgs{\n\t\t\t\t\tValue: \u0026databricks.AlertConditionThresholdValueArgs{\n\t\t\t\t\t\tDoubleValue: pulumi.Float64(42),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Alert;\nimport com.pulumi.databricks.AlertArgs;\nimport com.pulumi.databricks.inputs.AlertConditionArgs;\nimport com.pulumi.databricks.inputs.AlertConditionOperandArgs;\nimport com.pulumi.databricks.inputs.AlertConditionOperandColumnArgs;\nimport com.pulumi.databricks.inputs.AlertConditionThresholdArgs;\nimport com.pulumi.databricks.inputs.AlertConditionThresholdValueArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var alert = new Alert(\"alert\", AlertArgs.builder()\n            .queryId(this_.id())\n            .displayName(\"My Alert\")\n            .parentPath(sharedDir.path())\n            .condition(AlertConditionArgs.builder()\n                .op(\"GREATER_THAN\")\n                .operand(AlertConditionOperandArgs.builder()\n                    .column(AlertConditionOperandColumnArgs.builder()\n                        .name(\"value\")\n                        .build())\n                    .build())\n                .threshold(AlertConditionThresholdArgs.builder()\n                    .value(AlertConditionThresholdValueArgs.builder()\n                        .doubleValue(42.0)\n                        .build())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  alert:\n    type: databricks:Alert\n    properties:\n      queryId: ${this.id}\n      
displayName: My Alert\n      parentPath: ${sharedDir.path}\n      condition:\n        op: GREATER_THAN\n        operand:\n          column:\n            name: value\n        threshold:\n          value:\n            doubleValue: 42\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\u003cspan pulumi-lang-nodejs=\"\ndatabricks.Permissions \" pulumi-lang-dotnet=\"\ndatabricks.Permissions \" pulumi-lang-go=\"\nPermissions \" pulumi-lang-python=\"\nPermissions \" pulumi-lang-yaml=\"\ndatabricks.Permissions \" pulumi-lang-java=\"\ndatabricks.Permissions \"\u003e\ndatabricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Edit*, *Run* or *View* individual alerts.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst alertUsage = new databricks.Permissions(\"alert_usage\", {\n    sqlAlertId: alert.id,\n    accessControls: [{\n        groupName: \"users\",\n        permissionLevel: \"CAN_RUN\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nalert_usage = databricks.Permissions(\"alert_usage\",\n    sql_alert_id=alert[\"id\"],\n    access_controls=[{\n        \"group_name\": \"users\",\n        \"permission_level\": \"CAN_RUN\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var alertUsage = new Databricks.Permissions(\"alert_usage\", new()\n    {\n        SqlAlertId = alert.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_RUN\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissions(ctx, \"alert_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlAlertId: pulumi.Any(alert.Id),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var alertUsage = new Permissions(\"alertUsage\", PermissionsArgs.builder()\n            .sqlAlertId(alert.id())\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .groupName(\"users\")\n                .permissionLevel(\"CAN_RUN\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  
alertUsage:\n    type: 
databricks:Permissions\n    name: alert_usage\n    properties:\n      sqlAlertId: ${alert.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_RUN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Query \" pulumi-lang-dotnet=\" databricks.Query \" pulumi-lang-go=\" Query \" pulumi-lang-python=\" Query \" pulumi-lang-yaml=\" databricks.Query \" pulumi-lang-java=\" databricks.Query \"\u003e databricks.Query \u003c/span\u003eto manage [Databricks SQL Queries](https://docs.databricks.com/sql/user/queries/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage [Databricks SQL Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workpace](https://docs.databricks.com/workspace/workspace-objects.html).\n\n","properties":{"condition":{"$ref":"#/types/databricks:index/AlertCondition:AlertCondition","description":"Trigger conditions of the alert. Block consists of the following attributes:\n"},"createTime":{"type":"string","description":"The timestamp string indicating when the alert was created.\n"},"customBody":{"type":"string","description":"Custom body of alert notification, if it exists. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"customSubject":{"type":"string","description":"Custom subject of alert notification, if it exists. This includes email subject, Slack notification header, etc. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"displayName":{"type":"string","description":"Name of the alert.\n"},"lifecycleState":{"type":"string","description":"The workspace state of the alert. Used for tracking trashed status. (Possible values are `ACTIVE` or `TRASHED`).\n"},"notifyOnOk":{"type":"boolean","description":"Whether to notify alert subscribers when alert returns back to normal.\n"},"ownerUserName":{"type":"string","description":"Alert owner's username.\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the alert. The default is the user's home folder.  If changed, the alert will be recreated.\n"},"providerConfig":{"$ref":"#/types/databricks:index/AlertProviderConfig:AlertProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"secondsToRetrigger":{"type":"integer","description":"Number of seconds an alert must wait after being triggered to rearm itself. After rearming, it can be triggered again. 
If 0 or not specified, the alert will not be triggered again.\n"},"state":{"type":"string","description":"Current state of the alert's trigger status (`UNKNOWN`, `OK`, `TRIGGERED`). This field is set to `UNKNOWN` if the alert has not yet been evaluated or ran into an error during the last evaluation.\n"},"triggerTime":{"type":"string","description":"The timestamp string when the alert was last triggered if the alert has been triggered before.\n"},"updateTime":{"type":"string","description":"The timestamp string indicating when the alert was updated.\n"}},"required":["condition","createTime","displayName","lifecycleState","queryId","state","triggerTime","updateTime"],"inputProperties":{"condition":{"$ref":"#/types/databricks:index/AlertCondition:AlertCondition","description":"Trigger conditions of the alert. Block consists of the following attributes:\n"},"customBody":{"type":"string","description":"Custom body of alert notification, if it exists. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"customSubject":{"type":"string","description":"Custom subject of alert notification, if it exists. This includes email subject, Slack notification header, etc. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"displayName":{"type":"string","description":"Name of the alert.\n"},"notifyOnOk":{"type":"boolean","description":"Whether to notify alert subscribers when alert returns back to normal.\n"},"ownerUserName":{"type":"string","description":"Alert owner's username.\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the alert. The default is the user's home folder.  If changed, the alert will be recreated.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/AlertProviderConfig:AlertProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"secondsToRetrigger":{"type":"integer","description":"Number of seconds an alert must wait after being triggered to rearm itself. After rearming, it can be triggered again. If 0 or not specified, the alert will not be triggered again.\n"}},"requiredInputs":["condition","displayName","queryId"],"stateInputs":{"description":"Input properties used for looking up and filtering Alert resources.\n","properties":{"condition":{"$ref":"#/types/databricks:index/AlertCondition:AlertCondition","description":"Trigger conditions of the alert. Block consists of the following attributes:\n"},"createTime":{"type":"string","description":"The timestamp string indicating when the alert was created.\n"},"customBody":{"type":"string","description":"Custom body of alert notification, if it exists. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"customSubject":{"type":"string","description":"Custom subject of alert notification, if it exists. This includes email subject, Slack notification header, etc. See [Alerts API reference](https://docs.databricks.com/en/sql/user/alerts/index.html) for custom templating instructions.\n"},"displayName":{"type":"string","description":"Name of the alert.\n"},"lifecycleState":{"type":"string","description":"The workspace state of the alert. Used for tracking trashed status. 
(Possible values are `ACTIVE` or `TRASHED`).\n"},"notifyOnOk":{"type":"boolean","description":"Whether to notify alert subscribers when alert returns back to normal.\n"},"ownerUserName":{"type":"string","description":"Alert owner's username.\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the alert. The default is the user's home folder.  If changed, the alert will be recreated.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/AlertProviderConfig:AlertProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"secondsToRetrigger":{"type":"integer","description":"Number of seconds an alert must wait after being triggered to rearm itself. After rearming, it can be triggered again. If 0 or not specified, the alert will not be triggered again.\n"},"state":{"type":"string","description":"Current state of the alert's trigger status (`UNKNOWN`, `OK`, `TRIGGERED`). This field is set to `UNKNOWN` if the alert has not yet been evaluated or ran into an error during the last evaluation.\n"},"triggerTime":{"type":"string","description":"The timestamp string when the alert was last triggered if the alert has been triggered before.\n"},"updateTime":{"type":"string","description":"The timestamp string indicating when the alert was updated.\n"}},"type":"object"}},"databricks:index/alertV2:AlertV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThe Alert v2 resource allows you to manage SQL alerts in Databricks SQL. Alerts monitor query results and notify you when specific conditions are met.\n\nAlerts run on a schedule and evaluate query results against defined thresholds. 
When an alert is triggered, notifications can be sent to specified users or destinations.\n\n## Example Usage\n\n### Basic Alert Example\nThis example creates a basic alert that monitors a query and sends notifications to a user when the value exceeds a threshold:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst basicAlert = new databricks.AlertV2(\"basic_alert\", {\n    displayName: \"High Error Rate Alert\",\n    queryText: \"SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\",\n    warehouseId: \"a7066a8ef796be84\",\n    parentPath: \"/Users/user@example.com\",\n    evaluation: {\n        source: {\n            name: \"error_count\",\n            display: \"Error Count\",\n            aggregation: \"COUNT\",\n        },\n        comparisonOperator: \"GREATER_THAN\",\n        threshold: {\n            value: {\n                doubleValue: 100,\n            },\n        },\n        emptyResultState: \"OK\",\n        notification: {\n            subscriptions: [{\n                userEmail: \"user@example.com\",\n            }],\n            notifyOnOk: true,\n        },\n    },\n    schedule: {\n        quartzCronSchedule: \"0 0/15 * * * ?\",\n        timezoneId: \"America/Los_Angeles\",\n        pauseStatus: \"UNPAUSED\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nbasic_alert = databricks.AlertV2(\"basic_alert\",\n    display_name=\"High Error Rate Alert\",\n    query_text=\"SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\",\n    warehouse_id=\"a7066a8ef796be84\",\n    parent_path=\"/Users/user@example.com\",\n    evaluation={\n        \"source\": {\n            \"name\": \"error_count\",\n            \"display\": \"Error Count\",\n            \"aggregation\": \"COUNT\",\n        },\n        \"comparison_operator\": \"GREATER_THAN\",\n        \"threshold\": {\n            \"value\": {\n                \"double_value\": 100,\n            },\n        },\n        \"empty_result_state\": \"OK\",\n        \"notification\": {\n            \"subscriptions\": [{\n                \"user_email\": \"user@example.com\",\n            }],\n            \"notify_on_ok\": True,\n        },\n    },\n    schedule={\n        \"quartz_cron_schedule\": \"0 0/15 * * * ?\",\n        \"timezone_id\": \"America/Los_Angeles\",\n        \"pause_status\": \"UNPAUSED\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var basicAlert = new Databricks.AlertV2(\"basic_alert\", new()\n    {\n        DisplayName = \"High Error Rate Alert\",\n        QueryText = \"SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\",\n        WarehouseId = \"a7066a8ef796be84\",\n        ParentPath = \"/Users/user@example.com\",\n        Evaluation = new Databricks.Inputs.AlertV2EvaluationArgs\n        {\n            Source = new Databricks.Inputs.AlertV2EvaluationSourceArgs\n            {\n                Name = \"error_count\",\n                Display = \"Error Count\",\n                Aggregation = \"COUNT\",\n            },\n            ComparisonOperator = \"GREATER_THAN\",\n            Threshold = new Databricks.Inputs.AlertV2EvaluationThresholdArgs\n            
{\n                Value = new Databricks.Inputs.AlertV2EvaluationThresholdValueArgs\n                {\n                    DoubleValue = 100,\n                },\n            },\n            EmptyResultState = \"OK\",\n            Notification = new Databricks.Inputs.AlertV2EvaluationNotificationArgs\n            {\n                Subscriptions = new[]\n                {\n                    new Databricks.Inputs.AlertV2EvaluationNotificationSubscriptionArgs\n                    {\n                        UserEmail = \"user@example.com\",\n                    },\n                },\n                NotifyOnOk = true,\n            },\n        },\n        Schedule = new Databricks.Inputs.AlertV2ScheduleArgs\n        {\n            QuartzCronSchedule = \"0 0/15 * * * ?\",\n            TimezoneId = \"America/Los_Angeles\",\n            PauseStatus = \"UNPAUSED\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAlertV2(ctx, \"basic_alert\", \u0026databricks.AlertV2Args{\n\t\t\tDisplayName: pulumi.String(\"High Error Rate Alert\"),\n\t\t\tQueryText:   pulumi.String(\"SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\"),\n\t\t\tWarehouseId: pulumi.String(\"a7066a8ef796be84\"),\n\t\t\tParentPath:  pulumi.String(\"/Users/user@example.com\"),\n\t\t\tEvaluation: \u0026databricks.AlertV2EvaluationArgs{\n\t\t\t\tSource: \u0026databricks.AlertV2EvaluationSourceArgs{\n\t\t\t\t\tName:        pulumi.String(\"error_count\"),\n\t\t\t\t\tDisplay:     pulumi.String(\"Error Count\"),\n\t\t\t\t\tAggregation: pulumi.String(\"COUNT\"),\n\t\t\t\t},\n\t\t\t\tComparisonOperator: pulumi.String(\"GREATER_THAN\"),\n\t\t\t\tThreshold: \u0026databricks.AlertV2EvaluationThresholdArgs{\n\t\t\t\t\tValue: \u0026databricks.AlertV2EvaluationThresholdValueArgs{\n\t\t\t\t\t\tDoubleValue: pulumi.Float64(100),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tEmptyResultState: pulumi.String(\"OK\"),\n\t\t\t\tNotification: \u0026databricks.AlertV2EvaluationNotificationArgs{\n\t\t\t\t\tSubscriptions: databricks.AlertV2EvaluationNotificationSubscriptionArray{\n\t\t\t\t\t\t\u0026databricks.AlertV2EvaluationNotificationSubscriptionArgs{\n\t\t\t\t\t\t\tUserEmail: pulumi.String(\"user@example.com\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tNotifyOnOk: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t\tSchedule: \u0026databricks.AlertV2ScheduleArgs{\n\t\t\t\tQuartzCronSchedule: pulumi.String(\"0 0/15 * * * ?\"),\n\t\t\t\tTimezoneId:         pulumi.String(\"America/Los_Angeles\"),\n\t\t\t\tPauseStatus:        pulumi.String(\"UNPAUSED\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AlertV2;\nimport com.pulumi.databricks.AlertV2Args;\nimport com.pulumi.databricks.inputs.AlertV2EvaluationArgs;\nimport com.pulumi.databricks.inputs.AlertV2EvaluationSourceArgs;\nimport com.pulumi.databricks.inputs.AlertV2EvaluationThresholdArgs;\nimport com.pulumi.databricks.inputs.AlertV2EvaluationThresholdValueArgs;\nimport com.pulumi.databricks.inputs.AlertV2EvaluationNotificationArgs;\nimport com.pulumi.databricks.inputs.AlertV2ScheduleArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var basicAlert = new AlertV2(\"basicAlert\", AlertV2Args.builder()\n            .displayName(\"High Error Rate Alert\")\n            .queryText(\"SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\")\n            .warehouseId(\"a7066a8ef796be84\")\n            .parentPath(\"/Users/user@example.com\")\n            .evaluation(AlertV2EvaluationArgs.builder()\n                .source(AlertV2EvaluationSourceArgs.builder()\n                    .name(\"error_count\")\n                    .display(\"Error Count\")\n                    .aggregation(\"COUNT\")\n                    .build())\n                .comparisonOperator(\"GREATER_THAN\")\n                .threshold(AlertV2EvaluationThresholdArgs.builder()\n                    .value(AlertV2EvaluationThresholdValueArgs.builder()\n                        .doubleValue(100.0)\n                        .build())\n                    .build())\n                .emptyResultState(\"OK\")\n                .notification(AlertV2EvaluationNotificationArgs.builder()\n                    .subscriptions(AlertV2EvaluationNotificationSubscriptionArgs.builder()\n                        .userEmail(\"user@example.com\")\n                        .build())\n                    .notifyOnOk(true)\n                    .build())\n                .build())\n            .schedule(AlertV2ScheduleArgs.builder()\n                .quartzCronSchedule(\"0 0/15 * * * ?\")\n                .timezoneId(\"America/Los_Angeles\")\n                .pauseStatus(\"UNPAUSED\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  basicAlert:\n    type: databricks:AlertV2\n    name: basic_alert\n    properties:\n      displayName: High Error Rate Alert\n      queryText: SELECT count(*) as error_count FROM logs WHERE level = 'ERROR' AND timestamp \u003e now() - interval 1 hour\n      warehouseId: a7066a8ef796be84\n      parentPath: /Users/user@example.com\n      evaluation:\n        source:\n          name: error_count\n          display: Error Count\n          aggregation: COUNT\n        comparisonOperator: GREATER_THAN\n        threshold:\n          value:\n            doubleValue: 100\n        emptyResultState: OK\n        notification:\n          subscriptions:\n            - userEmail: user@example.com\n          notifyOnOk: true\n      schedule:\n        quartzCronSchedule: 0 0/15 * * * ?\n        timezoneId: America/Los_Angeles\n        pauseStatus: UNPAUSED\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was created\n"},"customDescription":{"type":"string","description":"Custom description for the alert. support mustache template\n"},"customSummary":{"type":"string","description":"Custom summary for the alert. 
support mustache template\n"},"displayName":{"type":"string","description":"The display name of the alert\n"},"effectiveRunAs":{"$ref":"#/types/databricks:index/AlertV2EffectiveRunAs:AlertV2EffectiveRunAs","description":"(AlertV2RunAs) - The actual identity that will be used to execute the alert.\nThis is an output-only field that shows the resolved run-as identity after applying\npermissions and defaults\n"},"evaluation":{"$ref":"#/types/databricks:index/AlertV2Evaluation:AlertV2Evaluation"},"lifecycleState":{"type":"string","description":"(string) - Indicates whether the query is trashed. Possible values are: `ACTIVE`, `DELETED`\n"},"ownerUserName":{"type":"string","description":"(string) - The owner's username. This field is set to \"Unavailable\" if the user has been deleted\n"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the alert. Can only be set on create, and cannot be updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/AlertV2ProviderConfig:AlertV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"queryText":{"type":"string","description":"Text of the query to be run\n"},"runAs":{"$ref":"#/types/databricks:index/AlertV2RunAs:AlertV2RunAs","description":"Specifies the identity that will be used to run the alert.\nThis field allows you to configure alerts to run as a specific user or service principal.\n- For user identity: Set \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e to the email of an active workspace user. Users can only set this to their own email.\n- For service principal: Set \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e to the application ID. Requires the `servicePrincipal/user` role.\nIf not specified, the alert will run as the request user\n"},"runAsUserName":{"type":"string","description":"The run as username or application ID of service principal.\nOn Create and Update, this field can be set to application ID of an active service principal. Setting this field requires the servicePrincipal/user role.\nDeprecated: Use \u003cspan pulumi-lang-nodejs=\"`runAs`\" pulumi-lang-dotnet=\"`RunAs`\" pulumi-lang-go=\"`runAs`\" pulumi-lang-python=\"`run_as`\" pulumi-lang-yaml=\"`runAs`\" pulumi-lang-java=\"`runAs`\"\u003e`run_as`\u003c/span\u003e field instead. This field will be removed in a future release\n"},"schedule":{"$ref":"#/types/databricks:index/AlertV2Schedule:AlertV2Schedule"},"updateTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was updated\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse attached to the alert\n"}},"required":["createTime","displayName","effectiveRunAs","evaluation","lifecycleState","ownerUserName","queryText","schedule","updateTime","warehouseId"],"inputProperties":{"customDescription":{"type":"string","description":"Custom description for the alert. 
support mustache template\n"},"customSummary":{"type":"string","description":"Custom summary for the alert. support mustache template\n"},"displayName":{"type":"string","description":"The display name of the alert\n"},"evaluation":{"$ref":"#/types/databricks:index/AlertV2Evaluation:AlertV2Evaluation"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the alert. Can only be set on create, and cannot be updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/AlertV2ProviderConfig:AlertV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"queryText":{"type":"string","description":"Text of the query to be run\n"},"runAs":{"$ref":"#/types/databricks:index/AlertV2RunAs:AlertV2RunAs","description":"Specifies the identity that will be used to run the alert.\nThis field allows you to configure alerts to run as a specific user or service principal.\n- For user identity: Set \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e to the email of an active workspace user. Users can only set this to their own email.\n- For service principal: Set \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e to the application ID. Requires the `servicePrincipal/user` role.\nIf not specified, the alert will run as the request user\n"},"runAsUserName":{"type":"string","description":"The run as username or application ID of service principal.\nOn Create and Update, this field can be set to application ID of an active service principal. Setting this field requires the servicePrincipal/user role.\nDeprecated: Use \u003cspan pulumi-lang-nodejs=\"`runAs`\" pulumi-lang-dotnet=\"`RunAs`\" pulumi-lang-go=\"`runAs`\" pulumi-lang-python=\"`run_as`\" pulumi-lang-yaml=\"`runAs`\" pulumi-lang-java=\"`runAs`\"\u003e`run_as`\u003c/span\u003e field instead. This field will be removed in a future release\n"},"schedule":{"$ref":"#/types/databricks:index/AlertV2Schedule:AlertV2Schedule"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse attached to the alert\n"}},"requiredInputs":["displayName","evaluation","queryText","schedule","warehouseId"],"stateInputs":{"description":"Input properties used for looking up and filtering AlertV2 resources.\n","properties":{"createTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was created\n"},"customDescription":{"type":"string","description":"Custom description for the alert. support mustache template\n"},"customSummary":{"type":"string","description":"Custom summary for the alert. 
support mustache template\n"},"displayName":{"type":"string","description":"The display name of the alert\n"},"effectiveRunAs":{"$ref":"#/types/databricks:index/AlertV2EffectiveRunAs:AlertV2EffectiveRunAs","description":"(AlertV2RunAs) - The actual identity that will be used to execute the alert.\nThis is an output-only field that shows the resolved run-as identity after applying\npermissions and defaults\n"},"evaluation":{"$ref":"#/types/databricks:index/AlertV2Evaluation:AlertV2Evaluation"},"lifecycleState":{"type":"string","description":"(string) - Indicates whether the query is trashed. Possible values are: `ACTIVE`, `DELETED`\n"},"ownerUserName":{"type":"string","description":"(string) - The owner's username. This field is set to \"Unavailable\" if the user has been deleted\n"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the alert. Can only be set on create, and cannot be updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/AlertV2ProviderConfig:AlertV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"queryText":{"type":"string","description":"Text of the query to be run\n"},"runAs":{"$ref":"#/types/databricks:index/AlertV2RunAs:AlertV2RunAs","description":"Specifies the identity that will be used to run the alert.\nThis field allows you to configure alerts to run as a specific user or service principal.\n- For user identity: Set \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e to the email of an active workspace user. Users can only set this to their own email.\n- For service principal: Set \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e to the application ID. Requires the `servicePrincipal/user` role.\nIf not specified, the alert will run as the request user\n"},"runAsUserName":{"type":"string","description":"The run as username or application ID of service principal.\nOn Create and Update, this field can be set to application ID of an active service principal. Setting this field requires the servicePrincipal/user role.\nDeprecated: Use \u003cspan pulumi-lang-nodejs=\"`runAs`\" pulumi-lang-dotnet=\"`RunAs`\" pulumi-lang-go=\"`runAs`\" pulumi-lang-python=\"`run_as`\" pulumi-lang-yaml=\"`runAs`\" pulumi-lang-java=\"`runAs`\"\u003e`run_as`\u003c/span\u003e field instead. This field will be removed in a future release\n"},"schedule":{"$ref":"#/types/databricks:index/AlertV2Schedule:AlertV2Schedule"},"updateTime":{"type":"string","description":"(string) - The timestamp indicating when the alert was updated\n"},"warehouseId":{"type":"string","description":"ID of the SQL warehouse attached to the alert\n"}},"type":"object"}},"databricks:index/app:App":{"description":"[Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html) run directly on a customer's Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on. 
This resource creates the application but does not handle app deployment, which should be handled separately as part of your CI/CD pipeline.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.App(\"this\", {\n    name: \"my-custom-app\",\n    description: \"My app\",\n    resources: [\n        {\n            name: \"sql-warehouse\",\n            sqlWarehouse: {\n                id: \"e9ca293f79a74b5c\",\n                permission: \"CAN_MANAGE\",\n            },\n        },\n        {\n            name: \"serving-endpoint\",\n            servingEndpoint: {\n                name: \"databricks-meta-llama-3-1-70b-instruct\",\n                permission: \"CAN_MANAGE\",\n            },\n        },\n        {\n            name: \"job\",\n            job: {\n                id: \"1234\",\n                permission: \"CAN_MANAGE\",\n            },\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.App(\"this\",\n    name=\"my-custom-app\",\n    description=\"My app\",\n    resources=[\n        {\n            \"name\": \"sql-warehouse\",\n            \"sql_warehouse\": {\n                \"id\": \"e9ca293f79a74b5c\",\n                \"permission\": \"CAN_MANAGE\",\n            },\n        },\n        {\n            \"name\": \"serving-endpoint\",\n            \"serving_endpoint\": {\n                \"name\": \"databricks-meta-llama-3-1-70b-instruct\",\n                \"permission\": \"CAN_MANAGE\",\n            },\n        },\n        {\n            \"name\": \"job\",\n            \"job\": {\n                \"id\": \"1234\",\n                \"permission\": \"CAN_MANAGE\",\n            },\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.App(\"this\", new()\n    {\n        Name = \"my-custom-app\",\n        Description = \"My app\",\n        Resources = new[]\n        {\n            new Databricks.Inputs.AppResourceArgs\n            {\n                Name = \"sql-warehouse\",\n                SqlWarehouse = new Databricks.Inputs.AppResourceSqlWarehouseArgs\n                {\n                    Id = \"e9ca293f79a74b5c\",\n                    Permission = \"CAN_MANAGE\",\n                },\n            },\n            new Databricks.Inputs.AppResourceArgs\n            {\n                Name = \"serving-endpoint\",\n                ServingEndpoint = new Databricks.Inputs.AppResourceServingEndpointArgs\n                {\n                    Name = \"databricks-meta-llama-3-1-70b-instruct\",\n                    Permission = \"CAN_MANAGE\",\n                },\n            },\n            new Databricks.Inputs.AppResourceArgs\n            {\n                Name = \"job\",\n                Job = new Databricks.Inputs.AppResourceJobArgs\n                {\n                    Id = \"1234\",\n                    Permission = \"CAN_MANAGE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, 
err := databricks.NewApp(ctx, \"this\", \u0026databricks.AppArgs{\n\t\t\tName:        pulumi.String(\"my-custom-app\"),\n\t\t\tDescription: pulumi.String(\"My app\"),\n\t\t\tResources: databricks.AppResourceArray{\n\t\t\t\t\u0026databricks.AppResourceArgs{\n\t\t\t\t\tName: pulumi.String(\"sql-warehouse\"),\n\t\t\t\t\tSqlWarehouse: \u0026databricks.AppResourceSqlWarehouseArgs{\n\t\t\t\t\t\tId:         pulumi.String(\"e9ca293f79a74b5c\"),\n\t\t\t\t\t\tPermission: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AppResourceArgs{\n\t\t\t\t\tName: pulumi.String(\"serving-endpoint\"),\n\t\t\t\t\tServingEndpoint: \u0026databricks.AppResourceServingEndpointArgs{\n\t\t\t\t\t\tName:       pulumi.String(\"databricks-meta-llama-3-1-70b-instruct\"),\n\t\t\t\t\t\tPermission: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.AppResourceArgs{\n\t\t\t\t\tName: pulumi.String(\"job\"),\n\t\t\t\t\tJob: \u0026databricks.AppResourceJobArgs{\n\t\t\t\t\t\tId:         pulumi.String(\"1234\"),\n\t\t\t\t\t\tPermission: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.App;\nimport com.pulumi.databricks.AppArgs;\nimport com.pulumi.databricks.inputs.AppResourceArgs;\nimport com.pulumi.databricks.inputs.AppResourceSqlWarehouseArgs;\nimport com.pulumi.databricks.inputs.AppResourceServingEndpointArgs;\nimport com.pulumi.databricks.inputs.AppResourceJobArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new App(\"this\", AppArgs.builder()\n            .name(\"my-custom-app\")\n            .description(\"My app\")\n            .resources(            \n                AppResourceArgs.builder()\n                    .name(\"sql-warehouse\")\n                    .sqlWarehouse(AppResourceSqlWarehouseArgs.builder()\n                        .id(\"e9ca293f79a74b5c\")\n                        .permission(\"CAN_MANAGE\")\n                        .build())\n                    .build(),\n                AppResourceArgs.builder()\n                    .name(\"serving-endpoint\")\n                    .servingEndpoint(AppResourceServingEndpointArgs.builder()\n                        .name(\"databricks-meta-llama-3-1-70b-instruct\")\n                        .permission(\"CAN_MANAGE\")\n                        .build())\n                    .build(),\n                AppResourceArgs.builder()\n                    .name(\"job\")\n                    .job(AppResourceJobArgs.builder()\n                        .id(\"1234\")\n                        .permission(\"CAN_MANAGE\")\n                        .build())\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:App\n    properties:\n      name: my-custom-app\n      description: My app\n      resources:\n        - name: sql-warehouse\n          sqlWarehouse:\n            id: e9ca293f79a74b5c\n            permission: CAN_MANAGE\n        - name: serving-endpoint\n          servingEndpoint:\n            name: 
databricks-meta-llama-3-1-70b-instruct\n            permission: CAN_MANAGE\n        - name: job\n          job:\n            id: '1234'\n            permission: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code.\n\n","properties":{"activeDeployment":{"$ref":"#/types/databricks:index/AppActiveDeployment:AppActiveDeployment"},"appStatus":{"$ref":"#/types/databricks:index/AppAppStatus:AppAppStatus","description":"attribute\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"computeSize":{"type":"string","description":"A string specifying compute size for the App. Possible values are `MEDIUM`, `LARGE`.\n"},"computeStatus":{"$ref":"#/types/databricks:index/AppComputeStatus:AppComputeStatus","description":"attribute\n"},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"defaultSourceCodePath":{"type":"string","description":"The default workspace file system path of the source code from which app deployment are created. This field tracks the workspace source code path of the last active deployment.\n"},"description":{"type":"string","description":"The description of the app.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"effectiveUsagePolicyId":{"type":"string"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of effective api scopes granted to the user access token.\n"},"gitRepository":{"$ref":"#/types/databricks:index/AppGitRepository:AppGitRepository"},"name":{"type":"string","description":"The name of the app. The name must contain only lowercase alphanumeric characters and hyphens. 
It must be unique within the workspace.\n"},"noCompute":{"type":"boolean"},"oauth2AppClientId":{"type":"string"},"oauth2AppIntegrationId":{"type":"string"},"pendingDeployment":{"$ref":"#/types/databricks:index/AppPendingDeployment:AppPendingDeployment"},"providerConfig":{"$ref":"#/types/databricks:index/AppProviderConfig:AppProviderConfig"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppResource:AppResource"},"description":"A list of resources that the app have access to.\n"},"servicePrincipalClientId":{"type":"string","description":"client_id (application_id) of the app service principal\n"},"servicePrincipalId":{"type":"integer","description":"id of the app service principal\n"},"servicePrincipalName":{"type":"string","description":"name of the app service principal\n"},"space":{"type":"string"},"updateTime":{"type":"string","description":"The update time of the app.\n"},"updater":{"type":"string","description":"The email of the user that last updated the app.\n"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"},"usagePolicyId":{"type":"string"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of api scopes granted to the user access token.\n"}},"required":["activeDeployment","appStatus","computeSize","computeStatus","createTime","creator","defaultSourceCodePath","effectiveBudgetPolicyId","effectiveUsagePolicyId","effectiveUserApiScopes","name","oauth2AppClientId","oauth2AppIntegrationId","pendingDeployment","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","updateTime","updater","url"],"inputProperties":{"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"computeSize":{"type":"string","description":"A string specifying compute size for the App. Possible values are `MEDIUM`, `LARGE`.\n"},"description":{"type":"string","description":"The description of the app.\n"},"gitRepository":{"$ref":"#/types/databricks:index/AppGitRepository:AppGitRepository"},"name":{"type":"string","description":"The name of the app. The name must contain only lowercase alphanumeric characters and hyphens. It must be unique within the workspace.\n"},"noCompute":{"type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/AppProviderConfig:AppProviderConfig"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppResource:AppResource"},"description":"A list of resources that the app have access to.\n"},"space":{"type":"string"},"usagePolicyId":{"type":"string"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of api scopes granted to the user access token.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering App resources.\n","properties":{"activeDeployment":{"$ref":"#/types/databricks:index/AppActiveDeployment:AppActiveDeployment"},"appStatus":{"$ref":"#/types/databricks:index/AppAppStatus:AppAppStatus","description":"attribute\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"computeSize":{"type":"string","description":"A string specifying compute size for the App. 
Possible values are `MEDIUM`, `LARGE`.\n"},"computeStatus":{"$ref":"#/types/databricks:index/AppComputeStatus:AppComputeStatus","description":"attribute\n"},"createTime":{"type":"string","description":"The creation time of the app.\n"},"creator":{"type":"string","description":"The email of the user that created the app.\n"},"defaultSourceCodePath":{"type":"string","description":"The default workspace file system path of the source code from which app deployment are created. This field tracks the workspace source code path of the last active deployment.\n"},"description":{"type":"string","description":"The description of the app.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"effectiveUsagePolicyId":{"type":"string"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of effective api scopes granted to the user access token.\n"},"gitRepository":{"$ref":"#/types/databricks:index/AppGitRepository:AppGitRepository"},"name":{"type":"string","description":"The name of the app. The name must contain only lowercase alphanumeric characters and hyphens. It must be unique within the workspace.\n"},"noCompute":{"type":"boolean"},"oauth2AppClientId":{"type":"string"},"oauth2AppIntegrationId":{"type":"string"},"pendingDeployment":{"$ref":"#/types/databricks:index/AppPendingDeployment:AppPendingDeployment"},"providerConfig":{"$ref":"#/types/databricks:index/AppProviderConfig:AppProviderConfig"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppResource:AppResource"},"description":"A list of resources that the app have access to.\n"},"servicePrincipalClientId":{"type":"string","description":"client_id (application_id) of the app service principal\n"},"servicePrincipalId":{"type":"integer","description":"id of the app service principal\n"},"servicePrincipalName":{"type":"string","description":"name of the app service principal\n"},"space":{"type":"string"},"updateTime":{"type":"string","description":"The update time of the app.\n"},"updater":{"type":"string","description":"The email of the user that last updated the app.\n"},"url":{"type":"string","description":"The URL of the app once it is deployed.\n"},"usagePolicyId":{"type":"string"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"A list of api scopes granted to the user access token.\n"}},"type":"object"}},"databricks:index/appsSettingsCustomTemplate:AppsSettingsCustomTemplate":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nCustom App Templates store the metadata of custom app code hosted in an external Git repository, enabling users to reuse boilerplate code when creating apps.\n\n## Example Usage\n\n### Basic Example\n\nThis example creates a Custom Template in the workspace with the specified name.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.AppsSettingsCustomTemplate(\"this\", {\n    name: \"my-custom-template\",\n    description: \"A sample custom app template\",\n    gitRepo: \"https://github.com/example/repo.git\",\n    path: \"path-to-template\",\n    gitProvider: \"github\",\n    manifest: {\n        version: 1,\n        name: \"my-custom-app\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = 
databricks.AppsSettingsCustomTemplate(\"this\",\n    name=\"my-custom-template\",\n    description=\"A sample custom app template\",\n    git_repo=\"https://github.com/example/repo.git\",\n    path=\"path-to-template\",\n    git_provider=\"github\",\n    manifest={\n        \"version\": 1,\n        \"name\": \"my-custom-app\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.AppsSettingsCustomTemplate(\"this\", new()\n    {\n        Name = \"my-custom-template\",\n        Description = \"A sample custom app template\",\n        GitRepo = \"https://github.com/example/repo.git\",\n        Path = \"path-to-template\",\n        GitProvider = \"github\",\n        Manifest = new Databricks.Inputs.AppsSettingsCustomTemplateManifestArgs\n        {\n            Version = 1,\n            Name = \"my-custom-app\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAppsSettingsCustomTemplate(ctx, \"this\", \u0026databricks.AppsSettingsCustomTemplateArgs{\n\t\t\tName:        pulumi.String(\"my-custom-template\"),\n\t\t\tDescription: pulumi.String(\"A sample custom app template\"),\n\t\t\tGitRepo:     pulumi.String(\"https://github.com/example/repo.git\"),\n\t\t\tPath:        pulumi.String(\"path-to-template\"),\n\t\t\tGitProvider: pulumi.String(\"github\"),\n\t\t\tManifest: \u0026databricks.AppsSettingsCustomTemplateManifestArgs{\n\t\t\t\tVersion: pulumi.Int(1),\n\t\t\t\tName:    pulumi.String(\"my-custom-app\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AppsSettingsCustomTemplate;\nimport com.pulumi.databricks.AppsSettingsCustomTemplateArgs;\nimport com.pulumi.databricks.inputs.AppsSettingsCustomTemplateManifestArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new AppsSettingsCustomTemplate(\"this\", AppsSettingsCustomTemplateArgs.builder()\n            .name(\"my-custom-template\")\n            .description(\"A sample custom app template\")\n            .gitRepo(\"https://github.com/example/repo.git\")\n            .path(\"path-to-template\")\n            .gitProvider(\"github\")\n            .manifest(AppsSettingsCustomTemplateManifestArgs.builder()\n                .version(1)\n                .name(\"my-custom-app\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:AppsSettingsCustomTemplate\n    properties:\n      name: my-custom-template\n      description: A sample custom app template\n      gitRepo: https://github.com/example/repo.git\n      path: path-to-template\n      gitProvider: github\n      manifest:\n        version: 1\n        name: my-custom-app\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example with API Scopes\n\nThis example creates a 
custom template that declares required user API scopes.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst apiScopesExample = new databricks.AppsSettingsCustomTemplate(\"api_scopes_example\", {\n    name: \"my-api-template\",\n    description: \"A template that requests user API scopes\",\n    gitRepo: \"https://github.com/example/my-app.git\",\n    path: \"templates/app\",\n    gitProvider: \"github\",\n    manifest: {\n        version: 1,\n        name: \"my-databricks-app\",\n        description: \"This app requires the SQL API scope.\",\n        userApiScopes: [\"sql\"],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\napi_scopes_example = databricks.AppsSettingsCustomTemplate(\"api_scopes_example\",\n    name=\"my-api-template\",\n    description=\"A template that requests user API scopes\",\n    git_repo=\"https://github.com/example/my-app.git\",\n    path=\"templates/app\",\n    git_provider=\"github\",\n    manifest={\n        \"version\": 1,\n        \"name\": \"my-databricks-app\",\n        \"description\": \"This app requires the SQL API scope.\",\n        \"user_api_scopes\": [\"sql\"],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var apiScopesExample = new Databricks.AppsSettingsCustomTemplate(\"api_scopes_example\", new()\n    {\n        Name = \"my-api-template\",\n        Description = \"A template that requests user API scopes\",\n        GitRepo = \"https://github.com/example/my-app.git\",\n        Path = \"templates/app\",\n        GitProvider = \"github\",\n        Manifest = new Databricks.Inputs.AppsSettingsCustomTemplateManifestArgs\n        {\n            Version = 1,\n            Name = \"my-databricks-app\",\n            Description = \"This app requires the SQL API scope.\",\n            UserApiScopes = new[]\n            {\n                \"sql\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAppsSettingsCustomTemplate(ctx, \"api_scopes_example\", \u0026databricks.AppsSettingsCustomTemplateArgs{\n\t\t\tName:        pulumi.String(\"my-api-template\"),\n\t\t\tDescription: pulumi.String(\"A template that requests user API scopes\"),\n\t\t\tGitRepo:     pulumi.String(\"https://github.com/example/my-app.git\"),\n\t\t\tPath:        pulumi.String(\"templates/app\"),\n\t\t\tGitProvider: pulumi.String(\"github\"),\n\t\t\tManifest: \u0026databricks.AppsSettingsCustomTemplateManifestArgs{\n\t\t\t\tVersion:     pulumi.Int(1),\n\t\t\t\tName:        pulumi.String(\"my-databricks-app\"),\n\t\t\t\tDescription: pulumi.String(\"This app requires the SQL API scope.\"),\n\t\t\t\tUserApiScopes: []string{\n\t\t\t\t\t\"sql\",\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AppsSettingsCustomTemplate;\nimport com.pulumi.databricks.AppsSettingsCustomTemplateArgs;\nimport 
com.pulumi.databricks.inputs.AppsSettingsCustomTemplateManifestArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var apiScopesExample = new AppsSettingsCustomTemplate(\"apiScopesExample\", AppsSettingsCustomTemplateArgs.builder()\n            .name(\"my-api-template\")\n            .description(\"A template that requests user API scopes\")\n            .gitRepo(\"https://github.com/example/my-app.git\")\n            .path(\"templates/app\")\n            .gitProvider(\"github\")\n            .manifest(AppsSettingsCustomTemplateManifestArgs.builder()\n                .version(1)\n                .name(\"my-databricks-app\")\n                .description(\"This app requires the SQL API scope.\")\n                .userApiScopes(List.of(\"sql\"))\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  apiScopesExample:\n    type: databricks:AppsSettingsCustomTemplate\n    name: api_scopes_example\n    properties:\n      name: my-api-template\n      description: A template that requests user API scopes\n      gitRepo: https://github.com/example/my-app.git\n      path: templates/app\n      gitProvider: github\n      manifest:\n        version: 1\n        name: my-databricks-app\n        description: This app requires the SQL API scope.\n        userApiScopes:\n          - sql\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example with Resource Requirements\n\nThis example defines a template that requests specific workspace resources with permissions granted.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst resourcesExample = new databricks.AppsSettingsCustomTemplate(\"resources_example\", {\n    name: \"my-resource-template\",\n    description: \"Template that requires secret and SQL warehouse access\",\n    gitRepo: \"https://github.com/example/resource-app.git\",\n    path: \"resource-template\",\n    gitProvider: \"github\",\n    manifest: {\n        version: 1,\n        name: \"resource-consuming-app\",\n        description: \"This app requires access to a secret and SQL warehouse.\",\n        resourceSpecs: [\n            {\n                name: \"my-secret\",\n                description: \"A secret needed by the app\",\n                secretSpec: {\n                    permission: \"READ\",\n                },\n            },\n            {\n                name: \"warehouse\",\n                description: \"Warehouse access\",\n                sqlWarehouseSpec: {\n                    permission: \"CAN_USE\",\n                },\n            },\n        ],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nresources_example = databricks.AppsSettingsCustomTemplate(\"resources_example\",\n    name=\"my-resource-template\",\n    description=\"Template that requires secret and SQL warehouse access\",\n    git_repo=\"https://github.com/example/resource-app.git\",\n    path=\"resource-template\",\n    git_provider=\"github\",\n    manifest={\n        \"version\": 1,\n        \"name\": \"resource-consuming-app\",\n        \"description\": \"This app requires access to a secret and SQL warehouse.\",\n        \"resource_specs\": [\n 
           {\n                \"name\": \"my-secret\",\n                \"description\": \"A secret needed by the app\",\n                \"secret_spec\": {\n                    \"permission\": \"READ\",\n                },\n            },\n            {\n                \"name\": \"warehouse\",\n                \"description\": \"Warehouse access\",\n                \"sql_warehouse_spec\": {\n                    \"permission\": \"CAN_USE\",\n                },\n            },\n        ],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var resourcesExample = new Databricks.AppsSettingsCustomTemplate(\"resources_example\", new()\n    {\n        Name = \"my-resource-template\",\n        Description = \"Template that requires secret and SQL warehouse access\",\n        GitRepo = \"https://github.com/example/resource-app.git\",\n        Path = \"resource-template\",\n        GitProvider = \"github\",\n        Manifest = new Databricks.Inputs.AppsSettingsCustomTemplateManifestArgs\n        {\n            Version = 1,\n            Name = \"resource-consuming-app\",\n            Description = \"This app requires access to a secret and SQL warehouse.\",\n            ResourceSpecs = new[]\n            {\n                new Databricks.Inputs.AppsSettingsCustomTemplateManifestResourceSpecArgs\n                {\n                    Name = \"my-secret\",\n                    Description = \"A secret needed by the app\",\n                    SecretSpec = new Databricks.Inputs.AppsSettingsCustomTemplateManifestResourceSpecSecretSpecArgs\n                    {\n                        Permission = \"READ\",\n                    },\n                },\n                new Databricks.Inputs.AppsSettingsCustomTemplateManifestResourceSpecArgs\n                {\n                    Name = \"warehouse\",\n                    Description = \"Warehouse access\",\n                    SqlWarehouseSpec = new Databricks.Inputs.AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpecArgs\n                    {\n                        Permission = \"CAN_USE\",\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewAppsSettingsCustomTemplate(ctx, \"resources_example\", \u0026databricks.AppsSettingsCustomTemplateArgs{\n\t\t\tName:        pulumi.String(\"my-resource-template\"),\n\t\t\tDescription: pulumi.String(\"Template that requires secret and SQL warehouse access\"),\n\t\t\tGitRepo:     pulumi.String(\"https://github.com/example/resource-app.git\"),\n\t\t\tPath:        pulumi.String(\"resource-template\"),\n\t\t\tGitProvider: pulumi.String(\"github\"),\n\t\t\tManifest: \u0026databricks.AppsSettingsCustomTemplateManifestArgs{\n\t\t\t\tVersion:     pulumi.Int(1),\n\t\t\t\tName:        pulumi.String(\"resource-consuming-app\"),\n\t\t\t\tDescription: pulumi.String(\"This app requires access to a secret and SQL warehouse.\"),\n\t\t\t\tResourceSpecs: databricks.AppsSettingsCustomTemplateManifestResourceSpecArray{\n\t\t\t\t\t\u0026databricks.AppsSettingsCustomTemplateManifestResourceSpecArgs{\n\t\t\t\t\t\tName:        pulumi.String(\"my-secret\"),\n\t\t\t\t\t\tDescription: pulumi.String(\"A secret 
needed by the app\"),\n\t\t\t\t\t\tSecretSpec: \u0026databricks.AppsSettingsCustomTemplateManifestResourceSpecSecretSpecArgs{\n\t\t\t\t\t\t\tPermission: pulumi.String(\"READ\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\t\u0026databricks.AppsSettingsCustomTemplateManifestResourceSpecArgs{\n\t\t\t\t\t\tName:        pulumi.String(\"warehouse\"),\n\t\t\t\t\t\tDescription: pulumi.String(\"Warehouse access\"),\n\t\t\t\t\t\tSqlWarehouseSpec: \u0026databricks.AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpecArgs{\n\t\t\t\t\t\t\tPermission: pulumi.String(\"CAN_USE\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.AppsSettingsCustomTemplate;\nimport com.pulumi.databricks.AppsSettingsCustomTemplateArgs;\nimport com.pulumi.databricks.inputs.AppsSettingsCustomTemplateManifestArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var resourcesExample = new AppsSettingsCustomTemplate(\"resourcesExample\", AppsSettingsCustomTemplateArgs.builder()\n            .name(\"my-resource-template\")\n            .description(\"Template that requires secret and SQL warehouse access\")\n            .gitRepo(\"https://github.com/example/resource-app.git\")\n            .path(\"resource-template\")\n            .gitProvider(\"github\")\n            .manifest(AppsSettingsCustomTemplateManifestArgs.builder()\n                .version(1)\n                .name(\"resource-consuming-app\")\n                .description(\"This app requires access to a secret and SQL warehouse.\")\n                .resourceSpecs(                \n                    AppsSettingsCustomTemplateManifestResourceSpecArgs.builder()\n                        .name(\"my-secret\")\n                        .description(\"A secret needed by the app\")\n                        .secretSpec(AppsSettingsCustomTemplateManifestResourceSpecSecretSpecArgs.builder()\n                            .permission(\"READ\")\n                            .build())\n                        .build(),\n                    AppsSettingsCustomTemplateManifestResourceSpecArgs.builder()\n                        .name(\"warehouse\")\n                        .description(\"Warehouse access\")\n                        .sqlWarehouseSpec(AppsSettingsCustomTemplateManifestResourceSpecSqlWarehouseSpecArgs.builder()\n                            .permission(\"CAN_USE\")\n                            .build())\n                        .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  resourcesExample:\n    type: databricks:AppsSettingsCustomTemplate\n    name: resources_example\n    properties:\n      name: my-resource-template\n      description: Template that requires secret and SQL warehouse access\n      gitRepo: https://github.com/example/resource-app.git\n      path: resource-template\n      gitProvider: github\n      manifest:\n        version: 1\n        name: resource-consuming-app\n        description: This app requires access to a secret and SQL warehouse.\n        resourceSpecs:\n          - name: my-secret\n 
           description: A secret needed by the app\n            secretSpec:\n              permission: READ\n          - name: warehouse\n            description: Warehouse access\n            sqlWarehouseSpec:\n              permission: CAN_USE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"creator":{"type":"string","description":"(string)\n"},"description":{"type":"string","description":"The description of the template\n"},"gitProvider":{"type":"string","description":"The Git provider of the template\n"},"gitRepo":{"type":"string","description":"The Git repository URL that the template resides in\n"},"manifest":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifest:AppsSettingsCustomTemplateManifest","description":"The manifest of the template. It defines fields and default values when installing the template\n"},"name":{"type":"string","description":"The name of the template. It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"path":{"type":"string","description":"The path to the template within the Git repository\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateProviderConfig:AppsSettingsCustomTemplateProviderConfig","description":"Configure the provider for management through account provider.\n"}},"required":["creator","gitProvider","gitRepo","manifest","name","path"],"inputProperties":{"description":{"type":"string","description":"The description of the template\n"},"gitProvider":{"type":"string","description":"The Git provider of the template\n"},"gitRepo":{"type":"string","description":"The Git repository URL that the template resides in\n"},"manifest":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifest:AppsSettingsCustomTemplateManifest","description":"The manifest of the template. It defines fields and default values when installing the template\n"},"name":{"type":"string","description":"The name of the template. It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"path":{"type":"string","description":"The path to the template within the Git repository\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateProviderConfig:AppsSettingsCustomTemplateProviderConfig","description":"Configure the provider for management through account provider.\n"}},"requiredInputs":["gitProvider","gitRepo","manifest","path"],"stateInputs":{"description":"Input properties used for looking up and filtering AppsSettingsCustomTemplate resources.\n","properties":{"creator":{"type":"string","description":"(string)\n"},"description":{"type":"string","description":"The description of the template\n"},"gitProvider":{"type":"string","description":"The Git provider of the template\n"},"gitRepo":{"type":"string","description":"The Git repository URL that the template resides in\n"},"manifest":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateManifest:AppsSettingsCustomTemplateManifest","description":"The manifest of the template. It defines fields and default values when installing the template\n"},"name":{"type":"string","description":"The name of the template. 
It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"path":{"type":"string","description":"The path to the template within the Git repository\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSettingsCustomTemplateProviderConfig:AppsSettingsCustomTemplateProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"}},"databricks:index/appsSpace:AppsSpace":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"createTime":{"type":"string","description":"(string) - The creation time of the app space. Formatted timestamp in ISO 6801\n"},"creator":{"type":"string","description":"(string) - The email of the user that created the app space\n"},"description":{"type":"string","description":"The description of the app space\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The effective usage policy ID used by apps in the space\n"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"(list of string) - The effective api scopes granted to the user access token\n"},"name":{"type":"string","description":"(string) - The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"oauth2AppClientId":{"type":"string","description":"(string) - The OAuth2 app client ID for the app space\n"},"oauth2AppIntegrationId":{"type":"string","description":"(string) - The OAuth2 app integration ID for the app space\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSpaceProviderConfig:AppsSpaceProviderConfig","description":"Configure the provider for management through account provider.\n"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppsSpaceResource:AppsSpaceResource"},"description":"Resources for the app space. Resources configured at the space level are available to all apps in the space\n"},"servicePrincipalClientId":{"type":"string","description":"(string) - The service principal client ID for the app space\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID for the app space\n"},"servicePrincipalName":{"type":"string","description":"(string) - The service principal name for the app space\n"},"status":{"$ref":"#/types/databricks:index/AppsSpaceStatus:AppsSpaceStatus","description":"(SpaceStatus) - The status of the app space\n"},"updateTime":{"type":"string","description":"(string) - The update time of the app space. 
Formatted timestamp in ISO 8601\n"},"updater":{"type":"string","description":"(string) - The email of the user that last updated the app space\n"},"usagePolicyId":{"type":"string","description":"The usage policy ID for managing cost at the space level\n"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes for apps in the space\n"}},"required":["createTime","creator","effectiveUsagePolicyId","effectiveUserApiScopes","name","oauth2AppClientId","oauth2AppIntegrationId","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","status","updateTime","updater"],"inputProperties":{"description":{"type":"string","description":"The description of the app space\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSpaceProviderConfig:AppsSpaceProviderConfig","description":"Configure the provider for management through account provider.\n"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppsSpaceResource:AppsSpaceResource"},"description":"Resources for the app space. Resources configured at the space level are available to all apps in the space\n"},"usagePolicyId":{"type":"string","description":"The usage policy ID for managing cost at the space level\n"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes for apps in the space\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering AppsSpace resources.\n","properties":{"createTime":{"type":"string","description":"(string) - The creation time of the app space. Formatted timestamp in ISO 8601\n"},"creator":{"type":"string","description":"(string) - The email of the user that created the app space\n"},"description":{"type":"string","description":"The description of the app space\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The effective usage policy ID used by apps in the space\n"},"effectiveUserApiScopes":{"type":"array","items":{"type":"string"},"description":"(list of string) - The effective api scopes granted to the user access token\n"},"name":{"type":"string","description":"(string) - The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"oauth2AppClientId":{"type":"string","description":"(string) - The OAuth2 app client ID for the app space\n"},"oauth2AppIntegrationId":{"type":"string","description":"(string) - The OAuth2 app integration ID for the app space\n"},"providerConfig":{"$ref":"#/types/databricks:index/AppsSpaceProviderConfig:AppsSpaceProviderConfig","description":"Configure the provider for management through account provider.\n"},"resources":{"type":"array","items":{"$ref":"#/types/databricks:index/AppsSpaceResource:AppsSpaceResource"},"description":"Resources for the app space. Resources configured at the space level are available to all apps in the space\n"},"servicePrincipalClientId":{"type":"string","description":"(string) - The service principal client ID for the app space\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID for the app space\n"},"servicePrincipalName":{"type":"string","description":"(string) - The service principal name for the app space\n"},"status":{"$ref":"#/types/databricks:index/AppsSpaceStatus:AppsSpaceStatus","description":"(SpaceStatus) - The status of the app space\n"},"updateTime":{"type":"string","description":"(string) - The update time of the app space. 
Formatted timestamp in ISO 8601\n"},"updater":{"type":"string","description":"(string) - The email of the user that last updated the app space\n"},"usagePolicyId":{"type":"string","description":"The usage policy ID for managing cost at the space level\n"},"userApiScopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes for apps in the space\n"}},"type":"object"}},"databricks:index/artifactAllowlist:ArtifactAllowlist":{"description":"In Databricks Runtime 13.3 and above, you can add libraries and init scripts to the allowlist in UC so that users can leverage these artifacts on compute configured with shared access mode.\n\n\u003e It is required to define the entire allowlist for an artifact type in a single resource, otherwise Pulumi cannot guarantee config drift prevention.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst initScripts = new databricks.ArtifactAllowlist(\"init_scripts\", {\n    artifactType: \"INIT_SCRIPT\",\n    artifactMatchers: [{\n        artifact: \"/Volumes/inits\",\n        matchType: \"PREFIX_MATCH\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninit_scripts = databricks.ArtifactAllowlist(\"init_scripts\",\n    artifact_type=\"INIT_SCRIPT\",\n    artifact_matchers=[{\n        \"artifact\": \"/Volumes/inits\",\n        \"match_type\": \"PREFIX_MATCH\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var initScripts = new Databricks.ArtifactAllowlist(\"init_scripts\", new()\n    {\n        ArtifactType = \"INIT_SCRIPT\",\n        ArtifactMatchers = new[]\n        {\n            new Databricks.Inputs.ArtifactAllowlistArtifactMatcherArgs\n            {\n                Artifact = \"/Volumes/inits\",\n                MatchType = \"PREFIX_MATCH\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewArtifactAllowlist(ctx, \"init_scripts\", \u0026databricks.ArtifactAllowlistArgs{\n\t\t\tArtifactType: pulumi.String(\"INIT_SCRIPT\"),\n\t\t\tArtifactMatchers: databricks.ArtifactAllowlistArtifactMatcherArray{\n\t\t\t\t\u0026databricks.ArtifactAllowlistArtifactMatcherArgs{\n\t\t\t\t\tArtifact:  pulumi.String(\"/Volumes/inits\"),\n\t\t\t\t\tMatchType: pulumi.String(\"PREFIX_MATCH\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ArtifactAllowlist;\nimport com.pulumi.databricks.ArtifactAllowlistArgs;\nimport com.pulumi.databricks.inputs.ArtifactAllowlistArtifactMatcherArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var initScripts = new 
ArtifactAllowlist(\"initScripts\", ArtifactAllowlistArgs.builder()\n            .artifactType(\"INIT_SCRIPT\")\n            .artifactMatchers(ArtifactAllowlistArtifactMatcherArgs.builder()\n                .artifact(\"/Volumes/inits\")\n                .matchType(\"PREFIX_MATCH\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  initScripts:\n    type: databricks:ArtifactAllowlist\n    name: init_scripts\n    properties:\n      artifactType: INIT_SCRIPT\n      artifactMatchers:\n        - artifact: /Volumes/inits\n          matchType: PREFIX_MATCH\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n\n","properties":{"artifactMatchers":{"type":"array","items":{"$ref":"#/types/databricks:index/ArtifactAllowlistArtifactMatcher:ArtifactAllowlistArtifactMatcher"}},"artifactType":{"type":"string","description":"The artifact type of the allowlist. Can be `INIT_SCRIPT`, `LIBRARY_JAR` or `LIBRARY_MAVEN`. Change forces creation of a new resource.\n"},"createdAt":{"type":"integer","description":"Time at which this artifact allowlist was set.\n"},"createdBy":{"type":"string","description":"Identity that set the artifact allowlist.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ArtifactAllowlistProviderConfig:ArtifactAllowlistProviderConfig"}},"required":["artifactMatchers","artifactType","createdAt","createdBy","metastoreId"],"inputProperties":{"artifactMatchers":{"type":"array","items":{"$ref":"#/types/databricks:index/ArtifactAllowlistArtifactMatcher:ArtifactAllowlistArtifactMatcher"}},"artifactType":{"type":"string","description":"The artifact type of the allowlist. Can be `INIT_SCRIPT`, `LIBRARY_JAR` or `LIBRARY_MAVEN`. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"createdAt":{"type":"integer","description":"Time at which this artifact allowlist was set.\n"},"createdBy":{"type":"string","description":"Identity that set the artifact allowlist.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ArtifactAllowlistProviderConfig:ArtifactAllowlistProviderConfig"}},"requiredInputs":["artifactMatchers","artifactType"],"stateInputs":{"description":"Input properties used for looking up and filtering ArtifactAllowlist resources.\n","properties":{"artifactMatchers":{"type":"array","items":{"$ref":"#/types/databricks:index/ArtifactAllowlistArtifactMatcher:ArtifactAllowlistArtifactMatcher"}},"artifactType":{"type":"string","description":"The artifact type of the allowlist. Can be `INIT_SCRIPT`, `LIBRARY_JAR` or `LIBRARY_MAVEN`. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"createdAt":{"type":"integer","description":"Time at which this artifact allowlist was set.\n"},"createdBy":{"type":"string","description":"Identity that set the artifact allowlist.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ArtifactAllowlistProviderConfig:ArtifactAllowlistProviderConfig"}},"type":"object"}},"databricks:index/automaticClusterUpdateWorkspaceSetting:AutomaticClusterUpdateWorkspaceSetting":{"properties":{"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingProviderConfig:AutomaticClusterUpdateWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"required":["automaticClusterUpdateWorkspace","etag","settingName"],"inputProperties":{"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingProviderConfig:AutomaticClusterUpdateWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"requiredInputs":["automaticClusterUpdateWorkspace"],"stateInputs":{"description":"Input properties used for looking up and filtering AutomaticClusterUpdateWorkspaceSetting resources.\n","properties":{"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace:AutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/AutomaticClusterUpdateWorkspaceSettingProviderConfig:AutomaticClusterUpdateWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/budget:Budget":{"description":"This resource allows you to manage [Databricks Budgets](https://docs.databricks.com/en/admin/account-settings/budgets.html).\n\n\u003e This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).\n\n\u003e This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Budget(\"this\", {\n    displayName: \"databricks-workspace-budget\",\n    alertConfigurations: [{\n        timePeriod: \"MONTH\",\n        triggerType: \"CUMULATIVE_SPENDING_EXCEEDED\",\n        quantityType: \"LIST_PRICE_DOLLARS_USD\",\n        quantityThreshold: \"840\",\n        actionConfigurations: [{\n            actionType: \"EMAIL_NOTIFICATION\",\n            target: \"abc@gmail.com\",\n        }],\n    }],\n    filter: {\n        workspaceId: {\n            operator: \"IN\",\n            values: [1234567890098765],\n        },\n        tags: [\n            {\n                key: \"Team\",\n                value: {\n                    operator: \"IN\",\n                    values: [\"Data Science\"],\n                },\n            },\n            {\n                key: \"Environment\",\n    
            value: {\n                    operator: \"IN\",\n                    values: [\"Development\"],\n                },\n            },\n        ],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Budget(\"this\",\n    display_name=\"databricks-workspace-budget\",\n    alert_configurations=[{\n        \"time_period\": \"MONTH\",\n        \"trigger_type\": \"CUMULATIVE_SPENDING_EXCEEDED\",\n        \"quantity_type\": \"LIST_PRICE_DOLLARS_USD\",\n        \"quantity_threshold\": \"840\",\n        \"action_configurations\": [{\n            \"action_type\": \"EMAIL_NOTIFICATION\",\n            \"target\": \"abc@gmail.com\",\n        }],\n    }],\n    filter={\n        \"workspace_id\": {\n            \"operator\": \"IN\",\n            \"values\": [1234567890098765],\n        },\n        \"tags\": [\n            {\n                \"key\": \"Team\",\n                \"value\": {\n                    \"operator\": \"IN\",\n                    \"values\": [\"Data Science\"],\n                },\n            },\n            {\n                \"key\": \"Environment\",\n                \"value\": {\n                    \"operator\": \"IN\",\n                    \"values\": [\"Development\"],\n                },\n            },\n        ],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Budget(\"this\", new()\n    {\n        DisplayName = \"databricks-workspace-budget\",\n        AlertConfigurations = new[]\n        {\n            new Databricks.Inputs.BudgetAlertConfigurationArgs\n            {\n                TimePeriod = \"MONTH\",\n                TriggerType = \"CUMULATIVE_SPENDING_EXCEEDED\",\n                QuantityType = \"LIST_PRICE_DOLLARS_USD\",\n                QuantityThreshold = \"840\",\n                ActionConfigurations = new[]\n                {\n                    new Databricks.Inputs.BudgetAlertConfigurationActionConfigurationArgs\n                    {\n                        ActionType = \"EMAIL_NOTIFICATION\",\n                        Target = \"abc@gmail.com\",\n                    },\n                },\n            },\n        },\n        Filter = new Databricks.Inputs.BudgetFilterArgs\n        {\n            WorkspaceId = new Databricks.Inputs.BudgetFilterWorkspaceIdArgs\n            {\n                Operator = \"IN\",\n                Values = new[]\n                {\n                    1234567890098765,\n                },\n            },\n            Tags = new[]\n            {\n                new Databricks.Inputs.BudgetFilterTagArgs\n                {\n                    Key = \"Team\",\n                    Value = new Databricks.Inputs.BudgetFilterTagValueArgs\n                    {\n                        Operator = \"IN\",\n                        Values = new[]\n                        {\n                            \"Data Science\",\n                        },\n                    },\n                },\n                new Databricks.Inputs.BudgetFilterTagArgs\n                {\n                    Key = \"Environment\",\n                    Value = new Databricks.Inputs.BudgetFilterTagValueArgs\n                    {\n                        Operator = \"IN\",\n                        Values = new[]\n                        {\n                            \"Development\",\n                        },\n                    
},\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewBudget(ctx, \"this\", \u0026databricks.BudgetArgs{\n\t\t\tDisplayName: pulumi.String(\"databricks-workspace-budget\"),\n\t\t\tAlertConfigurations: databricks.BudgetAlertConfigurationArray{\n\t\t\t\t\u0026databricks.BudgetAlertConfigurationArgs{\n\t\t\t\t\tTimePeriod:        pulumi.String(\"MONTH\"),\n\t\t\t\t\tTriggerType:       pulumi.String(\"CUMULATIVE_SPENDING_EXCEEDED\"),\n\t\t\t\t\tQuantityType:      pulumi.String(\"LIST_PRICE_DOLLARS_USD\"),\n\t\t\t\t\tQuantityThreshold: pulumi.String(\"840\"),\n\t\t\t\t\tActionConfigurations: databricks.BudgetAlertConfigurationActionConfigurationArray{\n\t\t\t\t\t\t\u0026databricks.BudgetAlertConfigurationActionConfigurationArgs{\n\t\t\t\t\t\t\tActionType: pulumi.String(\"EMAIL_NOTIFICATION\"),\n\t\t\t\t\t\t\tTarget:     pulumi.String(\"abc@gmail.com\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tFilter: \u0026databricks.BudgetFilterArgs{\n\t\t\t\tWorkspaceId: \u0026databricks.BudgetFilterWorkspaceIdArgs{\n\t\t\t\t\tOperator: pulumi.String(\"IN\"),\n\t\t\t\t\tValues: pulumi.IntArray{\n\t\t\t\t\t\tpulumi.Int(1234567890098765),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tTags: databricks.BudgetFilterTagArray{\n\t\t\t\t\t\u0026databricks.BudgetFilterTagArgs{\n\t\t\t\t\t\tKey: pulumi.String(\"Team\"),\n\t\t\t\t\t\tValue: \u0026databricks.BudgetFilterTagValueArgs{\n\t\t\t\t\t\t\tOperator: pulumi.String(\"IN\"),\n\t\t\t\t\t\t\tValues: pulumi.StringArray{\n\t\t\t\t\t\t\t\tpulumi.String(\"Data Science\"),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\t\u0026databricks.BudgetFilterTagArgs{\n\t\t\t\t\t\tKey: pulumi.String(\"Environment\"),\n\t\t\t\t\t\tValue: \u0026databricks.BudgetFilterTagValueArgs{\n\t\t\t\t\t\t\tOperator: pulumi.String(\"IN\"),\n\t\t\t\t\t\t\tValues: pulumi.StringArray{\n\t\t\t\t\t\t\t\tpulumi.String(\"Development\"),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Budget;\nimport com.pulumi.databricks.BudgetArgs;\nimport com.pulumi.databricks.inputs.BudgetAlertConfigurationArgs;\nimport com.pulumi.databricks.inputs.BudgetFilterArgs;\nimport com.pulumi.databricks.inputs.BudgetFilterWorkspaceIdArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Budget(\"this\", BudgetArgs.builder()\n            .displayName(\"databricks-workspace-budget\")\n            .alertConfigurations(BudgetAlertConfigurationArgs.builder()\n                .timePeriod(\"MONTH\")\n                .triggerType(\"CUMULATIVE_SPENDING_EXCEEDED\")\n                .quantityType(\"LIST_PRICE_DOLLARS_USD\")\n                .quantityThreshold(\"840\")\n                .actionConfigurations(BudgetAlertConfigurationActionConfigurationArgs.builder()\n                    .actionType(\"EMAIL_NOTIFICATION\")\n      
              .target(\"abc@gmail.com\")\n                    .build())\n                .build())\n            .filter(BudgetFilterArgs.builder()\n                .workspaceId(BudgetFilterWorkspaceIdArgs.builder()\n                    .operator(\"IN\")\n                    .values(1234567890098765)\n                    .build())\n                .tags(                \n                    BudgetFilterTagArgs.builder()\n                        .key(\"Team\")\n                        .value(BudgetFilterTagValueArgs.builder()\n                            .operator(\"IN\")\n                            .values(\"Data Science\")\n                            .build())\n                        .build(),\n                    BudgetFilterTagArgs.builder()\n                        .key(\"Environment\")\n                        .value(BudgetFilterTagValueArgs.builder()\n                            .operator(\"IN\")\n                            .values(\"Development\")\n                            .build())\n                        .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Budget\n    properties:\n      displayName: databricks-workspace-budget\n      alertConfigurations:\n        - timePeriod: MONTH\n          triggerType: CUMULATIVE_SPENDING_EXCEEDED\n          quantityType: LIST_PRICE_DOLLARS_USD\n          quantityThreshold: '840'\n          actionConfigurations:\n            - actionType: EMAIL_NOTIFICATION\n              target: abc@gmail.com\n      filter:\n        workspaceId:\n          operator: IN\n          values:\n            - 1.234567890098765e+15\n        tags:\n          - key: Team\n            value:\n              operator: IN\n              values:\n                - Data Science\n          - key: Environment\n            value:\n              operator: IN\n              values:\n                - Development\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up Databricks workspaces.\n\n","properties":{"accountId":{"type":"string","description":"The ID of the Databricks Account.\n"},"alertConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetAlertConfiguration:BudgetAlertConfiguration"}},"budgetConfigurationId":{"type":"string","description":"The ID of the budget configuration.\n"},"createTime":{"type":"integer"},"displayName":{"type":"string","description":"Name of the budget in Databricks Account.\n"},"filter":{"$ref":"#/types/databricks:index/BudgetFilter:BudgetFilter"},"updateTime":{"type":"integer"}},"required":["accountId","budgetConfigurationId","createTime","updateTime"],"inputProperties":{"accountId":{"type":"string","description":"The ID of the Databricks Account.\n"},"alertConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetAlertConfiguration:BudgetAlertConfiguration"}},"budgetConfigurationId":{"type":"string","description":"The ID of the budget configuration.\n"},"createTime":{"type":"integer"},"displayName":{"type":"string","description":"Name of the budget in Databricks 
Account.\n"},"filter":{"$ref":"#/types/databricks:index/BudgetFilter:BudgetFilter"},"updateTime":{"type":"integer"}},"stateInputs":{"description":"Input properties used for looking up and filtering Budget resources.\n","properties":{"accountId":{"type":"string","description":"The ID of the Databricks Account.\n"},"alertConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetAlertConfiguration:BudgetAlertConfiguration"}},"budgetConfigurationId":{"type":"string","description":"The ID of the budget configuration.\n"},"createTime":{"type":"integer"},"displayName":{"type":"string","description":"Name of the budget in Databricks Account.\n"},"filter":{"$ref":"#/types/databricks:index/BudgetFilter:BudgetFilter"},"updateTime":{"type":"integer"}},"type":"object"}},"databricks:index/budgetPolicy:BudgetPolicy":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nAdministrators can use budget policies to ensure that the correct tags appear automatically on serverless resources without depending on users to attach tags manually, allowing for customized cost reporting and chargebacks.\n\nBudget policies consist of tags that are applied to any serverless compute activity incurred by a user assigned to the policy.\n\nThe tags are logged in your billing records, allowing you to attribute serverless usage to specific budgets.\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.BudgetPolicy(\"this\", {\n    policyName: \"my-budget-policy\",\n    customTags: [{\n        key: \"mykey\",\n        value: \"myvalue\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.BudgetPolicy(\"this\",\n    policy_name=\"my-budget-policy\",\n    custom_tags=[{\n        \"key\": \"mykey\",\n        \"value\": \"myvalue\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.BudgetPolicy(\"this\", new()\n    {\n        PolicyName = \"my-budget-policy\",\n        CustomTags = new[]\n        {\n            new Databricks.Inputs.BudgetPolicyCustomTagArgs\n            {\n                Key = \"mykey\",\n                Value = \"myvalue\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewBudgetPolicy(ctx, \"this\", \u0026databricks.BudgetPolicyArgs{\n\t\t\tPolicyName: pulumi.String(\"my-budget-policy\"),\n\t\t\tCustomTags: databricks.BudgetPolicyCustomTagArray{\n\t\t\t\t\u0026databricks.BudgetPolicyCustomTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"mykey\"),\n\t\t\t\t\tValue: pulumi.String(\"myvalue\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.BudgetPolicy;\nimport 
com.pulumi.databricks.BudgetPolicyArgs;\nimport com.pulumi.databricks.inputs.BudgetPolicyCustomTagArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new BudgetPolicy(\"this\", BudgetPolicyArgs.builder()\n            .policyName(\"my-budget-policy\")\n            .customTags(BudgetPolicyCustomTagArgs.builder()\n                .key(\"mykey\")\n                .value(\"myvalue\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:BudgetPolicy\n    properties:\n      policyName: my-budget-policy\n      customTags:\n        - key: mykey\n          value: myvalue\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"bindingWorkspaceIds":{"type":"array","items":{"type":"integer"},"description":"List of workspaces that this budget policy will be exclusively bound to.\nAn empty binding implies that this budget policy is open to any workspace in the account\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetPolicyCustomTag:BudgetPolicyCustomTag"},"description":"A list of tags defined by the customer. At most 20 entries are allowed per policy\n"},"policyId":{"type":"string","description":"(string) - The Id of the policy. This field is generated by Databricks and globally unique\n"},"policyName":{"type":"string","description":"The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n"}},"required":["policyId"],"inputProperties":{"bindingWorkspaceIds":{"type":"array","items":{"type":"integer"},"description":"List of workspaces that this budget policy will be exclusively bound to.\nAn empty binding implies that this budget policy is open to any workspace in the account\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetPolicyCustomTag:BudgetPolicyCustomTag"},"description":"A list of tags defined by the customer. At most 20 entries are allowed per policy\n"},"policyId":{"type":"string","description":"(string) - The Id of the policy. This field is generated by Databricks and globally unique\n"},"policyName":{"type":"string","description":"The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering BudgetPolicy resources.\n","properties":{"bindingWorkspaceIds":{"type":"array","items":{"type":"integer"},"description":"List of workspaces that this budget policy will be exclusively bound to.\nAn empty binding implies that this budget policy is open to any workspace in the account\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/BudgetPolicyCustomTag:BudgetPolicyCustomTag"},"description":"A list of tags defined by the customer. At most 20 entries are allowed per policy\n"},"policyId":{"type":"string","description":"(string) - The Id of the policy. 
This field is generated by Databricks and globally unique\n"},"policyName":{"type":"string","description":"The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n"}},"type":"object"}},"databricks:index/catalog:Catalog":{"description":"Within a metastore, Unity Catalog provides a 3-level namespace for organizing data: Catalogs, Databases (also called Schemas), and Tables / Views.\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.Catalog`\" pulumi-lang-dotnet=\"`databricks.Catalog`\" pulumi-lang-go=\"`Catalog`\" pulumi-lang-python=\"`Catalog`\" pulumi-lang-yaml=\"`databricks.Catalog`\" pulumi-lang-java=\"`databricks.Catalog`\"\u003e`databricks.Catalog`\u003c/span\u003e is contained within\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eand can contain databricks_schema. By default, Databricks creates \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e schema for every new catalog, but Pulumi plugin is removing this auto-created schema, so that resource destruction could be done in a clean way.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport 
java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003edata to list tables within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getSchemas \" pulumi-lang-dotnet=\" databricks.getSchemas \" pulumi-lang-go=\" getSchemas \" pulumi-lang-python=\" get_schemas \" pulumi-lang-yaml=\" databricks.getSchemas \" pulumi-lang-java=\" databricks.getSchemas \"\u003e databricks.getSchemas \u003c/span\u003edata to list schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCatalogs \" pulumi-lang-dotnet=\" databricks.getCatalogs \" pulumi-lang-go=\" getCatalogs \" pulumi-lang-python=\" get_catalogs \" pulumi-lang-yaml=\" databricks.getCatalogs \" pulumi-lang-java=\" databricks.getCatalogs \"\u003e databricks.getCatalogs \u003c/span\u003edata to list catalogs within Unity Catalog.\n\n","properties":{"browseOnly":{"type":"boolean"},"catalogType":{"type":"string","description":"the type of the catalog.\n"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"connectionName":{"type":"string","description":"For Foreign Catalogs: the name of the connection to an external data source. Changes forces creation of a new resource.\n"},"createdAt":{"type":"integer","description":"time at which this catalog was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"username of catalog creator.\n"},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/CatalogEffectivePredictiveOptimizationFlag:CatalogEffectivePredictiveOptimizationFlag"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete catalog regardless of its contents.\n"},"fullName":{"type":"string"},"isolationMode":{"type":"string","description":"Whether the catalog is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATED` or `OPEN`. 
Setting the catalog to `ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"name":{"type":"string","description":"Name of Catalog relative to parent metastore.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"For Foreign Catalogs: the name of the entity from an external data source that maps to a catalog. For example, the database name in a PostgreSQL server.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the catalog owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Catalog properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/CatalogProviderConfig:CatalogProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"providerName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the delta sharing provider. Change forces creation of a new resource.\n"},"provisioningInfo":{"$ref":"#/types/databricks:index/CatalogProvisioningInfo:CatalogProvisioningInfo"},"securableType":{"type":"string","description":"the type of Unity Catalog securable.\n"},"shareName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the share under the share provider. Change forces creation of a new resource.\n"},"storageLocation":{"type":"string","description":"effective storage Location URL (full path) for managed tables within catalog.\n"},"storageRoot":{"type":"string","description":"Managed location of the catalog. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the metastore root location. Change forces creation of a new resource.\n"},"updatedAt":{"type":"integer","description":"time at which this catalog was last modified, in epoch milliseconds..\n"},"updatedBy":{"type":"string","description":"username of user who last modified catalog.\n"}},"required":["catalogType","createdAt","createdBy","effectivePredictiveOptimizationFlag","enablePredictiveOptimization","fullName","isolationMode","metastoreId","name","owner","provisioningInfo","securableType","storageLocation","updatedAt","updatedBy"],"inputProperties":{"browseOnly":{"type":"boolean"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"connectionName":{"type":"string","description":"For Foreign Catalogs: the name of the connection to an external data source. Changes forces creation of a new resource.\n","willReplaceOnChanges":true},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/CatalogEffectivePredictiveOptimizationFlag:CatalogEffectivePredictiveOptimizationFlag"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. 
Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete catalog regardless of its contents.\n"},"isolationMode":{"type":"string","description":"Whether the catalog is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATED` or `OPEN`. Setting the catalog to `ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"name":{"type":"string","description":"Name of Catalog relative to parent metastore.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"For Foreign Catalogs: the name of the entity from an external data source that maps to a catalog. For example, the database name in a PostgreSQL server.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the catalog owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Catalog properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/CatalogProviderConfig:CatalogProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"providerName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the delta sharing provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"provisioningInfo":{"$ref":"#/types/databricks:index/CatalogProvisioningInfo:CatalogProvisioningInfo"},"shareName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the share under the share provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"storageRoot":{"type":"string","description":"Managed location of the catalog. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the metastore root location. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering Catalog resources.\n","properties":{"browseOnly":{"type":"boolean"},"catalogType":{"type":"string","description":"the type of the catalog.\n"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"connectionName":{"type":"string","description":"For Foreign Catalogs: the name of the connection to an external data source. Changes forces creation of a new resource.\n","willReplaceOnChanges":true},"createdAt":{"type":"integer","description":"time at which this catalog was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"username of catalog creator.\n"},"effectivePredictiveOptimizationFlag":{"$ref":"#/types/databricks:index/CatalogEffectivePredictiveOptimizationFlag:CatalogEffectivePredictiveOptimizationFlag"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. 
Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete catalog regardless of its contents.\n"},"fullName":{"type":"string"},"isolationMode":{"type":"string","description":"Whether the catalog is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATED` or `OPEN`. Setting the catalog to `ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string","description":"ID of the parent metastore.\n"},"name":{"type":"string","description":"Name of Catalog relative to parent metastore.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"For Foreign Catalogs: the name of the entity from an external data source that maps to a catalog. For example, the database name in a PostgreSQL server.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the catalog owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Catalog properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/CatalogProviderConfig:CatalogProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"providerName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the delta sharing provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"provisioningInfo":{"$ref":"#/types/databricks:index/CatalogProvisioningInfo:CatalogProvisioningInfo"},"securableType":{"type":"string","description":"the type of Unity Catalog securable.\n"},"shareName":{"type":"string","description":"For Delta Sharing Catalogs: the name of the share under the share provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"effective storage Location URL (full path) for managed tables within catalog.\n"},"storageRoot":{"type":"string","description":"Managed location of the catalog. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the metastore root location. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"updatedAt":{"type":"integer","description":"time at which this catalog was last modified, in epoch milliseconds..\n"},"updatedBy":{"type":"string","description":"username of user who last modified catalog.\n"}},"type":"object"}},"databricks:index/catalogWorkspaceBinding:CatalogWorkspaceBinding":{"description":"\u003e This resource has been deprecated and will be removed soon. 
Please use the\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceBinding \" pulumi-lang-dotnet=\" databricks.WorkspaceBinding \" pulumi-lang-go=\" WorkspaceBinding \" pulumi-lang-python=\" WorkspaceBinding \" pulumi-lang-yaml=\" databricks.WorkspaceBinding \" pulumi-lang-java=\" databricks.WorkspaceBinding \"\u003e databricks.WorkspaceBinding \u003c/span\u003eresource instead.\n\nIf you use workspaces to isolate user data access, you may want to limit catalog access to specific workspaces in your account, also known as workspace-catalog binding\n\nBy default, Databricks assigns the catalog to all workspaces attached to the current metastore. By using \u003cspan pulumi-lang-nodejs=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-dotnet=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-go=\"`CatalogWorkspaceBinding`\" pulumi-lang-python=\"`CatalogWorkspaceBinding`\" pulumi-lang-yaml=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-java=\"`databricks.CatalogWorkspaceBinding`\"\u003e`databricks.CatalogWorkspaceBinding`\u003c/span\u003e, the catalog will be unassigned from all workspaces and only assigned explicitly using this resource.\n\n\u003e To use this resource the catalog must have its isolation mode set to `ISOLATED` in the \u003cspan pulumi-lang-nodejs=\"`databricks.Catalog`\" pulumi-lang-dotnet=\"`databricks.Catalog`\" pulumi-lang-go=\"`Catalog`\" pulumi-lang-python=\"`Catalog`\" pulumi-lang-yaml=\"`databricks.Catalog`\" pulumi-lang-java=\"`databricks.Catalog`\"\u003e`databricks.Catalog`\u003c/span\u003e resource. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration).\n\n\u003e If the catalog's isolation mode was set to `ISOLATED` using Pulumi then the catalog will have been automatically bound to the workspace it was created from.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    isolationMode: \"ISOLATED\",\n});\nconst sandboxCatalogWorkspaceBinding = new databricks.CatalogWorkspaceBinding(\"sandbox\", {\n    securableName: sandbox.name,\n    workspaceId: other.workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    isolation_mode=\"ISOLATED\")\nsandbox_catalog_workspace_binding = databricks.CatalogWorkspaceBinding(\"sandbox\",\n    securable_name=sandbox.name,\n    workspace_id=other[\"workspaceId\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        IsolationMode = \"ISOLATED\",\n    });\n\n    var sandboxCatalogWorkspaceBinding = new Databricks.CatalogWorkspaceBinding(\"sandbox\", new()\n    {\n        SecurableName = sandbox.Name,\n        WorkspaceId = other.WorkspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", 
\u0026databricks.CatalogArgs{\n\t\t\tName:          pulumi.String(\"sandbox\"),\n\t\t\tIsolationMode: pulumi.String(\"ISOLATED\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCatalogWorkspaceBinding(ctx, \"sandbox\", \u0026databricks.CatalogWorkspaceBindingArgs{\n\t\t\tSecurableName: sandbox.Name,\n\t\t\tWorkspaceId:   pulumi.Any(other.WorkspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.CatalogWorkspaceBinding;\nimport com.pulumi.databricks.CatalogWorkspaceBindingArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .isolationMode(\"ISOLATED\")\n            .build());\n\n        var sandboxCatalogWorkspaceBinding = new CatalogWorkspaceBinding(\"sandboxCatalogWorkspaceBinding\", CatalogWorkspaceBindingArgs.builder()\n            .securableName(sandbox.name())\n            .workspaceId(other.workspaceId())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      isolationMode: ISOLATED\n  sandboxCatalogWorkspaceBinding:\n    type: databricks:CatalogWorkspaceBinding\n    name: sandbox\n    properties:\n      securableName: ${sandbox.name}\n      workspaceId: ${other.workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`\n"},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead."},"providerConfig":{"$ref":"#/types/databricks:index/CatalogWorkspaceBindingProviderConfig:CatalogWorkspaceBindingProviderConfig"},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n"},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n"}},"required":["securableName","workspaceId"],"inputProperties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. 
Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`\n","willReplaceOnChanges":true},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead.","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/CatalogWorkspaceBindingProviderConfig:CatalogWorkspaceBindingProviderConfig","willReplaceOnChanges":true},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"requiredInputs":["workspaceId"],"stateInputs":{"description":"Input properties used for looking up and filtering CatalogWorkspaceBinding resources.\n","properties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`\n","willReplaceOnChanges":true},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead.","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/CatalogWorkspaceBindingProviderConfig:CatalogWorkspaceBindingProviderConfig","willReplaceOnChanges":true},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/cluster:Cluster":{"description":"This resource allows you to manage [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e In case of `Cannot access cluster ####-######-####### that was terminated or unpinned more than 30 days ago` errors, please upgrade to v0.5.5 or later. 
If for some reason you cannot upgrade the version of provider, then the other viable option to unblock the apply pipeline is `terraform state rm path.to.databricks_cluster.resource` command.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst latestLts = databricks.getSparkVersion({\n    longTermSupport: true,\n});\nconst sharedAutoscaling = new databricks.Cluster(\"shared_autoscaling\", {\n    clusterName: \"Shared Autoscaling\",\n    sparkVersion: latestLts.then(latestLts =\u003e latestLts.id),\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    autoterminationMinutes: 20,\n    autoscale: {\n        minWorkers: 1,\n        maxWorkers: 50,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsmallest = databricks.get_node_type(local_disk=True)\nlatest_lts = databricks.get_spark_version(long_term_support=True)\nshared_autoscaling = databricks.Cluster(\"shared_autoscaling\",\n    cluster_name=\"Shared Autoscaling\",\n    spark_version=latest_lts.id,\n    node_type_id=smallest.id,\n    autotermination_minutes=20,\n    autoscale={\n        \"min_workers\": 1,\n        \"max_workers\": 50,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var latestLts = Databricks.GetSparkVersion.Invoke(new()\n    {\n        LongTermSupport = true,\n    });\n\n    var sharedAutoscaling = new Databricks.Cluster(\"shared_autoscaling\", new()\n    {\n        ClusterName = \"Shared Autoscaling\",\n        SparkVersion = latestLts.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 20,\n        Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs\n        {\n            MinWorkers = 1,\n            MaxWorkers = 50,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatestLts, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{\n\t\t\tLongTermSupport: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"shared_autoscaling\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared Autoscaling\"),\n\t\t\tSparkVersion:           pulumi.String(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.String(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tAutoscale: \u0026databricks.ClusterAutoscaleArgs{\n\t\t\t\tMinWorkers: pulumi.Int(1),\n\t\t\t\tMaxWorkers: pulumi.Int(50),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterAutoscaleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        final var latestLts = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .longTermSupport(true)\n            .build());\n\n        var sharedAutoscaling = new Cluster(\"sharedAutoscaling\", ClusterArgs.builder()\n            .clusterName(\"Shared Autoscaling\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .autoscale(ClusterAutoscaleArgs.builder()\n                .minWorkers(1)\n                .maxWorkers(50)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedAutoscaling:\n    type: databricks:Cluster\n    name: shared_autoscaling\n    properties:\n      clusterName: Shared Autoscaling\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      autoscale:\n        minWorkers: 1\n        maxWorkers: 50\nvariables:\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n  latestLts:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments:\n        longTermSupport: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003ecan control which groups or individual users can create clusters.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003ecan control which kinds of clusters users can create.\n* Users, who have access to Cluster Policy, but do not have an \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set would still be able to create clusters, but within the boundary of the policy.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" 
pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Restart* or *Attach to* individual clusters.\n* \u003cspan pulumi-lang-nodejs=\"`instanceProfileArn`\" pulumi-lang-dotnet=\"`InstanceProfileArn`\" pulumi-lang-go=\"`instanceProfileArn`\" pulumi-lang-python=\"`instance_profile_arn`\" pulumi-lang-yaml=\"`instanceProfileArn`\" pulumi-lang-java=\"`instanceProfileArn`\"\u003e`instance_profile_arn`\u003c/span\u003e *(AWS only)* can control which data a given cluster can access through cloud-native controls.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* Dynamic Passthrough Clusters for a Group guide.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getClusters \" pulumi-lang-dotnet=\" databricks.getClusters \" pulumi-lang-go=\" getClusters \" pulumi-lang-python=\" get_clusters \" pulumi-lang-yaml=\" databricks.getClusters \" pulumi-lang-java=\" databricks.getClusters \"\u003e databricks.getClusters \u003c/span\u003edata to retrieve a list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GlobalInitScript \" pulumi-lang-dotnet=\" databricks.GlobalInitScript \" pulumi-lang-go=\" GlobalInitScript \" pulumi-lang-python=\" GlobalInitScript \" pulumi-lang-yaml=\" databricks.GlobalInitScript \" pulumi-lang-java=\" databricks.GlobalInitScript \"\u003e databricks.GlobalInitScript \u003c/span\u003eto manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" 
pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand databricks_job.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003edata to get the smallest node type for\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003ethat fits search criteria, like amount of RAM or number of cores.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n*\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" 
pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e parameter in\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand other resources.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getZones \" pulumi-lang-dotnet=\" databricks.getZones \" pulumi-lang-go=\" getZones \" pulumi-lang-python=\" get_zones \" pulumi-lang-yaml=\" databricks.getZones \" pulumi-lang-java=\" databricks.getZones \"\u003e databricks.getZones \u003c/span\u003edata to fetch all available AWS availability zones on your workspace on AWS.\n\n## Import\n\nThe resource cluster can be imported using cluster id.\n\n```bash\nterraform import databricks_cluster.this \u003ccluster-id\u003e\n```\n\n","properties":{"applyPolicyDefaultValues":{"type":"boolean","description":"Whether to use policy default values for missing cluster attributes.\n"},"autoscale":{"$ref":"#/types/databricks:index/ClusterAutoscale:ClusterAutoscale"},"autoterminationMinutes":{"type":"integer","description":"Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to \u003cspan pulumi-lang-nodejs=\"`60`\" pulumi-lang-dotnet=\"`60`\" pulumi-lang-go=\"`60`\" pulumi-lang-python=\"`60`\" pulumi-lang-yaml=\"`60`\" pulumi-lang-java=\"`60`\"\u003e`60`\u003c/span\u003e.  *We highly recommend having this setting present for Interactive/BI clusters.*\n"},"awsAttributes":{"$ref":"#/types/databricks:index/ClusterAwsAttributes:ClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/ClusterAzureAttributes:ClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/ClusterClusterLogConf:ClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterClusterMountInfo:ClusterClusterMountInfo"}},"clusterName":{"type":"string","description":"Cluster name, which doesn't have to be unique. 
If not specified at creation, the cluster name will be an empty string.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"should have tag `ResourceClass` set to value `Serverless`\n\nFor example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst clusterWithTableAccessControl = new databricks.Cluster(\"cluster_with_table_access_control\", {\n    clusterName: \"Shared High-Concurrency\",\n    sparkVersion: latestLts.id,\n    nodeTypeId: smallest.id,\n    autoterminationMinutes: 20,\n    sparkConf: {\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    customTags: {\n        ResourceClass: \"Serverless\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncluster_with_table_access_control = databricks.Cluster(\"cluster_with_table_access_control\",\n    cluster_name=\"Shared High-Concurrency\",\n    spark_version=latest_lts[\"id\"],\n    node_type_id=smallest[\"id\"],\n    autotermination_minutes=20,\n    spark_conf={\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    custom_tags={\n        \"ResourceClass\": \"Serverless\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var clusterWithTableAccessControl = new Databricks.Cluster(\"cluster_with_table_access_control\", new()\n    {\n        ClusterName = \"Shared High-Concurrency\",\n        SparkVersion = latestLts.Id,\n        NodeTypeId = smallest.Id,\n        AutoterminationMinutes = 20,\n        SparkConf = \n        {\n            { \"spark.databricks.repl.allowedLanguages\", \"python,sql\" },\n            { \"spark.databricks.cluster.profile\", \"serverless\" },\n        },\n        CustomTags = \n        {\n            { \"ResourceClass\", \"Serverless\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCluster(ctx, \"cluster_with_table_access_control\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared High-Concurrency\"),\n\t\t\tSparkVersion:           pulumi.Any(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.Any(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.repl.allowedLanguages\": pulumi.String(\"python,sql\"),\n\t\t\t\t\"spark.databricks.cluster.profile\":       pulumi.String(\"serverless\"),\n\t\t\t},\n\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\"ResourceClass\": pulumi.String(\"Serverless\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public 
static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var clusterWithTableAccessControl = new Cluster(\"clusterWithTableAccessControl\", ClusterArgs.builder()\n            .clusterName(\"Shared High-Concurrency\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.repl.allowedLanguages\", \"python,sql\"),\n                Map.entry(\"spark.databricks.cluster.profile\", \"serverless\")\n            ))\n            .customTags(Map.of(\"ResourceClass\", \"Serverless\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  clusterWithTableAccessControl:\n    type: databricks:Cluster\n    name: cluster_with_table_access_control\n    properties:\n      clusterName: Shared High-Concurrency\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      sparkConf:\n        spark.databricks.repl.allowedLanguages: python,sql\n        spark.databricks.cluster.profile: serverless\n      customTags:\n        ResourceClass: Serverless\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"dataSecurityMode":{"type":"string","description":"Select the security features of the cluster (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#data_security_mode) for full list of values). [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, default security features are enabled. To disable security features use `NONE` or legacy mode `NO_ISOLATION`.  If \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e is specified, then the following options are available:\n* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.\n* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.\n* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.\n"},"defaultTags":{"type":"object","additionalProperties":{"type":"string"},"description":"(map) Tags that are added by Databricks by default, regardless of any \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e that may have been added. 
These include: Vendor: Databricks, Creator: \u003cusername_of_creator\u003e, ClusterName: \u003cname_of_cluster\u003e, ClusterId: \u003cid_of_cluster\u003e, Name: \u003cDatabricks internal use\u003e, and any workspace and pool tags.\n"},"dockerImage":{"$ref":"#/types/databricks:index/ClusterDockerImage:ClusterDockerImage"},"driverInstancePoolId":{"type":"string","description":"similar to \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e, but for driver node. If omitted, and \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, then the driver will be allocated from that pool.\n"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterDriverNodeTypeFlexibility:ClusterDriverNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`driverNodeTypeId`\" pulumi-lang-dotnet=\"`DriverNodeTypeId`\" pulumi-lang-go=\"`driverNodeTypeId`\" pulumi-lang-python=\"`driver_node_type_id`\" pulumi-lang-yaml=\"`driverNodeTypeId`\" pulumi-lang-java=\"`driverNodeTypeId`\"\u003e`driver_node_type_id`\u003c/span\u003e isn't available.\n"},"driverNodeTypeId":{"type":"string","description":"The node type of the Spark driver. This field is optional; if unset, API will set the driver node type to the same value as \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e defined above.\n"},"enableElasticDisk":{"type":"boolean","description":"If you don't want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster's Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance's local storage). To scale down EBS usage, make sure you have \u003cspan pulumi-lang-nodejs=\"`autoterminationMinutes`\" pulumi-lang-dotnet=\"`AutoterminationMinutes`\" pulumi-lang-go=\"`autoterminationMinutes`\" pulumi-lang-python=\"`autotermination_minutes`\" pulumi-lang-yaml=\"`autoterminationMinutes`\" pulumi-lang-java=\"`autoterminationMinutes`\"\u003e`autotermination_minutes`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1).\n"},"enableLocalDiskEncryption":{"type":"boolean","description":"Some instance types you use to run clusters may have locally attached disks. 
Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster's local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. *Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access.*\n"},"gcpAttributes":{"$ref":"#/types/databricks:index/ClusterGcpAttributes:ClusterGcpAttributes"},"idempotencyToken":{"type":"string","description":"An optional token to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the existing running cluster's ID instead. If you specify the idempotency token, upon failure, you can retry until the request succeeds. Databricks platform guarantees to launch exactly one cluster with that idempotency token. This token should have at most 64 characters.\n"},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterInitScript:ClusterInitScript"}},"instancePoolId":{"type":"string","description":"To reduce cluster start time, you can attach a cluster to a predefined pool of idle instances. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster's request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to `TERMINATED`, the instances it used are returned to the pool and reused by a different cluster.\n"},"isPinned":{"type":"boolean","description":"boolean value specifying if the cluster is pinned (not pinned by default). You must be a Databricks administrator to use this.  
The pinned clusters' maximum number is [limited to 100](https://docs.databricks.com/clusters/clusters-manage.html#pin-a-cluster), so \u003cspan pulumi-lang-nodejs=\"`apply`\" pulumi-lang-dotnet=\"`Apply`\" pulumi-lang-go=\"`apply`\" pulumi-lang-python=\"`apply`\" pulumi-lang-yaml=\"`apply`\" pulumi-lang-java=\"`apply`\"\u003e`apply`\u003c/span\u003e may fail if you have more than that (this number may change over time, so check Databricks documentation for actual number).\n"},"isSingleNode":{"type":"boolean","description":"When set to true, Databricks will automatically set single node related \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e.\n"},"kind":{"type":"string","description":"The kind of compute described by this compute specification.  Possible values (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#kind) for full list): `CLASSIC_PREVIEW` (if corresponding public preview is enabled).\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterLibrary:ClusterLibrary"}},"noWait":{"type":"boolean","description":"If true, the provider will not wait for the cluster to reach `RUNNING` state when creating the cluster, allowing cluster creation and library installation to continue asynchronously. Defaults to false (the provider will wait for cluster creation and library installation to succeed).\n"},"nodeTypeId":{"type":"string","description":"Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003eid. If \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, this field is not needed.\n"},"numWorkers":{"type":"integer","description":"Number of worker nodes that this cluster should have. 
A cluster has one Spark driver and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e executors for a total of \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e + 1 Spark nodes.\n"},"policyId":{"type":"string","description":"Identifier of Cluster Policy to validate cluster and preset certain defaults. *The primary use for cluster policies is to allow users to create policy-scoped clusters via UI rather than sharing configuration for API-created clusters.* For example, when you specify \u003cspan pulumi-lang-nodejs=\"`policyId`\" pulumi-lang-dotnet=\"`PolicyId`\" pulumi-lang-go=\"`policyId`\" pulumi-lang-python=\"`policy_id`\" pulumi-lang-yaml=\"`policyId`\" pulumi-lang-java=\"`policyId`\"\u003e`policy_id`\u003c/span\u003e of [external metastore](https://docs.databricks.com/administration-guide/clusters/policies.html#external-metastore-policy) policy, you still have to fill in relevant keys for \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e.  If relevant fields aren't filled in, then it will cause the configuration drift detected on each plan/apply, and Pulumi will try to apply the detected changes.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterProviderConfig:ClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string","description":"The type of runtime engine to use. If not specified, the runtime engine type is inferred based on the\u003cspan pulumi-lang-nodejs=\" sparkVersion \" pulumi-lang-dotnet=\" SparkVersion \" pulumi-lang-go=\" sparkVersion \" pulumi-lang-python=\" spark_version \" pulumi-lang-yaml=\" sparkVersion \" pulumi-lang-java=\" sparkVersion \"\u003e spark_version \u003c/span\u003evalue. Allowed values include: `PHOTON`, `STANDARD`.\n"},"singleUserName":{"type":"string","description":"The optional user name of the user (or group name if \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e if specified) to assign to an interactive cluster. 
This field is required when using \u003cspan pulumi-lang-nodejs=\"`dataSecurityMode`\" pulumi-lang-dotnet=\"`DataSecurityMode`\" pulumi-lang-go=\"`dataSecurityMode`\" pulumi-lang-python=\"`data_security_mode`\" pulumi-lang-yaml=\"`dataSecurityMode`\" pulumi-lang-java=\"`dataSecurityMode`\"\u003e`data_security_mode`\u003c/span\u003e set to `SINGLE_USER` or AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).\n"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"},"description":"should have the following items:\n* `spark.databricks.repl.allowedLanguages` set to a list of supported languages, for example: `python,sql`, or `python,sql,r`.  Scala is not supported!\n* `spark.databricks.cluster.profile` set to \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e\n"},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.\n"},"sparkVersion":{"type":"string","description":"[Runtime version](https://docs.databricks.com/runtime/index.html) of the cluster. Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003eid.  We advise using Cluster Policies to restrict the list of versions for simplicity while maintaining enough control.\n"},"sshPublicKeys":{"type":"array","items":{"type":"string"},"description":"SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to log in with the user name ubuntu on port 2200. You can specify up to 10 keys.\n"},"state":{"type":"string","description":"(string) State of the cluster.\n"},"totalInitialRemoteDiskSize":{"type":"integer"},"url":{"type":"string"},"useMlRuntime":{"type":"boolean","description":"Whether the ML runtime should be selected or not.  
Actual runtime is determined by \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e (DBR release), this field \u003cspan pulumi-lang-nodejs=\"`useMlRuntime`\" pulumi-lang-dotnet=\"`UseMlRuntime`\" pulumi-lang-go=\"`useMlRuntime`\" pulumi-lang-python=\"`use_ml_runtime`\" pulumi-lang-yaml=\"`useMlRuntime`\" pulumi-lang-java=\"`useMlRuntime`\"\u003e`use_ml_runtime`\u003c/span\u003e, and whether \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e is a GPU node or not.\n"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterWorkerNodeTypeFlexibility:ClusterWorkerNodeTypeFlexibility","description":"a block describing the alternative worker node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n"},"workloadType":{"$ref":"#/types/databricks:index/ClusterWorkloadType:ClusterWorkloadType"}},"required":["clusterId","defaultTags","driverInstancePoolId","driverNodeTypeId","enableElasticDisk","enableLocalDiskEncryption","nodeTypeId","sparkVersion","state","url"],"inputProperties":{"applyPolicyDefaultValues":{"type":"boolean","description":"Whether to use policy default values for missing cluster attributes.\n"},"autoscale":{"$ref":"#/types/databricks:index/ClusterAutoscale:ClusterAutoscale"},"autoterminationMinutes":{"type":"integer","description":"Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to \u003cspan pulumi-lang-nodejs=\"`60`\" pulumi-lang-dotnet=\"`60`\" pulumi-lang-go=\"`60`\" pulumi-lang-python=\"`60`\" pulumi-lang-yaml=\"`60`\" pulumi-lang-java=\"`60`\"\u003e`60`\u003c/span\u003e.  *We highly recommend having this setting present for Interactive/BI clusters.*\n"},"awsAttributes":{"$ref":"#/types/databricks:index/ClusterAwsAttributes:ClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/ClusterAzureAttributes:ClusterAzureAttributes"},"clusterLogConf":{"$ref":"#/types/databricks:index/ClusterClusterLogConf:ClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterClusterMountInfo:ClusterClusterMountInfo"}},"clusterName":{"type":"string","description":"Cluster name, which doesn't have to be unique. 
If not specified at creation, the cluster name will be an empty string.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"should have tag `ResourceClass` set to value `Serverless`\n\nFor example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst clusterWithTableAccessControl = new databricks.Cluster(\"cluster_with_table_access_control\", {\n    clusterName: \"Shared High-Concurrency\",\n    sparkVersion: latestLts.id,\n    nodeTypeId: smallest.id,\n    autoterminationMinutes: 20,\n    sparkConf: {\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    customTags: {\n        ResourceClass: \"Serverless\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncluster_with_table_access_control = databricks.Cluster(\"cluster_with_table_access_control\",\n    cluster_name=\"Shared High-Concurrency\",\n    spark_version=latest_lts[\"id\"],\n    node_type_id=smallest[\"id\"],\n    autotermination_minutes=20,\n    spark_conf={\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    custom_tags={\n        \"ResourceClass\": \"Serverless\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var clusterWithTableAccessControl = new Databricks.Cluster(\"cluster_with_table_access_control\", new()\n    {\n        ClusterName = \"Shared High-Concurrency\",\n        SparkVersion = latestLts.Id,\n        NodeTypeId = smallest.Id,\n        AutoterminationMinutes = 20,\n        SparkConf = \n        {\n            { \"spark.databricks.repl.allowedLanguages\", \"python,sql\" },\n            { \"spark.databricks.cluster.profile\", \"serverless\" },\n        },\n        CustomTags = \n        {\n            { \"ResourceClass\", \"Serverless\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCluster(ctx, \"cluster_with_table_access_control\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared High-Concurrency\"),\n\t\t\tSparkVersion:           pulumi.Any(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.Any(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.repl.allowedLanguages\": pulumi.String(\"python,sql\"),\n\t\t\t\t\"spark.databricks.cluster.profile\":       pulumi.String(\"serverless\"),\n\t\t\t},\n\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\"ResourceClass\": pulumi.String(\"Serverless\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public 
static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var clusterWithTableAccessControl = new Cluster(\"clusterWithTableAccessControl\", ClusterArgs.builder()\n            .clusterName(\"Shared High-Concurrency\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.repl.allowedLanguages\", \"python,sql\"),\n                Map.entry(\"spark.databricks.cluster.profile\", \"serverless\")\n            ))\n            .customTags(Map.of(\"ResourceClass\", \"Serverless\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  clusterWithTableAccessControl:\n    type: databricks:Cluster\n    name: cluster_with_table_access_control\n    properties:\n      clusterName: Shared High-Concurrency\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      sparkConf:\n        spark.databricks.repl.allowedLanguages: python,sql\n        spark.databricks.cluster.profile: serverless\n      customTags:\n        ResourceClass: Serverless\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"dataSecurityMode":{"type":"string","description":"Select the security features of the cluster (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#data_security_mode) for full list of values). [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, default security features are enabled. To disable security features use `NONE` or legacy mode `NO_ISOLATION`.  If \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e is specified, then the following options are available:\n* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.\n* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.\n* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.\n"},"dockerImage":{"$ref":"#/types/databricks:index/ClusterDockerImage:ClusterDockerImage"},"driverInstancePoolId":{"type":"string","description":"similar to \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e, but for driver node. 
If omitted, and \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, then the driver will be allocated from that pool.\n"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterDriverNodeTypeFlexibility:ClusterDriverNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`driverNodeTypeId`\" pulumi-lang-dotnet=\"`DriverNodeTypeId`\" pulumi-lang-go=\"`driverNodeTypeId`\" pulumi-lang-python=\"`driver_node_type_id`\" pulumi-lang-yaml=\"`driverNodeTypeId`\" pulumi-lang-java=\"`driverNodeTypeId`\"\u003e`driver_node_type_id`\u003c/span\u003e isn't available.\n"},"driverNodeTypeId":{"type":"string","description":"The node type of the Spark driver. This field is optional; if unset, API will set the driver node type to the same value as \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e defined above.\n"},"enableElasticDisk":{"type":"boolean","description":"If you don't want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster's Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance's local storage). To scale down EBS usage, make sure you have \u003cspan pulumi-lang-nodejs=\"`autoterminationMinutes`\" pulumi-lang-dotnet=\"`AutoterminationMinutes`\" pulumi-lang-go=\"`autoterminationMinutes`\" pulumi-lang-python=\"`autotermination_minutes`\" pulumi-lang-yaml=\"`autoterminationMinutes`\" pulumi-lang-java=\"`autoterminationMinutes`\"\u003e`autotermination_minutes`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1).\n"},"enableLocalDiskEncryption":{"type":"boolean","description":"Some instance types you use to run clusters may have locally attached disks. Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster's local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. 
*Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access.*\n"},"gcpAttributes":{"$ref":"#/types/databricks:index/ClusterGcpAttributes:ClusterGcpAttributes"},"idempotencyToken":{"type":"string","description":"An optional token to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the existing running cluster's ID instead. If you specify the idempotency token, upon failure, you can retry until the request succeeds. Databricks platform guarantees to launch exactly one cluster with that idempotency token. This token should have at most 64 characters.\n","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterInitScript:ClusterInitScript"}},"instancePoolId":{"type":"string","description":"To reduce cluster start time, you can attach a cluster to a predefined pool of idle instances. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster's request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to `TERMINATED`, the instances it used are returned to the pool and reused by a different cluster.\n"},"isPinned":{"type":"boolean","description":"boolean value specifying if the cluster is pinned (not pinned by default). You must be a Databricks administrator to use this.  The pinned clusters' maximum number is [limited to 100](https://docs.databricks.com/clusters/clusters-manage.html#pin-a-cluster), so \u003cspan pulumi-lang-nodejs=\"`apply`\" pulumi-lang-dotnet=\"`Apply`\" pulumi-lang-go=\"`apply`\" pulumi-lang-python=\"`apply`\" pulumi-lang-yaml=\"`apply`\" pulumi-lang-java=\"`apply`\"\u003e`apply`\u003c/span\u003e may fail if you have more than that (this number may change over time, so check Databricks documentation for actual number).\n"},"isSingleNode":{"type":"boolean","description":"When set to true, Databricks will automatically set single node related \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e.\n"},"kind":{"type":"string","description":"The kind of compute described by this compute specification.  
Possible values (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#kind) for full list): `CLASSIC_PREVIEW` (if corresponding public preview is enabled).\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterLibrary:ClusterLibrary"}},"noWait":{"type":"boolean","description":"If true, the provider will not wait for the cluster to reach `RUNNING` state when creating the cluster, allowing cluster creation and library installation to continue asynchronously. Defaults to false (the provider will wait for cluster creation and library installation to succeed).\n"},"nodeTypeId":{"type":"string","description":"Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003eid. If \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, this field is not needed.\n"},"numWorkers":{"type":"integer","description":"Number of worker nodes that this cluster should have. A cluster has one Spark driver and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e executors for a total of \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e + 1 Spark nodes.\n"},"policyId":{"type":"string","description":"Identifier of Cluster Policy to validate cluster and preset certain defaults. *The primary use for cluster policies is to allow users to create policy-scoped clusters via UI rather than sharing configuration for API-created clusters.* For example, when you specify \u003cspan pulumi-lang-nodejs=\"`policyId`\" pulumi-lang-dotnet=\"`PolicyId`\" pulumi-lang-go=\"`policyId`\" pulumi-lang-python=\"`policy_id`\" pulumi-lang-yaml=\"`policyId`\" pulumi-lang-java=\"`policyId`\"\u003e`policy_id`\u003c/span\u003e of [external metastore](https://docs.databricks.com/administration-guide/clusters/policies.html#external-metastore-policy) policy, you still have to fill in relevant keys for \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e.  If relevant fields aren't filled in, then it will cause the configuration drift detected on each plan/apply, and Pulumi will try to apply the detected changes.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterProviderConfig:ClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string","description":"The type of runtime engine to use. 
If not specified, the runtime engine type is inferred based on the\u003cspan pulumi-lang-nodejs=\" sparkVersion \" pulumi-lang-dotnet=\" SparkVersion \" pulumi-lang-go=\" sparkVersion \" pulumi-lang-python=\" spark_version \" pulumi-lang-yaml=\" sparkVersion \" pulumi-lang-java=\" sparkVersion \"\u003e spark_version \u003c/span\u003evalue. Allowed values include: `PHOTON`, `STANDARD`.\n"},"singleUserName":{"type":"string","description":"The optional user name of the user (or group name if \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e if specified) to assign to an interactive cluster. This field is required when using \u003cspan pulumi-lang-nodejs=\"`dataSecurityMode`\" pulumi-lang-dotnet=\"`DataSecurityMode`\" pulumi-lang-go=\"`dataSecurityMode`\" pulumi-lang-python=\"`data_security_mode`\" pulumi-lang-yaml=\"`dataSecurityMode`\" pulumi-lang-java=\"`dataSecurityMode`\"\u003e`data_security_mode`\u003c/span\u003e set to `SINGLE_USER` or AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).\n"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"},"description":"should have following items:\n* `spark.databricks.repl.allowedLanguages` set to a list of supported languages, for example: `python,sql`, or `python,sql,r`.  Scala is not supported!\n* `spark.databricks.cluster.profile` set to \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e\n"},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.\n"},"sparkVersion":{"type":"string","description":"[Runtime version](https://docs.databricks.com/runtime/index.html) of the cluster. Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003eid.  We advise using Cluster Policies to restrict the list of versions for simplicity while maintaining enough control.\n"},"sshPublicKeys":{"type":"array","items":{"type":"string"},"description":"SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to login with the user name ubuntu on port 2200. You can specify up to 10 keys.\n"},"totalInitialRemoteDiskSize":{"type":"integer"},"useMlRuntime":{"type":"boolean","description":"Whenever ML runtime should be selected or not.  
Actual runtime is determined by \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e (DBR release), this field \u003cspan pulumi-lang-nodejs=\"`useMlRuntime`\" pulumi-lang-dotnet=\"`UseMlRuntime`\" pulumi-lang-go=\"`useMlRuntime`\" pulumi-lang-python=\"`use_ml_runtime`\" pulumi-lang-yaml=\"`useMlRuntime`\" pulumi-lang-java=\"`useMlRuntime`\"\u003e`use_ml_runtime`\u003c/span\u003e, and whether \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e is GPU node or not.\n"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterWorkerNodeTypeFlexibility:ClusterWorkerNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n"},"workloadType":{"$ref":"#/types/databricks:index/ClusterWorkloadType:ClusterWorkloadType"}},"requiredInputs":["sparkVersion"],"stateInputs":{"description":"Input properties used for looking up and filtering Cluster resources.\n","properties":{"applyPolicyDefaultValues":{"type":"boolean","description":"Whether to use policy default values for missing cluster attributes.\n"},"autoscale":{"$ref":"#/types/databricks:index/ClusterAutoscale:ClusterAutoscale"},"autoterminationMinutes":{"type":"integer","description":"Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to \u003cspan pulumi-lang-nodejs=\"`60`\" pulumi-lang-dotnet=\"`60`\" pulumi-lang-go=\"`60`\" pulumi-lang-python=\"`60`\" pulumi-lang-yaml=\"`60`\" pulumi-lang-java=\"`60`\"\u003e`60`\u003c/span\u003e.  *We highly recommend having this setting present for Interactive/BI clusters.*\n"},"awsAttributes":{"$ref":"#/types/databricks:index/ClusterAwsAttributes:ClusterAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/ClusterAzureAttributes:ClusterAzureAttributes"},"clusterId":{"type":"string"},"clusterLogConf":{"$ref":"#/types/databricks:index/ClusterClusterLogConf:ClusterClusterLogConf"},"clusterMountInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterClusterMountInfo:ClusterClusterMountInfo"}},"clusterName":{"type":"string","description":"Cluster name, which doesn't have to be unique. 
If not specified at creation, the cluster name will be an empty string.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"should have tag `ResourceClass` set to value `Serverless`\n\nFor example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst clusterWithTableAccessControl = new databricks.Cluster(\"cluster_with_table_access_control\", {\n    clusterName: \"Shared High-Concurrency\",\n    sparkVersion: latestLts.id,\n    nodeTypeId: smallest.id,\n    autoterminationMinutes: 20,\n    sparkConf: {\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    customTags: {\n        ResourceClass: \"Serverless\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncluster_with_table_access_control = databricks.Cluster(\"cluster_with_table_access_control\",\n    cluster_name=\"Shared High-Concurrency\",\n    spark_version=latest_lts[\"id\"],\n    node_type_id=smallest[\"id\"],\n    autotermination_minutes=20,\n    spark_conf={\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.cluster.profile\": \"serverless\",\n    },\n    custom_tags={\n        \"ResourceClass\": \"Serverless\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var clusterWithTableAccessControl = new Databricks.Cluster(\"cluster_with_table_access_control\", new()\n    {\n        ClusterName = \"Shared High-Concurrency\",\n        SparkVersion = latestLts.Id,\n        NodeTypeId = smallest.Id,\n        AutoterminationMinutes = 20,\n        SparkConf = \n        {\n            { \"spark.databricks.repl.allowedLanguages\", \"python,sql\" },\n            { \"spark.databricks.cluster.profile\", \"serverless\" },\n        },\n        CustomTags = \n        {\n            { \"ResourceClass\", \"Serverless\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCluster(ctx, \"cluster_with_table_access_control\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared High-Concurrency\"),\n\t\t\tSparkVersion:           pulumi.Any(latestLts.Id),\n\t\t\tNodeTypeId:             pulumi.Any(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.repl.allowedLanguages\": pulumi.String(\"python,sql\"),\n\t\t\t\t\"spark.databricks.cluster.profile\":       pulumi.String(\"serverless\"),\n\t\t\t},\n\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\"ResourceClass\": pulumi.String(\"Serverless\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public 
static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var clusterWithTableAccessControl = new Cluster(\"clusterWithTableAccessControl\", ClusterArgs.builder()\n            .clusterName(\"Shared High-Concurrency\")\n            .sparkVersion(latestLts.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(20)\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.repl.allowedLanguages\", \"python,sql\"),\n                Map.entry(\"spark.databricks.cluster.profile\", \"serverless\")\n            ))\n            .customTags(Map.of(\"ResourceClass\", \"Serverless\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  clusterWithTableAccessControl:\n    type: databricks:Cluster\n    name: cluster_with_table_access_control\n    properties:\n      clusterName: Shared High-Concurrency\n      sparkVersion: ${latestLts.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 20\n      sparkConf:\n        spark.databricks.repl.allowedLanguages: python,sql\n        spark.databricks.cluster.profile: serverless\n      customTags:\n        ResourceClass: Serverless\n```\n\u003c!--End PulumiCodeChooser --\u003e\n"},"dataSecurityMode":{"type":"string","description":"Select the security features of the cluster (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#data_security_mode) for full list of values). [Unity Catalog requires](https://docs.databricks.com/data-governance/unity-catalog/compute.html#create-clusters--sql-warehouses-with-unity-catalog-access) `SINGLE_USER` or `USER_ISOLATION` mode. `LEGACY_PASSTHROUGH` for passthrough cluster and `LEGACY_TABLE_ACL` for Table ACL cluster. If omitted, default security features are enabled. To disable security features use `NONE` or legacy mode `NO_ISOLATION`.  If \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e is specified, then the following options are available:\n* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.\n* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.\n* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.\n"},"defaultTags":{"type":"object","additionalProperties":{"type":"string"},"description":"(map) Tags that are added by Databricks by default, regardless of any \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e that may have been added. 
These include: Vendor: Databricks, Creator: \u003cusername_of_creator\u003e, ClusterName: \u003cname_of_cluster\u003e, ClusterId: \u003cid_of_cluster\u003e, Name: \u003cDatabricks internal use\u003e, and any workspace and pool tags.\n"},"dockerImage":{"$ref":"#/types/databricks:index/ClusterDockerImage:ClusterDockerImage"},"driverInstancePoolId":{"type":"string","description":"similar to \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e, but for driver node. If omitted, and \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, then the driver will be allocated from that pool.\n"},"driverNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterDriverNodeTypeFlexibility:ClusterDriverNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`driverNodeTypeId`\" pulumi-lang-dotnet=\"`DriverNodeTypeId`\" pulumi-lang-go=\"`driverNodeTypeId`\" pulumi-lang-python=\"`driver_node_type_id`\" pulumi-lang-yaml=\"`driverNodeTypeId`\" pulumi-lang-java=\"`driverNodeTypeId`\"\u003e`driver_node_type_id`\u003c/span\u003e isn't available.\n"},"driverNodeTypeId":{"type":"string","description":"The node type of the Spark driver. This field is optional; if unset, API will set the driver node type to the same value as \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e defined above.\n"},"enableElasticDisk":{"type":"boolean","description":"If you don't want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster's Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance's local storage). To scale down EBS usage, make sure you have \u003cspan pulumi-lang-nodejs=\"`autoterminationMinutes`\" pulumi-lang-dotnet=\"`AutoterminationMinutes`\" pulumi-lang-go=\"`autoterminationMinutes`\" pulumi-lang-python=\"`autotermination_minutes`\" pulumi-lang-yaml=\"`autoterminationMinutes`\" pulumi-lang-java=\"`autoterminationMinutes`\"\u003e`autotermination_minutes`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1).\n"},"enableLocalDiskEncryption":{"type":"boolean","description":"Some instance types you use to run clusters may have locally attached disks. 
Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster's local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. *Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access.*\n"},"gcpAttributes":{"$ref":"#/types/databricks:index/ClusterGcpAttributes:ClusterGcpAttributes"},"idempotencyToken":{"type":"string","description":"An optional token to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the existing running cluster's ID instead. If you specify the idempotency token, upon failure, you can retry until the request succeeds. Databricks platform guarantees to launch exactly one cluster with that idempotency token. This token should have at most 64 characters.\n","willReplaceOnChanges":true},"initScripts":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterInitScript:ClusterInitScript"}},"instancePoolId":{"type":"string","description":"To reduce cluster start time, you can attach a cluster to a predefined pool of idle instances. When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster's request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to `TERMINATED`, the instances it used are returned to the pool and reused by a different cluster.\n"},"isPinned":{"type":"boolean","description":"boolean value specifying if the cluster is pinned (not pinned by default). You must be a Databricks administrator to use this.  
The pinned clusters' maximum number is [limited to 100](https://docs.databricks.com/clusters/clusters-manage.html#pin-a-cluster), so \u003cspan pulumi-lang-nodejs=\"`apply`\" pulumi-lang-dotnet=\"`Apply`\" pulumi-lang-go=\"`apply`\" pulumi-lang-python=\"`apply`\" pulumi-lang-yaml=\"`apply`\" pulumi-lang-java=\"`apply`\"\u003e`apply`\u003c/span\u003e may fail if you have more than that (this number may change over time, so check Databricks documentation for actual number).\n"},"isSingleNode":{"type":"boolean","description":"When set to true, Databricks will automatically set single node related \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e.\n"},"kind":{"type":"string","description":"The kind of compute described by this compute specification.  Possible values (see [API docs](https://docs.databricks.com/api/workspace/clusters/create#kind) for full list): `CLASSIC_PREVIEW` (if corresponding public preview is enabled).\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterLibrary:ClusterLibrary"}},"noWait":{"type":"boolean","description":"If true, the provider will not wait for the cluster to reach `RUNNING` state when creating the cluster, allowing cluster creation and library installation to continue asynchronously. Defaults to false (the provider will wait for cluster creation and library installation to succeed).\n"},"nodeTypeId":{"type":"string","description":"Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003eid. If \u003cspan pulumi-lang-nodejs=\"`instancePoolId`\" pulumi-lang-dotnet=\"`InstancePoolId`\" pulumi-lang-go=\"`instancePoolId`\" pulumi-lang-python=\"`instance_pool_id`\" pulumi-lang-yaml=\"`instancePoolId`\" pulumi-lang-java=\"`instancePoolId`\"\u003e`instance_pool_id`\u003c/span\u003e is specified, this field is not needed.\n"},"numWorkers":{"type":"integer","description":"Number of worker nodes that this cluster should have. 
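For example, a minimal fixed-size sketch (TypeScript only; `latestLts` and `smallest` are assumed lookups via `databricks.getSparkVersion` and `databricks.getNodeType`):\n\n```typescript\nimport * as databricks from \"@pulumi/databricks\";\n\n// A fixed-size cluster: 2 workers plus 1 driver = 3 Spark nodes in total.\nconst fixedSize = new databricks.Cluster(\"fixed_size\", {\n    clusterName: \"Fixed Size\",\n    sparkVersion: latestLts.id,\n    nodeTypeId: smallest.id,\n    autoterminationMinutes: 20,\n    numWorkers: 2,\n});\n```\n\n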
A cluster has one Spark driver and \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e executors for a total of \u003cspan pulumi-lang-nodejs=\"`numWorkers`\" pulumi-lang-dotnet=\"`NumWorkers`\" pulumi-lang-go=\"`numWorkers`\" pulumi-lang-python=\"`num_workers`\" pulumi-lang-yaml=\"`numWorkers`\" pulumi-lang-java=\"`numWorkers`\"\u003e`num_workers`\u003c/span\u003e + 1 Spark nodes.\n"},"policyId":{"type":"string","description":"Identifier of Cluster Policy to validate cluster and preset certain defaults. *The primary use for cluster policies is to allow users to create policy-scoped clusters via UI rather than sharing configuration for API-created clusters.* For example, when you specify \u003cspan pulumi-lang-nodejs=\"`policyId`\" pulumi-lang-dotnet=\"`PolicyId`\" pulumi-lang-go=\"`policyId`\" pulumi-lang-python=\"`policy_id`\" pulumi-lang-yaml=\"`policyId`\" pulumi-lang-java=\"`policyId`\"\u003e`policy_id`\u003c/span\u003e of [external metastore](https://docs.databricks.com/administration-guide/clusters/policies.html#external-metastore-policy) policy, you still have to fill in relevant keys for \u003cspan pulumi-lang-nodejs=\"`sparkConf`\" pulumi-lang-dotnet=\"`SparkConf`\" pulumi-lang-go=\"`sparkConf`\" pulumi-lang-python=\"`spark_conf`\" pulumi-lang-yaml=\"`sparkConf`\" pulumi-lang-java=\"`sparkConf`\"\u003e`spark_conf`\u003c/span\u003e.  If relevant fields aren't filled in, then it will cause the configuration drift detected on each plan/apply, and Pulumi will try to apply the detected changes.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterProviderConfig:ClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteDiskThroughput":{"type":"integer"},"runtimeEngine":{"type":"string","description":"The type of runtime engine to use. If not specified, the runtime engine type is inferred based on the\u003cspan pulumi-lang-nodejs=\" sparkVersion \" pulumi-lang-dotnet=\" SparkVersion \" pulumi-lang-go=\" sparkVersion \" pulumi-lang-python=\" spark_version \" pulumi-lang-yaml=\" sparkVersion \" pulumi-lang-java=\" sparkVersion \"\u003e spark_version \u003c/span\u003evalue. Allowed values include: `PHOTON`, `STANDARD`.\n"},"singleUserName":{"type":"string","description":"The optional user name of the user (or group name if \u003cspan pulumi-lang-nodejs=\"`kind`\" pulumi-lang-dotnet=\"`Kind`\" pulumi-lang-go=\"`kind`\" pulumi-lang-python=\"`kind`\" pulumi-lang-yaml=\"`kind`\" pulumi-lang-java=\"`kind`\"\u003e`kind`\u003c/span\u003e if specified) to assign to an interactive cluster. 
This field is required when using \u003cspan pulumi-lang-nodejs=\"`dataSecurityMode`\" pulumi-lang-dotnet=\"`DataSecurityMode`\" pulumi-lang-go=\"`dataSecurityMode`\" pulumi-lang-python=\"`data_security_mode`\" pulumi-lang-yaml=\"`dataSecurityMode`\" pulumi-lang-java=\"`dataSecurityMode`\"\u003e`data_security_mode`\u003c/span\u003e set to `SINGLE_USER` or AAD Passthrough for Azure Data Lake Storage (ADLS) with a single-user cluster (i.e., not high-concurrency clusters).\n"},"sparkConf":{"type":"object","additionalProperties":{"type":"string"},"description":"should have following items:\n* `spark.databricks.repl.allowedLanguages` set to a list of supported languages, for example: `python,sql`, or `python,sql,r`.  Scala is not supported!\n* `spark.databricks.cluster.profile` set to \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e\n"},"sparkEnvVars":{"type":"object","additionalProperties":{"type":"string"},"description":"Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers.\n"},"sparkVersion":{"type":"string","description":"[Runtime version](https://docs.databricks.com/runtime/index.html) of the cluster. Any supported\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003eid.  We advise using Cluster Policies to restrict the list of versions for simplicity while maintaining enough control.\n"},"sshPublicKeys":{"type":"array","items":{"type":"string"},"description":"SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to login with the user name ubuntu on port 2200. You can specify up to 10 keys.\n"},"state":{"type":"string","description":"(string) State of the cluster.\n"},"totalInitialRemoteDiskSize":{"type":"integer"},"url":{"type":"string"},"useMlRuntime":{"type":"boolean","description":"Whenever ML runtime should be selected or not.  
Actual runtime is determined by \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e (DBR release), this field \u003cspan pulumi-lang-nodejs=\"`useMlRuntime`\" pulumi-lang-dotnet=\"`UseMlRuntime`\" pulumi-lang-go=\"`useMlRuntime`\" pulumi-lang-python=\"`use_ml_runtime`\" pulumi-lang-yaml=\"`useMlRuntime`\" pulumi-lang-java=\"`useMlRuntime`\"\u003e`use_ml_runtime`\u003c/span\u003e, and whether \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e is GPU node or not.\n"},"workerNodeTypeFlexibility":{"$ref":"#/types/databricks:index/ClusterWorkerNodeTypeFlexibility:ClusterWorkerNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n"},"workloadType":{"$ref":"#/types/databricks:index/ClusterWorkloadType:ClusterWorkloadType"}},"type":"object"}},"databricks:index/clusterPolicy:ClusterPolicy":{"description":"This resource creates a cluster policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies. Admin users also have access to all policies.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nCluster policies let you:\n\n* Limit users to create clusters with prescribed settings.\n* Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).\n* Control cost by limiting per cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).\n\nCluster policy permissions limit which policies a user can select in the Policy drop-down when the user creates a cluster:\n\n* If no policies have been created in the workspace, the Policy drop-down does not display.\n* A user who has cluster create permission can select the `Free form` policy and create fully-configurable clusters.\n* A user who has both cluster create permission and access to cluster policies can select the Free form policy and policies they have access to.\n* A user that has access to only cluster policies, can select the policies they have access to.\n\n## Example Usage\n\nLet us take a look at an example of how you can manage two teams: Marketing and Data Engineering. In the following scenario we want the marketing team to have a really good query experience, so we enabled delta cache for them. On the other hand we want the data engineering team to be able to utilize bigger clusters so we increased the dbus per hour that they can spend. This strategy allows your marketing users and data engineering users to use Databricks in a self service manner but have a different experience in regards to security and performance. 
And down the line if you need to add more global settings you can propagate them through the \"base cluster policy\".\n\n`modules/base-cluster-policy/main.tf` could look like:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst config = new pulumi.Config();\n// Team that performs the work\nconst team = config.requireObject\u003cany\u003e(\"team\");\n// Cluster policy overrides\nconst policyOverrides = config.requireObject\u003cany\u003e(\"policyOverrides\");\nconst defaultPolicy = {\n    dbus_per_hour: {\n        type: \"range\",\n        maxValue: 10,\n    },\n    autotermination_minutes: {\n        type: \"fixed\",\n        value: 20,\n        hidden: true,\n    },\n    \"custom_tags.Team\": {\n        type: \"fixed\",\n        value: team,\n    },\n};\nconst fairUse = new databricks.ClusterPolicy(\"fair_use\", {\n    name: `${team} cluster policy`,\n    definition: JSON.stringify(std.merge({\n        input: [\n            defaultPolicy,\n            policyOverrides,\n        ],\n    }).then(invoke =\u003e invoke.result)),\n    libraries: [\n        {\n            pypi: {\n                \"package\": \"databricks-sdk==0.12.0\",\n            },\n        },\n        {\n            maven: {\n                coordinates: \"com.oracle.database.jdbc:ojdbc8:XXXX\",\n            },\n        },\n    ],\n});\nconst canUseClusterPolicyinstanceProfile = new databricks.Permissions(\"can_use_cluster_policyinstance_profile\", {\n    clusterPolicyId: fairUse.id,\n    accessControls: [{\n        groupName: team,\n        permissionLevel: \"CAN_USE\",\n    }],\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nconfig = pulumi.Config()\n# Team that performs the work\nteam = config.require_object(\"team\")\n# Cluster policy overrides\npolicy_overrides = config.require_object(\"policyOverrides\")\ndefault_policy = {\n    \"dbus_per_hour\": {\n        \"type\": \"range\",\n        \"maxValue\": 10,\n    },\n    \"autotermination_minutes\": {\n        \"type\": \"fixed\",\n        \"value\": 20,\n        \"hidden\": True,\n    },\n    \"custom_tags.Team\": {\n        \"type\": \"fixed\",\n        \"value\": team,\n    },\n}\nfair_use = databricks.ClusterPolicy(\"fair_use\",\n    name=f\"{team} cluster policy\",\n    definition=json.dumps(std.merge(input=[\n        default_policy,\n        policy_overrides,\n    ]).result),\n    libraries=[\n        {\n            \"pypi\": {\n                \"package\": \"databricks-sdk==0.12.0\",\n            },\n        },\n        {\n            \"maven\": {\n                \"coordinates\": \"com.oracle.database.jdbc:ojdbc8:XXXX\",\n            },\n        },\n    ])\ncan_use_cluster_policyinstance_profile = databricks.Permissions(\"can_use_cluster_policyinstance_profile\",\n    cluster_policy_id=fair_use.id,\n    access_controls=[{\n        \"group_name\": team,\n        \"permission_level\": \"CAN_USE\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Team that performs the work\n    var team = config.RequireObject\u003cdynamic\u003e(\"team\");\n    // Cluster policy overrides\n    var policyOverrides = 
config.RequireObject\u003cdynamic\u003e(\"policyOverrides\");\n    var defaultPolicy = \n    {\n        { \"dbus_per_hour\", \n        {\n            { \"type\", \"range\" },\n            { \"maxValue\", 10 },\n        } },\n        { \"autotermination_minutes\", \n        {\n            { \"type\", \"fixed\" },\n            { \"value\", 20 },\n            { \"hidden\", true },\n        } },\n        { \"custom_tags.Team\", \n        {\n            { \"type\", \"fixed\" },\n            { \"value\", team },\n        } },\n    };\n\n    var fairUse = new Databricks.ClusterPolicy(\"fair_use\", new()\n    {\n        Name = $\"{team} cluster policy\",\n        Definition = JsonSerializer.Serialize(Std.Merge.Invoke(new()\n        {\n            Input = new[]\n            {\n                defaultPolicy,\n                policyOverrides,\n            },\n        }).Apply(invoke =\u003e invoke.Result)),\n        Libraries = new[]\n        {\n            new Databricks.Inputs.ClusterPolicyLibraryArgs\n            {\n                Pypi = new Databricks.Inputs.ClusterPolicyLibraryPypiArgs\n                {\n                    Package = \"databricks-sdk==0.12.0\",\n                },\n            },\n            new Databricks.Inputs.ClusterPolicyLibraryArgs\n            {\n                Maven = new Databricks.Inputs.ClusterPolicyLibraryMavenArgs\n                {\n                    Coordinates = \"com.oracle.database.jdbc:ojdbc8:XXXX\",\n                },\n            },\n        },\n    });\n\n    var canUseClusterPolicyinstanceProfile = new Databricks.Permissions(\"can_use_cluster_policyinstance_profile\", new()\n    {\n        ClusterPolicyId = fairUse.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = team,\n                PermissionLevel = \"CAN_USE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Team that performs the work\n\t\tteam := cfg.RequireObject(\"team\")\n\t\t// Cluster policy overrides\n\t\tpolicyOverrides := cfg.RequireObject(\"policyOverrides\")\n\t\tdefaultPolicy := map[string]interface{}{\n\t\t\t\"dbus_per_hour\": map[string]interface{}{\n\t\t\t\t\"type\":     \"range\",\n\t\t\t\t\"maxValue\": 10,\n\t\t\t},\n\t\t\t\"autotermination_minutes\": map[string]interface{}{\n\t\t\t\t\"type\":   \"fixed\",\n\t\t\t\t\"value\":  20,\n\t\t\t\t\"hidden\": true,\n\t\t\t},\n\t\t\t\"custom_tags.Team\": map[string]interface{}{\n\t\t\t\t\"type\":  \"fixed\",\n\t\t\t\t\"value\": team,\n\t\t\t},\n\t\t}\n\t\ttmpJSON0, err := json.Marshal(std.Merge(ctx, \u0026std.MergeArgs{\n\t\t\tInput: []interface{}{\n\t\t\t\tdefaultPolicy,\n\t\t\t\tpolicyOverrides,\n\t\t\t},\n\t\t}, nil).Result)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\tfairUse, err := databricks.NewClusterPolicy(ctx, \"fair_use\", \u0026databricks.ClusterPolicyArgs{\n\t\t\tName:       pulumi.Sprintf(\"%v cluster policy\", team),\n\t\t\tDefinition: pulumi.String(json0),\n\t\t\tLibraries: databricks.ClusterPolicyLibraryArray{\n\t\t\t\t\u0026databricks.ClusterPolicyLibraryArgs{\n\t\t\t\t\tPypi: 
\u0026databricks.ClusterPolicyLibraryPypiArgs{\n\t\t\t\t\t\tPackage: pulumi.String(\"databricks-sdk==0.12.0\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.ClusterPolicyLibraryArgs{\n\t\t\t\t\tMaven: \u0026databricks.ClusterPolicyLibraryMavenArgs{\n\t\t\t\t\t\tCoordinates: pulumi.String(\"com.oracle.database.jdbc:ojdbc8:XXXX\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"can_use_cluster_policyinstance_profile\", \u0026databricks.PermissionsArgs{\n\t\t\tClusterPolicyId: fairUse.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.Any(team),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ClusterPolicy;\nimport com.pulumi.databricks.ClusterPolicyArgs;\nimport com.pulumi.databricks.inputs.ClusterPolicyLibraryArgs;\nimport com.pulumi.databricks.inputs.ClusterPolicyLibraryPypiArgs;\nimport com.pulumi.databricks.inputs.ClusterPolicyLibraryMavenArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.MergeArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var team = config.get(\"team\");\n        final var policyOverrides = config.get(\"policyOverrides\");\n        final var defaultPolicy = Map.ofEntries(\n            Map.entry(\"dbus_per_hour\", Map.ofEntries(\n                Map.entry(\"type\", \"range\"),\n                Map.entry(\"maxValue\", 10)\n            )),\n            Map.entry(\"autotermination_minutes\", Map.ofEntries(\n                Map.entry(\"type\", \"fixed\"),\n                Map.entry(\"value\", 20),\n                Map.entry(\"hidden\", true)\n            )),\n            Map.entry(\"custom_tags.Team\", Map.ofEntries(\n                Map.entry(\"type\", \"fixed\"),\n                Map.entry(\"value\", team)\n            ))\n        );\n\n        var fairUse = new ClusterPolicy(\"fairUse\", ClusterPolicyArgs.builder()\n            .name(String.format(\"%s cluster policy\", team))\n            .definition(serializeJson(\n                StdFunctions.merge(MergeArgs.builder()\n                    .input(                    \n                        defaultPolicy,\n                        policyOverrides)\n                    .build()).result()))\n            .libraries(            \n                ClusterPolicyLibraryArgs.builder()\n                    .pypi(ClusterPolicyLibraryPypiArgs.builder()\n                        .package_(\"databricks-sdk==0.12.0\")\n                        .build())\n                    .build(),\n                ClusterPolicyLibraryArgs.builder()\n                    .maven(ClusterPolicyLibraryMavenArgs.builder()\n           
             .coordinates(\"com.oracle.database.jdbc:ojdbc8:XXXX\")\n                        .build())\n                    .build())\n            .build());\n\n        var canUseClusterPolicyinstanceProfile = new Permissions(\"canUseClusterPolicyinstanceProfile\", PermissionsArgs.builder()\n            .clusterPolicyId(fairUse.id())\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .groupName(team)\n                .permissionLevel(\"CAN_USE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  team:\n    type: dynamic\n  policyOverrides:\n    type: dynamic\nresources:\n  fairUse:\n    type: databricks:ClusterPolicy\n    name: fair_use\n    properties:\n      name: ${team} cluster policy\n      definition:\n        fn::toJSON:\n          fn::invoke:\n            function: std:merge\n            arguments:\n              input:\n                - ${defaultPolicy}\n                - ${policyOverrides}\n            return: result\n      libraries:\n        - pypi:\n            package: databricks-sdk==0.12.0\n        - maven:\n            coordinates: com.oracle.database.jdbc:ojdbc8:XXXX\n  canUseClusterPolicyinstanceProfile:\n    type: databricks:Permissions\n    name: can_use_cluster_policyinstance_profile\n    properties:\n      clusterPolicyId: ${fairUse.id}\n      accessControls:\n        - groupName: ${team}\n          permissionLevel: CAN_USE\nvariables:\n  defaultPolicy:\n    dbus_per_hour:\n      type: range\n      maxValue: 10\n    autotermination_minutes:\n      type: fixed\n      value: 20\n      hidden: true\n    custom_tags.Team:\n      type: fixed\n      value: ${team}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nAnd custom instances of that base policy module for our marketing and data engineering teams would look like:\n\n### Overriding the built-in cluster policies\n\nYou can override built-in cluster policies by creating a \u003cspan pulumi-lang-nodejs=\"`databricks.ClusterPolicy`\" pulumi-lang-dotnet=\"`databricks.ClusterPolicy`\" pulumi-lang-go=\"`ClusterPolicy`\" pulumi-lang-python=\"`ClusterPolicy`\" pulumi-lang-yaml=\"`databricks.ClusterPolicy`\" pulumi-lang-java=\"`databricks.ClusterPolicy`\"\u003e`databricks.ClusterPolicy`\u003c/span\u003e resource with following attributes:\n\n* \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e - the name of the built-in cluster policy.\n* \u003cspan pulumi-lang-nodejs=\"`policyFamilyId`\" pulumi-lang-dotnet=\"`PolicyFamilyId`\" pulumi-lang-go=\"`policyFamilyId`\" pulumi-lang-python=\"`policy_family_id`\" pulumi-lang-yaml=\"`policyFamilyId`\" pulumi-lang-java=\"`policyFamilyId`\"\u003e`policy_family_id`\u003c/span\u003e - the ID of the cluster policy family used for built-in cluster policy.\n* \u003cspan pulumi-lang-nodejs=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-dotnet=\"`PolicyFamilyDefinitionOverrides`\" pulumi-lang-go=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-python=\"`policy_family_definition_overrides`\" pulumi-lang-yaml=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-java=\"`policyFamilyDefinitionOverrides`\"\u003e`policy_family_definition_overrides`\u003c/span\u003e - settings to override in the built-in cluster policy.\n\nYou can obtain the list of defined cluster policies families using the `databricks policy-families list` command of the new [Databricks 
CLI](https://docs.databricks.com/en/dev-tools/cli/index.html), or via [list policy families](https://docs.databricks.com/api/workspace/policyfamilies/list) REST API.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst personalVmOverride = {\n    autotermination_minutes: {\n        type: \"fixed\",\n        value: 220,\n        hidden: true,\n    },\n    \"custom_tags.Team\": {\n        type: \"fixed\",\n        value: team,\n    },\n};\nconst personalVm = new databricks.ClusterPolicy(\"personal_vm\", {\n    policyFamilyId: \"personal-vm\",\n    policyFamilyDefinitionOverrides: JSON.stringify(personalVmOverride),\n    name: \"Personal Compute\",\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\npersonal_vm_override = {\n    \"autotermination_minutes\": {\n        \"type\": \"fixed\",\n        \"value\": 220,\n        \"hidden\": True,\n    },\n    \"custom_tags.Team\": {\n        \"type\": \"fixed\",\n        \"value\": team,\n    },\n}\npersonal_vm = databricks.ClusterPolicy(\"personal_vm\",\n    policy_family_id=\"personal-vm\",\n    policy_family_definition_overrides=json.dumps(personal_vm_override),\n    name=\"Personal Compute\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var personalVmOverride = \n    {\n        { \"autotermination_minutes\", \n        {\n            { \"type\", \"fixed\" },\n            { \"value\", 220 },\n            { \"hidden\", true },\n        } },\n        { \"custom_tags.Team\", \n        {\n            { \"type\", \"fixed\" },\n            { \"value\", team },\n        } },\n    };\n\n    var personalVm = new Databricks.ClusterPolicy(\"personal_vm\", new()\n    {\n        PolicyFamilyId = \"personal-vm\",\n        PolicyFamilyDefinitionOverrides = JsonSerializer.Serialize(personalVmOverride),\n        Name = \"Personal Compute\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tpersonalVmOverride := map[string]interface{}{\n\t\t\t\"autotermination_minutes\": map[string]interface{}{\n\t\t\t\t\"type\":   \"fixed\",\n\t\t\t\t\"value\":  220,\n\t\t\t\t\"hidden\": true,\n\t\t\t},\n\t\t\t\"custom_tags.Team\": map[string]interface{}{\n\t\t\t\t\"type\":  \"fixed\",\n\t\t\t\t\"value\": team,\n\t\t\t},\n\t\t}\n\t\ttmpJSON0, err := json.Marshal(personalVmOverride)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\t_, err = databricks.NewClusterPolicy(ctx, \"personal_vm\", \u0026databricks.ClusterPolicyArgs{\n\t\t\tPolicyFamilyId:                  pulumi.String(\"personal-vm\"),\n\t\t\tPolicyFamilyDefinitionOverrides: pulumi.String(json0),\n\t\t\tName:                            pulumi.String(\"Personal Compute\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ClusterPolicy;\nimport com.pulumi.databricks.ClusterPolicyArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var personalVmOverride = Map.ofEntries(\n            Map.entry(\"autotermination_minutes\", Map.ofEntries(\n                Map.entry(\"type\", \"fixed\"),\n                Map.entry(\"value\", 220),\n                Map.entry(\"hidden\", true)\n            )),\n            Map.entry(\"custom_tags.Team\", Map.ofEntries(\n                Map.entry(\"type\", \"fixed\"),\n                Map.entry(\"value\", team)\n            ))\n        );\n\n        var personalVm = new ClusterPolicy(\"personalVm\", ClusterPolicyArgs.builder()\n            .policyFamilyId(\"personal-vm\")\n            .policyFamilyDefinitionOverrides(serializeJson(\n                personalVmOverride))\n            .name(\"Personal Compute\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  personalVm:\n    type: databricks:ClusterPolicy\n    name: personal_vm\n    properties:\n      policyFamilyId: personal-vm\n      policyFamilyDefinitionOverrides:\n        fn::toJSON: ${personalVmOverride}\n      name: Personal Compute\nvariables:\n  personalVmOverride:\n    autotermination_minutes:\n      type: fixed\n      value: 220\n      hidden: true\n    custom_tags.Team:\n      type: fixed\n      value: ${team}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* Dynamic Passthrough Clusters for a Group guide.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getClusters \" pulumi-lang-dotnet=\" databricks.getClusters \" pulumi-lang-go=\" getClusters \" pulumi-lang-python=\" get_clusters \" pulumi-lang-yaml=\" databricks.getClusters \" pulumi-lang-java=\" databricks.getClusters \"\u003e databricks.getClusters \u003c/span\u003edata to retrieve a list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GlobalInitScript \" pulumi-lang-dotnet=\" 
databricks.GlobalInitScript \" pulumi-lang-go=\" GlobalInitScript \" pulumi-lang-python=\" GlobalInitScript \" pulumi-lang-yaml=\" databricks.GlobalInitScript \" pulumi-lang-java=\" databricks.GlobalInitScript \"\u003e databricks.GlobalInitScript \u003c/span\u003eto manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand databricks_job.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003eto allow access from [predefined IP ranges](https://docs.databricks.com/security/network/ip-access-list.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003edata to get the smallest node type for\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003ethat fits search criteria, like amount of RAM or number of cores.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" 
pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e parameter in\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand other resources.\n*\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceConf \" pulumi-lang-dotnet=\" databricks.WorkspaceConf \" pulumi-lang-go=\" WorkspaceConf \" pulumi-lang-python=\" WorkspaceConf \" pulumi-lang-yaml=\" databricks.WorkspaceConf \" pulumi-lang-java=\" databricks.WorkspaceConf \"\u003e databricks.WorkspaceConf \u003c/span\u003eto manage workspace configuration for expert usage.\n\n","properties":{"definition":{"type":"string","description":"Policy definition: JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition). Cannot be used with \u003cspan pulumi-lang-nodejs=\"`policyFamilyId`\" pulumi-lang-dotnet=\"`PolicyFamilyId`\" pulumi-lang-go=\"`policyFamilyId`\" pulumi-lang-python=\"`policy_family_id`\" pulumi-lang-yaml=\"`policyFamilyId`\" pulumi-lang-java=\"`policyFamilyId`\"\u003e`policy_family_id`\u003c/span\u003e\n"},"description":{"type":"string","description":"Additional human-readable description of the cluster policy.\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterPolicyLibrary:ClusterPolicyLibrary"}},"maxClustersPerUser":{"type":"integer","description":"Maximum number of clusters allowed per user. When omitted, there is no limit. 
If specified, value must be greater than zero.\n"},"name":{"type":"string","description":"Cluster policy name. This must be unique. Length must be between 1 and 100 characters.\n"},"policyFamilyDefinitionOverrides":{"type":"string","description":"Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.\n"},"policyFamilyId":{"type":"string","description":"ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with \u003cspan pulumi-lang-nodejs=\"`definition`\" pulumi-lang-dotnet=\"`Definition`\" pulumi-lang-go=\"`definition`\" pulumi-lang-python=\"`definition`\" pulumi-lang-yaml=\"`definition`\" pulumi-lang-java=\"`definition`\"\u003e`definition`\u003c/span\u003e. Use \u003cspan pulumi-lang-nodejs=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-dotnet=\"`PolicyFamilyDefinitionOverrides`\" pulumi-lang-go=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-python=\"`policy_family_definition_overrides`\" pulumi-lang-yaml=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-java=\"`policyFamilyDefinitionOverrides`\"\u003e`policy_family_definition_overrides`\u003c/span\u003e instead to customize the policy definition.\n"},"policyId":{"type":"string","description":"Canonical unique identifier for the cluster policy.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterPolicyProviderConfig:ClusterPolicyProviderConfig"}},"required":["definition","name","policyId"],"inputProperties":{"definition":{"type":"string","description":"Policy definition: JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition). Cannot be used with \u003cspan pulumi-lang-nodejs=\"`policyFamilyId`\" pulumi-lang-dotnet=\"`PolicyFamilyId`\" pulumi-lang-go=\"`policyFamilyId`\" pulumi-lang-python=\"`policy_family_id`\" pulumi-lang-yaml=\"`policyFamilyId`\" pulumi-lang-java=\"`policyFamilyId`\"\u003e`policy_family_id`\u003c/span\u003e\n"},"description":{"type":"string","description":"Additional human-readable description of the cluster policy.\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterPolicyLibrary:ClusterPolicyLibrary"}},"maxClustersPerUser":{"type":"integer","description":"Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.\n"},"name":{"type":"string","description":"Cluster policy name. This must be unique. Length must be between 1 and 100 characters.\n"},"policyFamilyDefinitionOverrides":{"type":"string","description":"Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.\n"},"policyFamilyId":{"type":"string","description":"ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. 
Cannot be used with \u003cspan pulumi-lang-nodejs=\"`definition`\" pulumi-lang-dotnet=\"`Definition`\" pulumi-lang-go=\"`definition`\" pulumi-lang-python=\"`definition`\" pulumi-lang-yaml=\"`definition`\" pulumi-lang-java=\"`definition`\"\u003e`definition`\u003c/span\u003e. Use \u003cspan pulumi-lang-nodejs=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-dotnet=\"`PolicyFamilyDefinitionOverrides`\" pulumi-lang-go=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-python=\"`policy_family_definition_overrides`\" pulumi-lang-yaml=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-java=\"`policyFamilyDefinitionOverrides`\"\u003e`policy_family_definition_overrides`\u003c/span\u003e instead to customize the policy definition.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterPolicyProviderConfig:ClusterPolicyProviderConfig"}},"stateInputs":{"description":"Input properties used for looking up and filtering ClusterPolicy resources.\n","properties":{"definition":{"type":"string","description":"Policy definition: JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition). Cannot be used with \u003cspan pulumi-lang-nodejs=\"`policyFamilyId`\" pulumi-lang-dotnet=\"`PolicyFamilyId`\" pulumi-lang-go=\"`policyFamilyId`\" pulumi-lang-python=\"`policy_family_id`\" pulumi-lang-yaml=\"`policyFamilyId`\" pulumi-lang-java=\"`policyFamilyId`\"\u003e`policy_family_id`\u003c/span\u003e\n"},"description":{"type":"string","description":"Additional human-readable description of the cluster policy.\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/ClusterPolicyLibrary:ClusterPolicyLibrary"}},"maxClustersPerUser":{"type":"integer","description":"Maximum number of clusters allowed per user. When omitted, there is no limit. If specified, value must be greater than zero.\n"},"name":{"type":"string","description":"Cluster policy name. This must be unique. Length must be between 1 and 100 characters.\n"},"policyFamilyDefinitionOverrides":{"type":"string","description":"Policy definition JSON document expressed in Databricks Policy Definition Language. The JSON document must be passed as a string and cannot be embedded in the requests. You can use this to customize the policy definition inherited from the policy family. Policy rules specified here are merged into the inherited policy definition.\n"},"policyFamilyId":{"type":"string","description":"ID of the policy family. The cluster policy's policy definition inherits the policy family's policy definition. Cannot be used with \u003cspan pulumi-lang-nodejs=\"`definition`\" pulumi-lang-dotnet=\"`Definition`\" pulumi-lang-go=\"`definition`\" pulumi-lang-python=\"`definition`\" pulumi-lang-yaml=\"`definition`\" pulumi-lang-java=\"`definition`\"\u003e`definition`\u003c/span\u003e. 
Use \u003cspan pulumi-lang-nodejs=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-dotnet=\"`PolicyFamilyDefinitionOverrides`\" pulumi-lang-go=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-python=\"`policy_family_definition_overrides`\" pulumi-lang-yaml=\"`policyFamilyDefinitionOverrides`\" pulumi-lang-java=\"`policyFamilyDefinitionOverrides`\"\u003e`policy_family_definition_overrides`\u003c/span\u003e instead to customize the policy definition.\n"},"policyId":{"type":"string","description":"Canonical unique identifier for the cluster policy.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ClusterPolicyProviderConfig:ClusterPolicyProviderConfig"}},"type":"object"}},"databricks:index/complianceSecurityProfileWorkspaceSetting:ComplianceSecurityProfileWorkspaceSetting":{"properties":{"complianceSecurityProfileWorkspace":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace:ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingProviderConfig:ComplianceSecurityProfileWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"required":["complianceSecurityProfileWorkspace","etag","settingName"],"inputProperties":{"complianceSecurityProfileWorkspace":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace:ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingProviderConfig:ComplianceSecurityProfileWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"requiredInputs":["complianceSecurityProfileWorkspace"],"stateInputs":{"description":"Input properties used for looking up and filtering ComplianceSecurityProfileWorkspaceSetting resources.\n","properties":{"complianceSecurityProfileWorkspace":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace:ComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/ComplianceSecurityProfileWorkspaceSettingProviderConfig:ComplianceSecurityProfileWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/connection:Connection":{"description":"\u003e This resource can only be used with a workspace-level provider!\n\nLakehouse Federation is the query federation platform for Databricks. Databricks uses Unity Catalog to manage query federation. To make a dataset available for read-only querying using Lakehouse Federation, you create the following:\n\n- A connection, a securable object in Unity Catalog that specifies a path and credentials for accessing an external database system.\n- A foreign catalog\n\nThis resource manages connections in Unity Catalog. 
Please note that OAuth U2M is not supported as it requires user interaction for authentication.\n\n## Example Usage\n\nCreate a connection to a MySQL database\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst mysql = new databricks.Connection(\"mysql\", {\n    name: \"mysql_connection\",\n    connectionType: \"MYSQL\",\n    comment: \"this is a connection to mysql db\",\n    options: {\n        host: \"test.mysql.database.azure.com\",\n        port: \"3306\",\n        user: \"user\",\n        password: \"password\",\n    },\n    properties: {\n        purpose: \"testing\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmysql = databricks.Connection(\"mysql\",\n    name=\"mysql_connection\",\n    connection_type=\"MYSQL\",\n    comment=\"this is a connection to mysql db\",\n    options={\n        \"host\": \"test.mysql.database.azure.com\",\n        \"port\": \"3306\",\n        \"user\": \"user\",\n        \"password\": \"password\",\n    },\n    properties={\n        \"purpose\": \"testing\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var mysql = new Databricks.Connection(\"mysql\", new()\n    {\n        Name = \"mysql_connection\",\n        ConnectionType = \"MYSQL\",\n        Comment = \"this is a connection to mysql db\",\n        Options = \n        {\n            { \"host\", \"test.mysql.database.azure.com\" },\n            { \"port\", \"3306\" },\n            { \"user\", \"user\" },\n            { \"password\", \"password\" },\n        },\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewConnection(ctx, \"mysql\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"mysql_connection\"),\n\t\t\tConnectionType: pulumi.String(\"MYSQL\"),\n\t\t\tComment:        pulumi.String(\"this is a connection to mysql db\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"host\":     pulumi.String(\"test.mysql.database.azure.com\"),\n\t\t\t\t\"port\":     pulumi.String(\"3306\"),\n\t\t\t\t\"user\":     pulumi.String(\"user\"),\n\t\t\t\t\"password\": pulumi.String(\"password\"),\n\t\t\t},\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var mysql = new Connection(\"mysql\", ConnectionArgs.builder()\n            .name(\"mysql_connection\")\n            .connectionType(\"MYSQL\")\n            .comment(\"this is a connection to mysql db\")\n   
         .options(Map.ofEntries(\n                Map.entry(\"host\", \"test.mysql.database.azure.com\"),\n                Map.entry(\"port\", \"3306\"),\n                Map.entry(\"user\", \"user\"),\n                Map.entry(\"password\", \"password\")\n            ))\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  mysql:\n    type: databricks:Connection\n    properties:\n      name: mysql_connection\n      connectionType: MYSQL\n      comment: this is a connection to mysql db\n      options:\n        host: test.mysql.database.azure.com\n        port: '3306'\n        user: user\n        password: password\n      properties:\n        purpose: testing\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a connection to a BigQuery database\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst bigquery = new databricks.Connection(\"bigquery\", {\n    name: \"bq_connection\",\n    connectionType: \"BIGQUERY\",\n    comment: \"this is a connection to BQ\",\n    options: {\n        GoogleServiceAccountKeyJson: JSON.stringify({\n            type: \"service_account\",\n            project_id: \"PROJECT_ID\",\n            private_key_id: \"KEY_ID\",\n            private_key: `-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n`,\n            client_email: \"SERVICE_ACCOUNT_EMAIL\",\n            client_id: \"CLIENT_ID\",\n            auth_uri: \"https://accounts.google.com/o/oauth2/auth\",\n            token_uri: \"https://accounts.google.com/o/oauth2/token\",\n            auth_provider_x509_cert_url: \"https://www.googleapis.com/oauth2/v1/certs\",\n            client_x509_cert_url: \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\",\n            universe_domain: \"googleapis.com\",\n        }),\n    },\n    properties: {\n        purpose: \"testing\",\n    },\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\nbigquery = databricks.Connection(\"bigquery\",\n    name=\"bq_connection\",\n    connection_type=\"BIGQUERY\",\n    comment=\"this is a connection to BQ\",\n    options={\n        \"GoogleServiceAccountKeyJson\": json.dumps({\n            \"type\": \"service_account\",\n            \"project_id\": \"PROJECT_ID\",\n            \"private_key_id\": \"KEY_ID\",\n            \"private_key\": \"\"\"-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n\"\"\",\n            \"client_email\": \"SERVICE_ACCOUNT_EMAIL\",\n            \"client_id\": \"CLIENT_ID\",\n            \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n            \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n            \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n            \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\",\n            \"universe_domain\": \"googleapis.com\",\n        }),\n    },\n    properties={\n        \"purpose\": \"testing\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var bigquery = new Databricks.Connection(\"bigquery\", new()\n    {\n        Name = \"bq_connection\",\n        ConnectionType = \"BIGQUERY\",\n        Comment = 
\"this is a connection to BQ\",\n        Options = \n        {\n            { \"GoogleServiceAccountKeyJson\", JsonSerializer.Serialize(new Dictionary\u003cstring, object?\u003e\n            {\n                [\"type\"] = \"service_account\",\n                [\"project_id\"] = \"PROJECT_ID\",\n                [\"private_key_id\"] = \"KEY_ID\",\n                [\"private_key\"] = @\"-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n\",\n                [\"client_email\"] = \"SERVICE_ACCOUNT_EMAIL\",\n                [\"client_id\"] = \"CLIENT_ID\",\n                [\"auth_uri\"] = \"https://accounts.google.com/o/oauth2/auth\",\n                [\"token_uri\"] = \"https://accounts.google.com/o/oauth2/token\",\n                [\"auth_provider_x509_cert_url\"] = \"https://www.googleapis.com/oauth2/v1/certs\",\n                [\"client_x509_cert_url\"] = \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\",\n                [\"universe_domain\"] = \"googleapis.com\",\n            }) },\n        },\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttmpJSON0, err := json.Marshal(map[string]interface{}{\n\t\t\t\"type\":                        \"service_account\",\n\t\t\t\"project_id\":                  \"PROJECT_ID\",\n\t\t\t\"private_key_id\":              \"KEY_ID\",\n\t\t\t\"private_key\":                 \"-----BEGIN PRIVATE KEY-----\\nPRIVATE_KEY\\n-----END PRIVATE KEY-----\\n\",\n\t\t\t\"client_email\":                \"SERVICE_ACCOUNT_EMAIL\",\n\t\t\t\"client_id\":                   \"CLIENT_ID\",\n\t\t\t\"auth_uri\":                    \"https://accounts.google.com/o/oauth2/auth\",\n\t\t\t\"token_uri\":                   \"https://accounts.google.com/o/oauth2/token\",\n\t\t\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\t\t\"client_x509_cert_url\":        \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\",\n\t\t\t\"universe_domain\":             \"googleapis.com\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\t_, err = databricks.NewConnection(ctx, \"bigquery\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"bq_connection\"),\n\t\t\tConnectionType: pulumi.String(\"BIGQUERY\"),\n\t\t\tComment:        pulumi.String(\"this is a connection to BQ\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"GoogleServiceAccountKeyJson\": pulumi.String(json0),\n\t\t\t},\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var 
bigquery = new Connection(\"bigquery\", ConnectionArgs.builder()\n            .name(\"bq_connection\")\n            .connectionType(\"BIGQUERY\")\n            .comment(\"this is a connection to BQ\")\n            .options(Map.of(\"GoogleServiceAccountKeyJson\", serializeJson(\n                jsonObject(\n                    jsonProperty(\"type\", \"service_account\"),\n                    jsonProperty(\"project_id\", \"PROJECT_ID\"),\n                    jsonProperty(\"private_key_id\", \"KEY_ID\"),\n                    jsonProperty(\"private_key\", \"\"\"\n-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n                    \"\"\"),\n                    jsonProperty(\"client_email\", \"SERVICE_ACCOUNT_EMAIL\"),\n                    jsonProperty(\"client_id\", \"CLIENT_ID\"),\n                    jsonProperty(\"auth_uri\", \"https://accounts.google.com/o/oauth2/auth\"),\n                    jsonProperty(\"token_uri\", \"https://accounts.google.com/o/oauth2/token\"),\n                    jsonProperty(\"auth_provider_x509_cert_url\", \"https://www.googleapis.com/oauth2/v1/certs\"),\n                    jsonProperty(\"client_x509_cert_url\", \"https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\"),\n                    jsonProperty(\"universe_domain\", \"googleapis.com\")\n                ))))\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  bigquery:\n    type: databricks:Connection\n    properties:\n      name: bq_connection\n      connectionType: BIGQUERY\n      comment: this is a connection to BQ\n      options:\n        GoogleServiceAccountKeyJson:\n          fn::toJSON:\n            type: service_account\n            project_id: PROJECT_ID\n            private_key_id: KEY_ID\n            private_key: |\n              -----BEGIN PRIVATE KEY-----\n              PRIVATE_KEY\n              -----END PRIVATE KEY-----\n            client_email: SERVICE_ACCOUNT_EMAIL\n            client_id: CLIENT_ID\n            auth_uri: https://accounts.google.com/o/oauth2/auth\n            token_uri: https://accounts.google.com/o/oauth2/token\n            auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs\n            client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL\n            universe_domain: googleapis.com\n      properties:\n        purpose: testing\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a connection to builtin Hive Metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst hms = new databricks.Connection(\"hms\", {\n    name: \"hms-builtin\",\n    connectionType: \"HIVE_METASTORE\",\n    comment: \"This is a connection to builtin HMS\",\n    options: {\n        builtin: \"true\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nhms = databricks.Connection(\"hms\",\n    name=\"hms-builtin\",\n    connection_type=\"HIVE_METASTORE\",\n    comment=\"This is a connection to builtin HMS\",\n    options={\n        \"builtin\": \"true\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var hms = new Databricks.Connection(\"hms\", new()\n    {\n        Name = \"hms-builtin\",\n        ConnectionType = \"HIVE_METASTORE\",\n   
     Comment = \"This is a connection to builtin HMS\",\n        Options = \n        {\n            { \"builtin\", \"true\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewConnection(ctx, \"hms\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"hms-builtin\"),\n\t\t\tConnectionType: pulumi.String(\"HIVE_METASTORE\"),\n\t\t\tComment:        pulumi.String(\"This is a connection to builtin HMS\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"builtin\": pulumi.String(\"true\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var hms = new Connection(\"hms\", ConnectionArgs.builder()\n            .name(\"hms-builtin\")\n            .connectionType(\"HIVE_METASTORE\")\n            .comment(\"This is a connection to builtin HMS\")\n            .options(Map.of(\"builtin\", \"true\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  hms:\n    type: databricks:Connection\n    properties:\n      name: hms-builtin\n      connectionType: HIVE_METASTORE\n      comment: This is a connection to builtin HMS\n      options:\n        builtin: 'true'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a HTTP connection with bearer token\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst httpBearer = new databricks.Connection(\"http_bearer\", {\n    name: \"http_bearer\",\n    connectionType: \"HTTP\",\n    comment: \"This is a connection to a HTTP service\",\n    options: {\n        host: \"https://example.com\",\n        port: \"8433\",\n        base_path: \"/api/\",\n        bearer_token: \"bearer_token\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nhttp_bearer = databricks.Connection(\"http_bearer\",\n    name=\"http_bearer\",\n    connection_type=\"HTTP\",\n    comment=\"This is a connection to a HTTP service\",\n    options={\n        \"host\": \"https://example.com\",\n        \"port\": \"8433\",\n        \"base_path\": \"/api/\",\n        \"bearer_token\": \"bearer_token\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var httpBearer = new Databricks.Connection(\"http_bearer\", new()\n    {\n        Name = \"http_bearer\",\n        ConnectionType = \"HTTP\",\n        Comment = \"This is a connection to a HTTP service\",\n        Options = \n        {\n            { \"host\", \"https://example.com\" },\n            { \"port\", \"8433\" },\n            { \"base_path\", \"/api/\" },\n            { \"bearer_token\", \"bearer_token\" },\n        },\n    
});\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewConnection(ctx, \"http_bearer\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"http_bearer\"),\n\t\t\tConnectionType: pulumi.String(\"HTTP\"),\n\t\t\tComment:        pulumi.String(\"This is a connection to a HTTP service\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"host\":         pulumi.String(\"https://example.com\"),\n\t\t\t\t\"port\":         pulumi.String(\"8433\"),\n\t\t\t\t\"base_path\":    pulumi.String(\"/api/\"),\n\t\t\t\t\"bearer_token\": pulumi.String(\"bearer_token\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var httpBearer = new Connection(\"httpBearer\", ConnectionArgs.builder()\n            .name(\"http_bearer\")\n            .connectionType(\"HTTP\")\n            .comment(\"This is a connection to a HTTP service\")\n            .options(Map.ofEntries(\n                Map.entry(\"host\", \"https://example.com\"),\n                Map.entry(\"port\", \"8433\"),\n                Map.entry(\"base_path\", \"/api/\"),\n                Map.entry(\"bearer_token\", \"bearer_token\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  httpBearer:\n    type: databricks:Connection\n    name: http_bearer\n    properties:\n      name: http_bearer\n      connectionType: HTTP\n      comment: This is a connection to a HTTP service\n      options:\n        host: https://example.com\n        port: '8433'\n        base_path: /api/\n        bearer_token: bearer_token\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a HTTP connection with OAuth M2M\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst httpOauth = new databricks.Connection(\"http_oauth\", {\n    name: \"http_oauth\",\n    connectionType: \"HTTP\",\n    comment: \"This is a connection to a HTTP service\",\n    options: {\n        host: \"https://example.com\",\n        port: \"8433\",\n        base_path: \"/api/\",\n        client_id: \"client_id\",\n        client_secret: \"client_secret\",\n        oauth_scope: \"channels:read channels:history chat:write\",\n        token_endpoint: \"https://authorization-server.com/oauth/token\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nhttp_oauth = databricks.Connection(\"http_oauth\",\n    name=\"http_oauth\",\n    connection_type=\"HTTP\",\n    comment=\"This is a connection to a HTTP service\",\n    options={\n        \"host\": \"https://example.com\",\n        \"port\": \"8433\",\n        \"base_path\": \"/api/\",\n        \"client_id\": \"client_id\",\n        \"client_secret\": \"client_secret\",\n        \"oauth_scope\": \"channels:read 
channels:history chat:write\",\n        \"token_endpoint\": \"https://authorization-server.com/oauth/token\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var httpOauth = new Databricks.Connection(\"http_oauth\", new()\n    {\n        Name = \"http_oauth\",\n        ConnectionType = \"HTTP\",\n        Comment = \"This is a connection to a HTTP service\",\n        Options = \n        {\n            { \"host\", \"https://example.com\" },\n            { \"port\", \"8433\" },\n            { \"base_path\", \"/api/\" },\n            { \"client_id\", \"client_id\" },\n            { \"client_secret\", \"client_secret\" },\n            { \"oauth_scope\", \"channels:read channels:history chat:write\" },\n            { \"token_endpoint\", \"https://authorization-server.com/oauth/token\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewConnection(ctx, \"http_oauth\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"http_oauth\"),\n\t\t\tConnectionType: pulumi.String(\"HTTP\"),\n\t\t\tComment:        pulumi.String(\"This is a connection to a HTTP service\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"host\":           pulumi.String(\"https://example.com\"),\n\t\t\t\t\"port\":           pulumi.String(\"8433\"),\n\t\t\t\t\"base_path\":      pulumi.String(\"/api/\"),\n\t\t\t\t\"client_id\":      pulumi.String(\"client_id\"),\n\t\t\t\t\"client_secret\":  pulumi.String(\"client_secret\"),\n\t\t\t\t\"oauth_scope\":    pulumi.String(\"channels:read channels:history chat:write\"),\n\t\t\t\t\"token_endpoint\": pulumi.String(\"https://authorization-server.com/oauth/token\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var httpOauth = new Connection(\"httpOauth\", ConnectionArgs.builder()\n            .name(\"http_oauth\")\n            .connectionType(\"HTTP\")\n            .comment(\"This is a connection to a HTTP service\")\n            .options(Map.ofEntries(\n                Map.entry(\"host\", \"https://example.com\"),\n                Map.entry(\"port\", \"8433\"),\n                Map.entry(\"base_path\", \"/api/\"),\n                Map.entry(\"client_id\", \"client_id\"),\n                Map.entry(\"client_secret\", \"client_secret\"),\n                Map.entry(\"oauth_scope\", \"channels:read channels:history chat:write\"),\n                Map.entry(\"token_endpoint\", \"https://authorization-server.com/oauth/token\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  httpOauth:\n    type: databricks:Connection\n    name: http_oauth\n    properties:\n      name: http_oauth\n      connectionType: HTTP\n      
comment: This is a connection to a HTTP service\n      options:\n        host: https://example.com\n        port: '8433'\n        base_path: /api/\n        client_id: client_id\n        client_secret: client_secret\n        oauth_scope: channels:read channels:history chat:write\n        token_endpoint: https://authorization-server.com/oauth/token\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a PowerBI connection with OAuth M2M\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst pbi = new databricks.Connection(\"pbi\", {\n    name: \"test-pbi\",\n    connectionType: \"POWER_BI\",\n    options: {\n        authorization_endpoint: \"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\",\n        client_id: \"client_id\",\n        client_secret: \"client_secret\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npbi = databricks.Connection(\"pbi\",\n    name=\"test-pbi\",\n    connection_type=\"POWER_BI\",\n    options={\n        \"authorization_endpoint\": \"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\",\n        \"client_id\": \"client_id\",\n        \"client_secret\": \"client_secret\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var pbi = new Databricks.Connection(\"pbi\", new()\n    {\n        Name = \"test-pbi\",\n        ConnectionType = \"POWER_BI\",\n        Options = \n        {\n            { \"authorization_endpoint\", \"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\" },\n            { \"client_id\", \"client_id\" },\n            { \"client_secret\", \"client_secret\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewConnection(ctx, \"pbi\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"test-pbi\"),\n\t\t\tConnectionType: pulumi.String(\"POWER_BI\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"authorization_endpoint\": pulumi.String(\"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\"),\n\t\t\t\t\"client_id\":              pulumi.String(\"client_id\"),\n\t\t\t\t\"client_secret\":          pulumi.String(\"client_secret\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var pbi = new Connection(\"pbi\", ConnectionArgs.builder()\n            .name(\"test-pbi\")\n            .connectionType(\"POWER_BI\")\n            .options(Map.ofEntries(\n                Map.entry(\"authorization_endpoint\", \"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\"),\n                
Map.entry(\"client_id\", \"client_id\"),\n                Map.entry(\"client_secret\", \"client_secret\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  pbi:\n    type: databricks:Connection\n    properties:\n      name: test-pbi\n      connectionType: POWER_BI\n      options:\n        authorization_endpoint: https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize\n        client_id: client_id\n        client_secret: client_secret\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"comment":{"type":"string","description":"Free-form text. Change forces creation of a new resource.\n"},"connectionId":{"type":"string","description":"Unique ID of the connection.\n"},"connectionType":{"type":"string","description":"Connection type. `MYSQL`, `POSTGRESQL`, `SNOWFLAKE`, `REDSHIFT` `SQLDW`, `SQLSERVER`, `DATABRICKS`, `SALESFORCE`, `BIGQUERY`, `WORKDAY_RAAS`, `HIVE_METASTORE`, `GA4_RAW_DATA`, `SERVICENOW`, `SALESFORCE_DATA_CLOUD`, `GLUE`, `ORACLE`, `TERADATA`, `HTTP` or `POWER_BI` are supported. Up-to-date list of connection type supported is in the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources). Change forces creation of a new resource.\n"},"createdAt":{"type":"integer","description":"Time at which this connection was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of connection creator.\n"},"credentialType":{"type":"string","description":"The type of credential for this connection.\n"},"fullName":{"type":"string","description":"Full name of connection.\n"},"metastoreId":{"type":"string","description":"Unique ID of the UC metastore for this connection.\n"},"name":{"type":"string","description":"Name of the Connection.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"The key value of options required by the connection, e.g. 
\u003cspan pulumi-lang-nodejs=\"`host`\" pulumi-lang-dotnet=\"`Host`\" pulumi-lang-go=\"`host`\" pulumi-lang-python=\"`host`\" pulumi-lang-yaml=\"`host`\" pulumi-lang-java=\"`host`\"\u003e`host`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`port`\" pulumi-lang-dotnet=\"`Port`\" pulumi-lang-go=\"`port`\" pulumi-lang-python=\"`port`\" pulumi-lang-yaml=\"`port`\" pulumi-lang-java=\"`port`\"\u003e`port`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`user`\" pulumi-lang-dotnet=\"`User`\" pulumi-lang-go=\"`user`\" pulumi-lang-python=\"`user`\" pulumi-lang-yaml=\"`user`\" pulumi-lang-java=\"`user`\"\u003e`user`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`password`\" pulumi-lang-dotnet=\"`Password`\" pulumi-lang-go=\"`password`\" pulumi-lang-python=\"`password`\" pulumi-lang-yaml=\"`password`\" pulumi-lang-java=\"`password`\"\u003e`password`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`authorizationEndpoint`\" pulumi-lang-dotnet=\"`AuthorizationEndpoint`\" pulumi-lang-go=\"`authorizationEndpoint`\" pulumi-lang-python=\"`authorization_endpoint`\" pulumi-lang-yaml=\"`authorizationEndpoint`\" pulumi-lang-java=\"`authorizationEndpoint`\"\u003e`authorization_endpoint`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientId`\" pulumi-lang-dotnet=\"`ClientId`\" pulumi-lang-go=\"`clientId`\" pulumi-lang-python=\"`client_id`\" pulumi-lang-yaml=\"`clientId`\" pulumi-lang-java=\"`clientId`\"\u003e`client_id`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientSecret`\" pulumi-lang-dotnet=\"`ClientSecret`\" pulumi-lang-go=\"`clientSecret`\" pulumi-lang-python=\"`client_secret`\" pulumi-lang-yaml=\"`clientSecret`\" pulumi-lang-java=\"`clientSecret`\"\u003e`client_secret`\u003c/span\u003e or `GoogleServiceAccountKeyJson`. Please consult the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources) for the required option.\n","secret":true},"owner":{"type":"string","description":"Name of the connection owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Free-form connection properties. Change forces creation of a new resource.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ConnectionProviderConfig:ConnectionProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"provisioningInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/ConnectionProvisioningInfo:ConnectionProvisioningInfo"},"description":"Object with the status of an asynchronously provisioned resource.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the connection is read-only. Change forces creation of a new resource.\n"},"securableType":{"type":"string"},"updatedAt":{"type":"integer","description":"Time at which connection this was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the connection.\n"},"url":{"type":"string","description":"URL of the remote data source, extracted from options.\n"}},"required":["connectionId","createdAt","createdBy","credentialType","fullName","metastoreId","name","owner","provisioningInfos","readOnly","securableType","updatedAt","updatedBy","url"],"inputProperties":{"comment":{"type":"string","description":"Free-form text. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"connectionType":{"type":"string","description":"Connection type. 
`MYSQL`, `POSTGRESQL`, `SNOWFLAKE`, `REDSHIFT` `SQLDW`, `SQLSERVER`, `DATABRICKS`, `SALESFORCE`, `BIGQUERY`, `WORKDAY_RAAS`, `HIVE_METASTORE`, `GA4_RAW_DATA`, `SERVICENOW`, `SALESFORCE_DATA_CLOUD`, `GLUE`, `ORACLE`, `TERADATA`, `HTTP` or `POWER_BI` are supported. Up-to-date list of connection type supported is in the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources). Change forces creation of a new resource.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"Name of the Connection.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"The key value of options required by the connection, e.g. \u003cspan pulumi-lang-nodejs=\"`host`\" pulumi-lang-dotnet=\"`Host`\" pulumi-lang-go=\"`host`\" pulumi-lang-python=\"`host`\" pulumi-lang-yaml=\"`host`\" pulumi-lang-java=\"`host`\"\u003e`host`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`port`\" pulumi-lang-dotnet=\"`Port`\" pulumi-lang-go=\"`port`\" pulumi-lang-python=\"`port`\" pulumi-lang-yaml=\"`port`\" pulumi-lang-java=\"`port`\"\u003e`port`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`user`\" pulumi-lang-dotnet=\"`User`\" pulumi-lang-go=\"`user`\" pulumi-lang-python=\"`user`\" pulumi-lang-yaml=\"`user`\" pulumi-lang-java=\"`user`\"\u003e`user`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`password`\" pulumi-lang-dotnet=\"`Password`\" pulumi-lang-go=\"`password`\" pulumi-lang-python=\"`password`\" pulumi-lang-yaml=\"`password`\" pulumi-lang-java=\"`password`\"\u003e`password`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`authorizationEndpoint`\" pulumi-lang-dotnet=\"`AuthorizationEndpoint`\" pulumi-lang-go=\"`authorizationEndpoint`\" pulumi-lang-python=\"`authorization_endpoint`\" pulumi-lang-yaml=\"`authorizationEndpoint`\" pulumi-lang-java=\"`authorizationEndpoint`\"\u003e`authorization_endpoint`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientId`\" pulumi-lang-dotnet=\"`ClientId`\" pulumi-lang-go=\"`clientId`\" pulumi-lang-python=\"`client_id`\" pulumi-lang-yaml=\"`clientId`\" pulumi-lang-java=\"`clientId`\"\u003e`client_id`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientSecret`\" pulumi-lang-dotnet=\"`ClientSecret`\" pulumi-lang-go=\"`clientSecret`\" pulumi-lang-python=\"`client_secret`\" pulumi-lang-yaml=\"`clientSecret`\" pulumi-lang-java=\"`clientSecret`\"\u003e`client_secret`\u003c/span\u003e or `GoogleServiceAccountKeyJson`. Please consult the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources) for the required option.\n","secret":true},"owner":{"type":"string","description":"Name of the connection owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Free-form connection properties. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ConnectionProviderConfig:ConnectionProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"readOnly":{"type":"boolean","description":"Indicates whether the connection is read-only. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering Connection resources.\n","properties":{"comment":{"type":"string","description":"Free-form text. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"connectionId":{"type":"string","description":"Unique ID of the connection.\n"},"connectionType":{"type":"string","description":"Connection type. `MYSQL`, `POSTGRESQL`, `SNOWFLAKE`, `REDSHIFT` `SQLDW`, `SQLSERVER`, `DATABRICKS`, `SALESFORCE`, `BIGQUERY`, `WORKDAY_RAAS`, `HIVE_METASTORE`, `GA4_RAW_DATA`, `SERVICENOW`, `SALESFORCE_DATA_CLOUD`, `GLUE`, `ORACLE`, `TERADATA`, `HTTP` or `POWER_BI` are supported. Up-to-date list of connection type supported is in the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources). Change forces creation of a new resource.\n","willReplaceOnChanges":true},"createdAt":{"type":"integer","description":"Time at which this connection was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of connection creator.\n"},"credentialType":{"type":"string","description":"The type of credential for this connection.\n"},"fullName":{"type":"string","description":"Full name of connection.\n"},"metastoreId":{"type":"string","description":"Unique ID of the UC metastore for this connection.\n"},"name":{"type":"string","description":"Name of the Connection.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"The key value of options required by the connection, e.g. \u003cspan pulumi-lang-nodejs=\"`host`\" pulumi-lang-dotnet=\"`Host`\" pulumi-lang-go=\"`host`\" pulumi-lang-python=\"`host`\" pulumi-lang-yaml=\"`host`\" pulumi-lang-java=\"`host`\"\u003e`host`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`port`\" pulumi-lang-dotnet=\"`Port`\" pulumi-lang-go=\"`port`\" pulumi-lang-python=\"`port`\" pulumi-lang-yaml=\"`port`\" pulumi-lang-java=\"`port`\"\u003e`port`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`user`\" pulumi-lang-dotnet=\"`User`\" pulumi-lang-go=\"`user`\" pulumi-lang-python=\"`user`\" pulumi-lang-yaml=\"`user`\" pulumi-lang-java=\"`user`\"\u003e`user`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`password`\" pulumi-lang-dotnet=\"`Password`\" pulumi-lang-go=\"`password`\" pulumi-lang-python=\"`password`\" pulumi-lang-yaml=\"`password`\" pulumi-lang-java=\"`password`\"\u003e`password`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`authorizationEndpoint`\" pulumi-lang-dotnet=\"`AuthorizationEndpoint`\" pulumi-lang-go=\"`authorizationEndpoint`\" pulumi-lang-python=\"`authorization_endpoint`\" pulumi-lang-yaml=\"`authorizationEndpoint`\" pulumi-lang-java=\"`authorizationEndpoint`\"\u003e`authorization_endpoint`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientId`\" pulumi-lang-dotnet=\"`ClientId`\" pulumi-lang-go=\"`clientId`\" pulumi-lang-python=\"`client_id`\" pulumi-lang-yaml=\"`clientId`\" pulumi-lang-java=\"`clientId`\"\u003e`client_id`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`clientSecret`\" pulumi-lang-dotnet=\"`ClientSecret`\" pulumi-lang-go=\"`clientSecret`\" pulumi-lang-python=\"`client_secret`\" pulumi-lang-yaml=\"`clientSecret`\" pulumi-lang-java=\"`clientSecret`\"\u003e`client_secret`\u003c/span\u003e or `GoogleServiceAccountKeyJson`. Please consult the [documentation](https://docs.databricks.com/query-federation/index.html#supported-data-sources) for the required option.\n","secret":true},"owner":{"type":"string","description":"Name of the connection owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Free-form connection properties. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ConnectionProviderConfig:ConnectionProviderConfig","description":"Configure the provider for management through the account provider. This block consists of the following fields:\n"},"provisioningInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/ConnectionProvisioningInfo:ConnectionProvisioningInfo"},"description":"Object with the status of an asynchronously provisioned resource.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the connection is read-only. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"securableType":{"type":"string"},"updatedAt":{"type":"integer","description":"Time at which this connection was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of the user who last modified the connection.\n"},"url":{"type":"string","description":"URL of the remote data source, extracted from options.\n"}},"type":"object"}},"databricks:index/credential:Credential":{"description":"A credential represents an authentication and authorization mechanism for accessing services on your cloud tenant. Each credential is subject to Unity Catalog access-control policies that control which users and groups can access the credential.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nThe type of credential to be created is determined by the \u003cspan pulumi-lang-nodejs=\"`purpose`\" pulumi-lang-dotnet=\"`Purpose`\" pulumi-lang-go=\"`purpose`\" pulumi-lang-python=\"`purpose`\" pulumi-lang-yaml=\"`purpose`\" pulumi-lang-java=\"`purpose`\"\u003e`purpose`\u003c/span\u003e field, which should be either `SERVICE` or `STORAGE`.\nThe caller must be a metastore admin or have the metastore privilege `CREATE_STORAGE_CREDENTIAL` for storage credentials, or `CREATE_SERVICE_CREDENTIAL` for service credentials. The user who creates the credential can delegate ownership to another user or group to manage permissions on it.\n\nOn AWS, the IAM role for a credential requires a trust policy. See [documentation](https://docs.databricks.com/en/connect/unity-catalog/cloud-services/service-credentials.html#step-1-create-an-iam-role) for more details. 
The data source\u003cspan pulumi-lang-nodejs=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-dotnet=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-go=\" getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-python=\" get_aws_unity_catalog_assume_role_policy \" pulumi-lang-yaml=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-java=\" databricks.getAwsUnityCatalogAssumeRolePolicy \"\u003e databricks.getAwsUnityCatalogAssumeRolePolicy \u003c/span\u003ecan be used to create the necessary AWS Unity Catalog assume role policy.\n\n## Example Usage\n\nFor AWS\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.Credential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    purpose: \"SERVICE\",\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    credential: external.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"ACCESS\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.Credential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    purpose=\"SERVICE\",\n    comment=\"Managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    credential=external.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"ACCESS\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.Credential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.CredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Purpose = \"SERVICE\",\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        Credential = external.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"ACCESS\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewCredential(ctx, \"external\", \u0026databricks.CredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.CredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tPurpose: pulumi.String(\"SERVICE\"),\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tCredential: external.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: 
pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"ACCESS\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Credential;\nimport com.pulumi.databricks.CredentialArgs;\nimport com.pulumi.databricks.inputs.CredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new Credential(\"external\", CredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(CredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .purpose(\"SERVICE\")\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .credential(external.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"ACCESS\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:Credential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      purpose: SERVICE\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      credential: ${external.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - ACCESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor Azure\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst externalMi = new databricks.Credential(\"external_mi\", {\n    name: \"mi_credential\",\n    azureManagedIdentity: {\n        accessConnectorId: example.id,\n    },\n    purpose: \"SERVICE\",\n    comment: \"Managed identity credential managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    credential: externalMi.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"ACCESS\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal_mi = databricks.Credential(\"external_mi\",\n    name=\"mi_credential\",\n    azure_managed_identity={\n        \"access_connector_id\": example[\"id\"],\n    },\n    purpose=\"SERVICE\",\n    comment=\"Managed identity credential managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    credential=external_mi.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"ACCESS\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var externalMi = new Databricks.Credential(\"external_mi\", new()\n    {\n        Name = 
\"mi_credential\",\n        AzureManagedIdentity = new Databricks.Inputs.CredentialAzureManagedIdentityArgs\n        {\n            AccessConnectorId = example.Id,\n        },\n        Purpose = \"SERVICE\",\n        Comment = \"Managed identity credential managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        Credential = externalMi.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"ACCESS\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternalMi, err := databricks.NewCredential(ctx, \"external_mi\", \u0026databricks.CredentialArgs{\n\t\t\tName: pulumi.String(\"mi_credential\"),\n\t\t\tAzureManagedIdentity: \u0026databricks.CredentialAzureManagedIdentityArgs{\n\t\t\t\tAccessConnectorId: pulumi.Any(example.Id),\n\t\t\t},\n\t\t\tPurpose: pulumi.String(\"SERVICE\"),\n\t\t\tComment: pulumi.String(\"Managed identity credential managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tCredential: externalMi.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"ACCESS\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Credential;\nimport com.pulumi.databricks.CredentialArgs;\nimport com.pulumi.databricks.inputs.CredentialAzureManagedIdentityArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var externalMi = new Credential(\"externalMi\", CredentialArgs.builder()\n            .name(\"mi_credential\")\n            .azureManagedIdentity(CredentialAzureManagedIdentityArgs.builder()\n                .accessConnectorId(example.id())\n                .build())\n            .purpose(\"SERVICE\")\n            .comment(\"Managed identity credential managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .credential(externalMi.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"ACCESS\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  externalMi:\n    type: databricks:Credential\n    name: external_mi\n    properties:\n      name: mi_credential\n      azureManagedIdentity:\n        accessConnectorId: ${example.id}\n      
purpose: SERVICE\n      comment: Managed identity credential managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      credential: ${externalMi.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - ACCESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor GCP \n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst externalGcpSa = new databricks.Credential(\"external_gcp_sa\", {\n    name: \"gcp_sa_credential\",\n    databricksGcpServiceAccount: {},\n    purpose: \"SERVICE\",\n    comment: \"GCP SA credential managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    credential: externalGcpSa.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"ACCESS\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal_gcp_sa = databricks.Credential(\"external_gcp_sa\",\n    name=\"gcp_sa_credential\",\n    databricks_gcp_service_account={},\n    purpose=\"SERVICE\",\n    comment=\"GCP SA credential managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    credential=external_gcp_sa.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"ACCESS\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var externalGcpSa = new Databricks.Credential(\"external_gcp_sa\", new()\n    {\n        Name = \"gcp_sa_credential\",\n        DatabricksGcpServiceAccount = null,\n        Purpose = \"SERVICE\",\n        Comment = \"GCP SA credential managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        Credential = externalGcpSa.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"ACCESS\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternalGcpSa, err := databricks.NewCredential(ctx, \"external_gcp_sa\", \u0026databricks.CredentialArgs{\n\t\t\tName:                        pulumi.String(\"gcp_sa_credential\"),\n\t\t\tDatabricksGcpServiceAccount: \u0026databricks.CredentialDatabricksGcpServiceAccountArgs{},\n\t\t\tPurpose:                     pulumi.String(\"SERVICE\"),\n\t\t\tComment:                     pulumi.String(\"GCP SA credential managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tCredential: externalGcpSa.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"ACCESS\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport 
com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Credential;\nimport com.pulumi.databricks.CredentialArgs;\nimport com.pulumi.databricks.inputs.CredentialDatabricksGcpServiceAccountArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var externalGcpSa = new Credential(\"externalGcpSa\", CredentialArgs.builder()\n            .name(\"gcp_sa_credential\")\n            .databricksGcpServiceAccount(CredentialDatabricksGcpServiceAccountArgs.builder()\n                .build())\n            .purpose(\"SERVICE\")\n            .comment(\"GCP SA credential managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .credential(externalGcpSa.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"ACCESS\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  externalGcpSa:\n    type: databricks:Credential\n    name: external_gcp_sa\n    properties:\n      name: gcp_sa_credential\n      databricksGcpServiceAccount: {}\n      purpose: SERVICE\n      comment: GCP SA credential managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      credential: ${externalGcpSa.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - ACCESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/CredentialAwsIamRole:CredentialAwsIamRole"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/CredentialAzureManagedIdentity:CredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/CredentialAzureServicePrincipal:CredentialAzureServicePrincipal"},"comment":{"type":"string"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"credentialId":{"type":"string","description":"Unique ID of the credential.\n"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/CredentialDatabricksGcpServiceAccount:CredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update credential regardless of its dependents.\n"},"fullName":{"type":"string"},"isolationMode":{"type":"string","description":"Whether the credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. 
Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically restrict access to only from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Credentials, which must be unique within the databricks_metastore. Change of the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e forces creation of a new resource.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the credential owner.\n"},"purpose":{"type":"string","description":"Indicates the purpose of the credential. Can be `SERVICE` or `STORAGE`.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the credential is only usable for read operations. Only applicable when purpose is `STORAGE`.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the credential.\n"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"},"usedForManagedStorage":{"type":"boolean"}},"required":["createdAt","createdBy","credentialId","databricksGcpServiceAccount","fullName","isolationMode","metastoreId","name","owner","purpose","updatedAt","updatedBy","usedForManagedStorage"],"inputProperties":{"awsIamRole":{"$ref":"#/types/databricks:index/CredentialAwsIamRole:CredentialAwsIamRole"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/CredentialAzureManagedIdentity:CredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/CredentialAzureServicePrincipal:CredentialAzureServicePrincipal"},"comment":{"type":"string"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/CredentialDatabricksGcpServiceAccount:CredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update credential regardless of its dependents.\n"},"fullName":{"type":"string"},"isolationMode":{"type":"string","description":"Whether the credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically restrict access to only from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Credentials, which must be unique within the databricks_metastore. 
Change of the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the credential owner.\n"},"purpose":{"type":"string","description":"Indicates the purpose of the credential. Can be `SERVICE` or `STORAGE`.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the credential is only usable for read operations. Only applicable when purpose is `STORAGE`.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the credential.\n"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"},"usedForManagedStorage":{"type":"boolean"}},"requiredInputs":["purpose"],"stateInputs":{"description":"Input properties used for looking up and filtering Credential resources.\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/CredentialAwsIamRole:CredentialAwsIamRole"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/CredentialAzureManagedIdentity:CredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/CredentialAzureServicePrincipal:CredentialAzureServicePrincipal"},"comment":{"type":"string"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"credentialId":{"type":"string","description":"Unique ID of the credential.\n"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/CredentialDatabricksGcpServiceAccount:CredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update credential regardless of its dependents.\n"},"fullName":{"type":"string"},"isolationMode":{"type":"string","description":"Whether the credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically restrict access to only from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Credentials, which must be unique within the databricks_metastore. 
Change of the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the credential owner.\n"},"purpose":{"type":"string","description":"Indicates the purpose of the credential. Can be `SERVICE` or `STORAGE`.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the credential is only usable for read operations. Only applicable when purpose is `STORAGE`.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the credential.\n"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"},"usedForManagedStorage":{"type":"boolean"}},"type":"object"}},"databricks:index/customAppIntegration:CustomAppIntegration":{"description":"This resource allows you to enable [custom OAuth applications](https://docs.databricks.com/en/integrations/enable-disable-oauth.html#enable-custom-oauth-applications-using-the-databricks-ui).\n\n\u003e This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.CustomAppIntegration(\"this\", {\n    name: \"custom_integration_name\",\n    redirectUrls: [\"https://example.com\"],\n    scopes: [\"all-apis\"],\n    tokenAccessPolicy: {\n        accessTokenTtlInMinutes: 15,\n        refreshTokenTtlInMinutes: 30,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.CustomAppIntegration(\"this\",\n    name=\"custom_integration_name\",\n    redirect_urls=[\"https://example.com\"],\n    scopes=[\"all-apis\"],\n    token_access_policy={\n        \"access_token_ttl_in_minutes\": 15,\n        \"refresh_token_ttl_in_minutes\": 30,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.CustomAppIntegration(\"this\", new()\n    {\n        Name = \"custom_integration_name\",\n        RedirectUrls = new[]\n        {\n            \"https://example.com\",\n        },\n        Scopes = new[]\n        {\n            \"all-apis\",\n        },\n        TokenAccessPolicy = new Databricks.Inputs.CustomAppIntegrationTokenAccessPolicyArgs\n        {\n            AccessTokenTtlInMinutes = 15,\n            RefreshTokenTtlInMinutes = 30,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCustomAppIntegration(ctx, \"this\", \u0026databricks.CustomAppIntegrationArgs{\n\t\t\tName: pulumi.String(\"custom_integration_name\"),\n\t\t\tRedirectUrls: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"https://example.com\"),\n\t\t\t},\n\t\t\tScopes: 
pulumi.StringArray{\n\t\t\t\tpulumi.String(\"all-apis\"),\n\t\t\t},\n\t\t\tTokenAccessPolicy: \u0026databricks.CustomAppIntegrationTokenAccessPolicyArgs{\n\t\t\t\tAccessTokenTtlInMinutes:  pulumi.Int(15),\n\t\t\t\tRefreshTokenTtlInMinutes: pulumi.Int(30),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.CustomAppIntegration;\nimport com.pulumi.databricks.CustomAppIntegrationArgs;\nimport com.pulumi.databricks.inputs.CustomAppIntegrationTokenAccessPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new CustomAppIntegration(\"this\", CustomAppIntegrationArgs.builder()\n            .name(\"custom_integration_name\")\n            .redirectUrls(\"https://example.com\")\n            .scopes(\"all-apis\")\n            .tokenAccessPolicy(CustomAppIntegrationTokenAccessPolicyArgs.builder()\n                .accessTokenTtlInMinutes(15)\n                .refreshTokenTtlInMinutes(30)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:CustomAppIntegration\n    properties:\n      name: custom_integration_name\n      redirectUrls:\n        - https://example.com\n      scopes:\n        - all-apis\n      tokenAccessPolicy:\n        accessTokenTtlInMinutes: 15\n        refreshTokenTtlInMinutes: 30\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up Databricks workspaces.\n\n","properties":{"clientId":{"type":"string","description":"OAuth client-id generated by Databricks\n"},"clientSecret":{"type":"string","description":"OAuth client-secret generated by the Databricks if this is a confidential OAuth app.\n","secret":true},"confidential":{"type":"boolean","description":"Indicates whether an OAuth client secret is required to authenticate this client. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. Change requires a new resource.\n"},"createTime":{"type":"string"},"createdBy":{"type":"integer"},"creatorUsername":{"type":"string"},"integrationId":{"type":"string","description":"Unique integration id for the custom OAuth app.\n"},"name":{"type":"string","description":"Name of the custom OAuth app. Change requires a new resource.\n"},"redirectUrls":{"type":"array","items":{"type":"string"},"description":"List of OAuth redirect urls.\n"},"scopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes granted to the application. 
Supported scopes: `all-apis`, \u003cspan pulumi-lang-nodejs=\"`sql`\" pulumi-lang-dotnet=\"`Sql`\" pulumi-lang-go=\"`sql`\" pulumi-lang-python=\"`sql`\" pulumi-lang-yaml=\"`sql`\" pulumi-lang-java=\"`sql`\"\u003e`sql`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`offlineAccess`\" pulumi-lang-dotnet=\"`OfflineAccess`\" pulumi-lang-go=\"`offlineAccess`\" pulumi-lang-python=\"`offline_access`\" pulumi-lang-yaml=\"`offlineAccess`\" pulumi-lang-java=\"`offlineAccess`\"\u003e`offline_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`openid`\" pulumi-lang-dotnet=\"`Openid`\" pulumi-lang-go=\"`openid`\" pulumi-lang-python=\"`openid`\" pulumi-lang-yaml=\"`openid`\" pulumi-lang-java=\"`openid`\"\u003e`openid`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`profile`\" pulumi-lang-dotnet=\"`Profile`\" pulumi-lang-go=\"`profile`\" pulumi-lang-python=\"`profile`\" pulumi-lang-yaml=\"`profile`\" pulumi-lang-java=\"`profile`\"\u003e`profile`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`email`\" pulumi-lang-dotnet=\"`Email`\" pulumi-lang-go=\"`email`\" pulumi-lang-python=\"`email`\" pulumi-lang-yaml=\"`email`\" pulumi-lang-java=\"`email`\"\u003e`email`\u003c/span\u003e.\n"},"tokenAccessPolicy":{"$ref":"#/types/databricks:index/CustomAppIntegrationTokenAccessPolicy:CustomAppIntegrationTokenAccessPolicy"},"userAuthorizedScopes":{"type":"array","items":{"type":"string"}}},"required":["clientId","clientSecret","createTime","createdBy","creatorUsername","integrationId","name"],"inputProperties":{"clientId":{"type":"string","description":"OAuth client-id generated by Databricks\n"},"clientSecret":{"type":"string","description":"OAuth client-secret generated by the Databricks if this is a confidential OAuth app.\n","secret":true},"confidential":{"type":"boolean","description":"Indicates whether an OAuth client secret is required to authenticate this client. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. Change requires a new resource.\n","willReplaceOnChanges":true},"createTime":{"type":"string"},"createdBy":{"type":"integer"},"creatorUsername":{"type":"string"},"integrationId":{"type":"string","description":"Unique integration id for the custom OAuth app.\n"},"name":{"type":"string","description":"Name of the custom OAuth app. Change requires a new resource.\n","willReplaceOnChanges":true},"redirectUrls":{"type":"array","items":{"type":"string"},"description":"List of OAuth redirect urls.\n"},"scopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes granted to the application. 
Supported scopes: `all-apis`, \u003cspan pulumi-lang-nodejs=\"`sql`\" pulumi-lang-dotnet=\"`Sql`\" pulumi-lang-go=\"`sql`\" pulumi-lang-python=\"`sql`\" pulumi-lang-yaml=\"`sql`\" pulumi-lang-java=\"`sql`\"\u003e`sql`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`offlineAccess`\" pulumi-lang-dotnet=\"`OfflineAccess`\" pulumi-lang-go=\"`offlineAccess`\" pulumi-lang-python=\"`offline_access`\" pulumi-lang-yaml=\"`offlineAccess`\" pulumi-lang-java=\"`offlineAccess`\"\u003e`offline_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`openid`\" pulumi-lang-dotnet=\"`Openid`\" pulumi-lang-go=\"`openid`\" pulumi-lang-python=\"`openid`\" pulumi-lang-yaml=\"`openid`\" pulumi-lang-java=\"`openid`\"\u003e`openid`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`profile`\" pulumi-lang-dotnet=\"`Profile`\" pulumi-lang-go=\"`profile`\" pulumi-lang-python=\"`profile`\" pulumi-lang-yaml=\"`profile`\" pulumi-lang-java=\"`profile`\"\u003e`profile`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`email`\" pulumi-lang-dotnet=\"`Email`\" pulumi-lang-go=\"`email`\" pulumi-lang-python=\"`email`\" pulumi-lang-yaml=\"`email`\" pulumi-lang-java=\"`email`\"\u003e`email`\u003c/span\u003e.\n"},"tokenAccessPolicy":{"$ref":"#/types/databricks:index/CustomAppIntegrationTokenAccessPolicy:CustomAppIntegrationTokenAccessPolicy"},"userAuthorizedScopes":{"type":"array","items":{"type":"string"}}},"stateInputs":{"description":"Input properties used for looking up and filtering CustomAppIntegration resources.\n","properties":{"clientId":{"type":"string","description":"OAuth client-id generated by Databricks\n"},"clientSecret":{"type":"string","description":"OAuth client-secret generated by the Databricks if this is a confidential OAuth app.\n","secret":true},"confidential":{"type":"boolean","description":"Indicates whether an OAuth client secret is required to authenticate this client. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. Change requires a new resource.\n","willReplaceOnChanges":true},"createTime":{"type":"string"},"createdBy":{"type":"integer"},"creatorUsername":{"type":"string"},"integrationId":{"type":"string","description":"Unique integration id for the custom OAuth app.\n"},"name":{"type":"string","description":"Name of the custom OAuth app. Change requires a new resource.\n","willReplaceOnChanges":true},"redirectUrls":{"type":"array","items":{"type":"string"},"description":"List of OAuth redirect urls.\n"},"scopes":{"type":"array","items":{"type":"string"},"description":"OAuth scopes granted to the application. 
Supported scopes: `all-apis`, \u003cspan pulumi-lang-nodejs=\"`sql`\" pulumi-lang-dotnet=\"`Sql`\" pulumi-lang-go=\"`sql`\" pulumi-lang-python=\"`sql`\" pulumi-lang-yaml=\"`sql`\" pulumi-lang-java=\"`sql`\"\u003e`sql`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`offlineAccess`\" pulumi-lang-dotnet=\"`OfflineAccess`\" pulumi-lang-go=\"`offlineAccess`\" pulumi-lang-python=\"`offline_access`\" pulumi-lang-yaml=\"`offlineAccess`\" pulumi-lang-java=\"`offlineAccess`\"\u003e`offline_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`openid`\" pulumi-lang-dotnet=\"`Openid`\" pulumi-lang-go=\"`openid`\" pulumi-lang-python=\"`openid`\" pulumi-lang-yaml=\"`openid`\" pulumi-lang-java=\"`openid`\"\u003e`openid`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`profile`\" pulumi-lang-dotnet=\"`Profile`\" pulumi-lang-go=\"`profile`\" pulumi-lang-python=\"`profile`\" pulumi-lang-yaml=\"`profile`\" pulumi-lang-java=\"`profile`\"\u003e`profile`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`email`\" pulumi-lang-dotnet=\"`Email`\" pulumi-lang-go=\"`email`\" pulumi-lang-python=\"`email`\" pulumi-lang-yaml=\"`email`\" pulumi-lang-java=\"`email`\"\u003e`email`\u003c/span\u003e.\n"},"tokenAccessPolicy":{"$ref":"#/types/databricks:index/CustomAppIntegrationTokenAccessPolicy:CustomAppIntegrationTokenAccessPolicy"},"userAuthorizedScopes":{"type":"array","items":{"type":"string"}}},"type":"object"}},"databricks:index/dashboard:Dashboard":{"description":"This resource allows you to manage Databricks [Dashboards](https://docs.databricks.com/en/dashboards/index.html). To manage [Dashboards](https://docs.databricks.com/en/dashboards/index.html) you must have a warehouse access on your databricks workspace.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nDashboard using \u003cspan pulumi-lang-nodejs=\"`serializedDashboard`\" pulumi-lang-dotnet=\"`SerializedDashboard`\" pulumi-lang-go=\"`serializedDashboard`\" pulumi-lang-python=\"`serialized_dashboard`\" pulumi-lang-yaml=\"`serializedDashboard`\" pulumi-lang-java=\"`serializedDashboard`\"\u003e`serialized_dashboard`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst starter = databricks.getSqlWarehouse({\n    name: \"Starter Warehouse\",\n});\nconst dashboard = new databricks.Dashboard(\"dashboard\", {\n    displayName: \"New Dashboard\",\n    warehouseId: starter.then(starter =\u003e starter.id),\n    serializedDashboard: \"{\\\"pages\\\":[{\\\"name\\\":\\\"new_name\\\",\\\"displayName\\\":\\\"New Page\\\"}]}\",\n    embedCredentials: false,\n    parentPath: \"/Shared/provider-test\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nstarter = databricks.get_sql_warehouse(name=\"Starter Warehouse\")\ndashboard = databricks.Dashboard(\"dashboard\",\n    display_name=\"New Dashboard\",\n    warehouse_id=starter.id,\n    serialized_dashboard=\"{\\\"pages\\\":[{\\\"name\\\":\\\"new_name\\\",\\\"displayName\\\":\\\"New Page\\\"}]}\",\n    embed_credentials=False,\n    parent_path=\"/Shared/provider-test\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var starter = Databricks.GetSqlWarehouse.Invoke(new()\n    {\n        Name = \"Starter Warehouse\",\n    });\n\n    var dashboard = new 
Databricks.Dashboard(\"dashboard\", new()\n    {\n        DisplayName = \"New Dashboard\",\n        WarehouseId = starter.Apply(getSqlWarehouseResult =\u003e getSqlWarehouseResult.Id),\n        SerializedDashboard = \"{\\\"pages\\\":[{\\\"name\\\":\\\"new_name\\\",\\\"displayName\\\":\\\"New Page\\\"}]}\",\n        EmbedCredentials = false,\n        ParentPath = \"/Shared/provider-test\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tstarter, err := databricks.GetSqlWarehouse(ctx, \u0026databricks.GetSqlWarehouseArgs{\n\t\t\tName: pulumi.StringRef(\"Starter Warehouse\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDashboard(ctx, \"dashboard\", \u0026databricks.DashboardArgs{\n\t\t\tDisplayName:         pulumi.String(\"New Dashboard\"),\n\t\t\tWarehouseId:         pulumi.String(starter.Id),\n\t\t\tSerializedDashboard: pulumi.String(\"{\\\"pages\\\":[{\\\"name\\\":\\\"new_name\\\",\\\"displayName\\\":\\\"New Page\\\"}]}\"),\n\t\t\tEmbedCredentials:    pulumi.Bool(false),\n\t\t\tParentPath:          pulumi.String(\"/Shared/provider-test\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSqlWarehouseArgs;\nimport com.pulumi.databricks.Dashboard;\nimport com.pulumi.databricks.DashboardArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var starter = DatabricksFunctions.getSqlWarehouse(GetSqlWarehouseArgs.builder()\n            .name(\"Starter Warehouse\")\n            .build());\n\n        var dashboard = new Dashboard(\"dashboard\", DashboardArgs.builder()\n            .displayName(\"New Dashboard\")\n            .warehouseId(starter.id())\n            .serializedDashboard(\"{\\\"pages\\\":[{\\\"name\\\":\\\"new_name\\\",\\\"displayName\\\":\\\"New Page\\\"}]}\")\n            .embedCredentials(false)\n            .parentPath(\"/Shared/provider-test\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  dashboard:\n    type: databricks:Dashboard\n    properties:\n      displayName: New Dashboard\n      warehouseId: ${starter.id}\n      serializedDashboard: '{\"pages\":[{\"name\":\"new_name\",\"displayName\":\"New Page\"}]}'\n      embedCredentials: false # Optional\n      parentPath: /Shared/provider-test\nvariables:\n  starter:\n    fn::invoke:\n      function: databricks:getSqlWarehouse\n      arguments:\n        name: Starter Warehouse\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nDashboard using \u003cspan pulumi-lang-nodejs=\"`filePath`\" pulumi-lang-dotnet=\"`FilePath`\" pulumi-lang-go=\"`filePath`\" pulumi-lang-python=\"`file_path`\" pulumi-lang-yaml=\"`filePath`\" pulumi-lang-java=\"`filePath`\"\u003e`file_path`\u003c/span\u003e attribute:\n\n","properties":{"createTime":{"type":"string"},"dashboardChangeDetected":{"type":"boolean"},"dashboardId":{"type":"string"},"datasetCatalog":{"type":"string","description":"Sets 
the default catalog for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"datasetSchema":{"type":"string","description":"Sets the default schema for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"displayName":{"type":"string","description":"The display name of the dashboard.\n"},"embedCredentials":{"type":"boolean","description":"Whether to embed credentials in the dashboard. Default is \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"etag":{"type":"string"},"filePath":{"type":"string","description":"The path to the dashboard JSON file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`serializedDashboard`\" pulumi-lang-dotnet=\"`SerializedDashboard`\" pulumi-lang-go=\"`serializedDashboard`\" pulumi-lang-python=\"`serialized_dashboard`\" pulumi-lang-yaml=\"`serializedDashboard`\" pulumi-lang-java=\"`serializedDashboard`\"\u003e`serialized_dashboard`\u003c/span\u003e.\n"},"lifecycleState":{"type":"string"},"md5":{"type":"string"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the dashboard. Includes leading slash and no trailing slash.  If folder doesn't exist, it will be created.\n"},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DashboardProviderConfig:DashboardProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"serializedDashboard":{"type":"string","description":"The contents of the dashboard in serialized string form. Conflicts with \u003cspan pulumi-lang-nodejs=\"`filePath`\" pulumi-lang-dotnet=\"`FilePath`\" pulumi-lang-go=\"`filePath`\" pulumi-lang-python=\"`file_path`\" pulumi-lang-yaml=\"`filePath`\" pulumi-lang-java=\"`filePath`\"\u003e`file_path`\u003c/span\u003e.\n"},"updateTime":{"type":"string"},"warehouseId":{"type":"string","description":"The warehouse ID used to run the dashboard.\n"}},"required":["createTime","dashboardId","displayName","lifecycleState","md5","parentPath","path","updateTime","warehouseId"],"inputProperties":{"createTime":{"type":"string"},"dashboardChangeDetected":{"type":"boolean"},"dashboardId":{"type":"string"},"datasetCatalog":{"type":"string","description":"Sets the default catalog for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"datasetSchema":{"type":"string","description":"Sets the default schema for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"displayName":{"type":"string","description":"The display name of the dashboard.\n"},"embedCredentials":{"type":"boolean","description":"Whether to embed credentials in the dashboard. Default is \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"etag":{"type":"string"},"filePath":{"type":"string","description":"The path to the dashboard JSON file. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`serializedDashboard`\" pulumi-lang-dotnet=\"`SerializedDashboard`\" pulumi-lang-go=\"`serializedDashboard`\" pulumi-lang-python=\"`serialized_dashboard`\" pulumi-lang-yaml=\"`serializedDashboard`\" pulumi-lang-java=\"`serializedDashboard`\"\u003e`serialized_dashboard`\u003c/span\u003e.\n"},"lifecycleState":{"type":"string"},"md5":{"type":"string"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the dashboard. Includes leading slash and no trailing slash.  If folder doesn't exist, it will be created.\n","willReplaceOnChanges":true},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DashboardProviderConfig:DashboardProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"serializedDashboard":{"type":"string","description":"The contents of the dashboard in serialized string form. Conflicts with \u003cspan pulumi-lang-nodejs=\"`filePath`\" pulumi-lang-dotnet=\"`FilePath`\" pulumi-lang-go=\"`filePath`\" pulumi-lang-python=\"`file_path`\" pulumi-lang-yaml=\"`filePath`\" pulumi-lang-java=\"`filePath`\"\u003e`file_path`\u003c/span\u003e.\n"},"updateTime":{"type":"string"},"warehouseId":{"type":"string","description":"The warehouse ID used to run the dashboard.\n"}},"requiredInputs":["displayName","parentPath","warehouseId"],"stateInputs":{"description":"Input properties used for looking up and filtering Dashboard resources.\n","properties":{"createTime":{"type":"string"},"dashboardChangeDetected":{"type":"boolean"},"dashboardId":{"type":"string"},"datasetCatalog":{"type":"string","description":"Sets the default catalog for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"datasetSchema":{"type":"string","description":"Sets the default schema for all datasets in this dashboard. Does not impact table references that use fully qualified catalog names (ex: samples.nyctaxi.trips).\n"},"displayName":{"type":"string","description":"The display name of the dashboard.\n"},"embedCredentials":{"type":"boolean","description":"Whether to embed credentials in the dashboard. Default is \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"etag":{"type":"string"},"filePath":{"type":"string","description":"The path to the dashboard JSON file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`serializedDashboard`\" pulumi-lang-dotnet=\"`SerializedDashboard`\" pulumi-lang-go=\"`serializedDashboard`\" pulumi-lang-python=\"`serialized_dashboard`\" pulumi-lang-yaml=\"`serializedDashboard`\" pulumi-lang-java=\"`serializedDashboard`\"\u003e`serialized_dashboard`\u003c/span\u003e.\n"},"lifecycleState":{"type":"string"},"md5":{"type":"string"},"parentPath":{"type":"string","description":"The workspace path of the folder containing the dashboard. Includes leading slash and no trailing slash.  If folder doesn't exist, it will be created.\n","willReplaceOnChanges":true},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DashboardProviderConfig:DashboardProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"serializedDashboard":{"type":"string","description":"The contents of the dashboard in serialized string form. Conflicts with \u003cspan pulumi-lang-nodejs=\"`filePath`\" pulumi-lang-dotnet=\"`FilePath`\" pulumi-lang-go=\"`filePath`\" pulumi-lang-python=\"`file_path`\" pulumi-lang-yaml=\"`filePath`\" pulumi-lang-java=\"`filePath`\"\u003e`file_path`\u003c/span\u003e.\n"},"updateTime":{"type":"string"},"warehouseId":{"type":"string","description":"The warehouse ID used to run the dashboard.\n"}},"type":"object"}},"databricks:index/dataQualityMonitor:DataQualityMonitor":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis resource allows you to set up data quality monitoring checks for Unity Catalog objects, currently schema and table. \n\nFor the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003cspan pulumi-lang-nodejs=\"`objectType`\" pulumi-lang-dotnet=\"`ObjectType`\" pulumi-lang-go=\"`objectType`\" pulumi-lang-python=\"`object_type`\" pulumi-lang-yaml=\"`objectType`\" pulumi-lang-java=\"`objectType`\"\u003e`object_type`\u003c/span\u003e, you must either:\n1. be an owner of the table's parent catalog, have **USE_SCHEMA** on the table's parent schema, and have **SELECT** access on the table\n2. have **USE_CATALOG** on the table's parent catalog, be an owner of the table's parent schema, and have **SELECT** access on the table.\n3. have the following permissions:\n   - **USE_CATALOG** on the table's parent catalog\n   - **USE_SCHEMA** on the table's parent schema\n   - be an owner of the table.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Schema(\"this\", {\n    catalogName: \"my_catalog\",\n    name: \"my_schema\",\n});\nconst thisDataQualityMonitor = new databricks.DataQualityMonitor(\"this\", {\n    objectType: \"schema\",\n    objectId: _this.schemaId,\n    anomalyDetectionConfig: {},\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Schema(\"this\",\n    catalog_name=\"my_catalog\",\n    name=\"my_schema\")\nthis_data_quality_monitor = databricks.DataQualityMonitor(\"this\",\n    object_type=\"schema\",\n    object_id=this.schema_id,\n    anomaly_detection_config={})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Schema(\"this\", new()\n    {\n        CatalogName = \"my_catalog\",\n        Name = \"my_schema\",\n    });\n\n    var thisDataQualityMonitor = new Databricks.DataQualityMonitor(\"this\", new()\n    {\n        ObjectType = \"schema\",\n        ObjectId = @this.SchemaId,\n        AnomalyDetectionConfig = null,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewSchema(ctx, 
\"this\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: pulumi.String(\"my_catalog\"),\n\t\t\tName:        pulumi.String(\"my_schema\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDataQualityMonitor(ctx, \"this\", \u0026databricks.DataQualityMonitorArgs{\n\t\t\tObjectType:             pulumi.String(\"schema\"),\n\t\t\tObjectId:               this.SchemaId,\n\t\t\tAnomalyDetectionConfig: \u0026databricks.DataQualityMonitorAnomalyDetectionConfigArgs{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.DataQualityMonitor;\nimport com.pulumi.databricks.DataQualityMonitorArgs;\nimport com.pulumi.databricks.inputs.DataQualityMonitorAnomalyDetectionConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Schema(\"this\", SchemaArgs.builder()\n            .catalogName(\"my_catalog\")\n            .name(\"my_schema\")\n            .build());\n\n        var thisDataQualityMonitor = new DataQualityMonitor(\"thisDataQualityMonitor\", DataQualityMonitorArgs.builder()\n            .objectType(\"schema\")\n            .objectId(this_.schemaId())\n            .anomalyDetectionConfig(DataQualityMonitorAnomalyDetectionConfigArgs.builder()\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Schema\n    properties:\n      catalogName: my_catalog\n      name: my_schema\n  thisDataQualityMonitor:\n    type: databricks:DataQualityMonitor\n    name: this\n    properties:\n      objectType: schema\n      objectId: ${this.schemaId}\n      anomalyDetectionConfig: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorAnomalyDetectionConfig:DataQualityMonitorAnomalyDetectionConfig","description":"Anomaly Detection Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e object types\n"},"dataProfilingConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfig:DataQualityMonitorDataProfilingConfig","description":"Data Profiling Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e object types. Exactly one `Analysis Configuration`\nmust be present\n"},"objectId":{"type":"string","description":"The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorProviderConfig:DataQualityMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"}},"required":["objectId","objectType"],"inputProperties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorAnomalyDetectionConfig:DataQualityMonitorAnomalyDetectionConfig","description":"Anomaly Detection Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e object types\n"},"dataProfilingConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfig:DataQualityMonitorDataProfilingConfig","description":"Data Profiling Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e object types. Exactly one `Analysis Configuration`\nmust be present\n"},"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. 
In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorProviderConfig:DataQualityMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"}},"requiredInputs":["objectId","objectType"],"stateInputs":{"description":"Input properties used for looking up and filtering DataQualityMonitor resources.\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorAnomalyDetectionConfig:DataQualityMonitorAnomalyDetectionConfig","description":"Anomaly Detection Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e object types\n"},"dataProfilingConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorDataProfilingConfig:DataQualityMonitorDataProfilingConfig","description":"Data Profiling Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e object types. Exactly one `Analysis Configuration`\nmust be present\n"},"objectId":{"type":"string","description":"The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityMonitorProviderConfig:DataQualityMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"}},"databricks:index/dataQualityRefresh:DataQualityRefresh":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis resource allows you to refresh the data quality monitoring checks on Unity Catalog tables.\n\nYou must either:\n1. be an owner of the table's parent catalog, have **USE_SCHEMA** on the table's parent schema, and have **SELECT** access on the table\n2. have **USE_CATALOG** on the table's parent catalog, be an owner of the table's parent schema, and have **SELECT** access on the table.\n3. have the following permissions:\n   - **USE_CATALOG** on the table's parent catalog\n   - **USE_SCHEMA** on the table's parent schema\n   - be an owner of the table.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst myTestSchema = new databricks.Schema(\"myTestSchema\", {\n    catalogName: sandbox.id,\n    name: \"myTestSchema\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst myTestTable = new databricks.SqlTable(\"myTestTable\", {\n    catalogName: \"main\",\n    schemaName: myTestSchema.name,\n    name: \"bar\",\n    tableType: \"MANAGED\",\n    dataSourceFormat: \"DELTA\",\n    columns: [{\n        name: \"timestamp\",\n        type: \"int\",\n    }],\n});\nconst _this = new databricks.DataQualityMonitor(\"this\", {\n    objectType: \"table\",\n    objectId: myTestTable.id,\n    dataProfilingConfig: {\n        outputSchema: myTestSchema.schemaId,\n    },\n});\nconst thisDataQualityRefresh = new databricks.DataQualityRefresh(\"this\", {\n    objectType: \"table\",\n    objectId: myTestTable.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nmy_test_schema = databricks.Schema(\"myTestSchema\",\n    catalog_name=sandbox.id,\n    name=\"myTestSchema\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nmy_test_table = databricks.SqlTable(\"myTestTable\",\n    catalog_name=\"main\",\n    schema_name=my_test_schema.name,\n    name=\"bar\",\n    table_type=\"MANAGED\",\n    data_source_format=\"DELTA\",\n    columns=[{\n        
\"name\": \"timestamp\",\n        \"type\": \"int\",\n    }])\nthis = databricks.DataQualityMonitor(\"this\",\n    object_type=\"table\",\n    object_id=my_test_table.id,\n    data_profiling_config={\n        \"output_schema\": my_test_schema.schema_id,\n    })\nthis_data_quality_refresh = databricks.DataQualityRefresh(\"this\",\n    object_type=\"table\",\n    object_id=my_test_table.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var myTestSchema = new Databricks.Schema(\"myTestSchema\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"myTestSchema\",\n        Comment = \"this database is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var myTestTable = new Databricks.SqlTable(\"myTestTable\", new()\n    {\n        CatalogName = \"main\",\n        SchemaName = myTestSchema.Name,\n        Name = \"bar\",\n        TableType = \"MANAGED\",\n        DataSourceFormat = \"DELTA\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"timestamp\",\n                Type = \"int\",\n            },\n        },\n    });\n\n    var @this = new Databricks.DataQualityMonitor(\"this\", new()\n    {\n        ObjectType = \"table\",\n        ObjectId = myTestTable.Id,\n        DataProfilingConfig = new Databricks.Inputs.DataQualityMonitorDataProfilingConfigArgs\n        {\n            OutputSchema = myTestSchema.SchemaId,\n        },\n    });\n\n    var thisDataQualityRefresh = new Databricks.DataQualityRefresh(\"this\", new()\n    {\n        ObjectType = \"table\",\n        ObjectId = myTestTable.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyTestSchema, err := databricks.NewSchema(ctx, \"myTestSchema\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"myTestSchema\"),\n\t\t\tComment:     pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyTestTable, err := databricks.NewSqlTable(ctx, \"myTestTable\", \u0026databricks.SqlTableArgs{\n\t\t\tCatalogName:      pulumi.String(\"main\"),\n\t\t\tSchemaName:       myTestSchema.Name,\n\t\t\tName:             pulumi.String(\"bar\"),\n\t\t\tTableType:        pulumi.String(\"MANAGED\"),\n\t\t\tDataSourceFormat: pulumi.String(\"DELTA\"),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName: 
pulumi.String(\"timestamp\"),\n\t\t\t\t\tType: pulumi.String(\"int\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDataQualityMonitor(ctx, \"this\", \u0026databricks.DataQualityMonitorArgs{\n\t\t\tObjectType: pulumi.String(\"table\"),\n\t\t\tObjectId:   myTestTable.ID(),\n\t\t\tDataProfilingConfig: \u0026databricks.DataQualityMonitorDataProfilingConfigArgs{\n\t\t\t\tOutputSchema: myTestSchema.SchemaId,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDataQualityRefresh(ctx, \"this\", \u0026databricks.DataQualityRefreshArgs{\n\t\t\tObjectType: pulumi.String(\"table\"),\n\t\t\tObjectId:   myTestTable.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport com.pulumi.databricks.DataQualityMonitor;\nimport com.pulumi.databricks.DataQualityMonitorArgs;\nimport com.pulumi.databricks.inputs.DataQualityMonitorDataProfilingConfigArgs;\nimport com.pulumi.databricks.DataQualityRefresh;\nimport com.pulumi.databricks.DataQualityRefreshArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var myTestSchema = new Schema(\"myTestSchema\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"myTestSchema\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var myTestTable = new SqlTable(\"myTestTable\", SqlTableArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(myTestSchema.name())\n            .name(\"bar\")\n            .tableType(\"MANAGED\")\n            .dataSourceFormat(\"DELTA\")\n            .columns(SqlTableColumnArgs.builder()\n                .name(\"timestamp\")\n                .type(\"int\")\n                .build())\n            .build());\n\n        var this_ = new DataQualityMonitor(\"this\", DataQualityMonitorArgs.builder()\n            .objectType(\"table\")\n            .objectId(myTestTable.id())\n            .dataProfilingConfig(DataQualityMonitorDataProfilingConfigArgs.builder()\n                .outputSchema(myTestSchema.schemaId())\n                .build())\n            .build());\n\n        var thisDataQualityRefresh = new DataQualityRefresh(\"thisDataQualityRefresh\", DataQualityRefreshArgs.builder()\n            .objectType(\"table\")\n            .objectId(myTestTable.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is 
managed by terraform\n      properties:\n        purpose: testing\n  myTestSchema:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: myTestSchema\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n  myTestTable:\n    type: databricks:SqlTable\n    properties:\n      catalogName: main\n      schemaName: ${myTestSchema.name}\n      name: bar\n      tableType: MANAGED\n      dataSourceFormat: DELTA\n      columns:\n        - name: timestamp\n          type: int\n  this:\n    type: databricks:DataQualityMonitor\n    properties:\n      objectType: table\n      objectId: ${myTestTable.id}\n      dataProfilingConfig:\n        outputSchema: ${myTestSchema.schemaId}\n  thisDataQualityRefresh:\n    type: databricks:DataQualityRefresh\n    name: this\n    properties:\n      objectType: table\n      objectId: ${myTestTable.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"endTimeMs":{"type":"integer","description":"(integer) - Time when the refresh ended (milliseconds since 1/1/1970 UTC)\n"},"message":{"type":"string","description":"(string) - An optional message to give insight into the current state of the refresh (e.g. FAILURE messages)\n"},"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. 
The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003eor \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityRefreshProviderConfig:DataQualityRefreshProviderConfig","description":"Configure the provider for management through account provider.\n"},"refreshId":{"type":"integer","description":"(integer) - Unique id of the refresh operation\n"},"startTimeMs":{"type":"integer","description":"(integer) - Time when the refresh started (milliseconds since 1/1/1970 UTC)\n"},"state":{"type":"string","description":"(string) - The current state of the refresh. Possible values are: `MONITOR_REFRESH_STATE_CANCELED`, `MONITOR_REFRESH_STATE_FAILED`, `MONITOR_REFRESH_STATE_PENDING`, `MONITOR_REFRESH_STATE_RUNNING`, `MONITOR_REFRESH_STATE_SUCCESS`, `MONITOR_REFRESH_STATE_UNKNOWN`\n"},"trigger":{"type":"string","description":"(string) - What triggered the refresh. Possible values are: `MONITOR_REFRESH_TRIGGER_DATA_CHANGE`, `MONITOR_REFRESH_TRIGGER_MANUAL`, `MONITOR_REFRESH_TRIGGER_SCHEDULE`, `MONITOR_REFRESH_TRIGGER_UNKNOWN`\n"}},"required":["endTimeMs","message","objectId","objectType","refreshId","startTimeMs","state","trigger"],"inputProperties":{"objectId":{"type":"string","description":"The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityRefreshProviderConfig:DataQualityRefreshProviderConfig","description":"Configure the provider for management through account provider.\n"}},"requiredInputs":["objectId","objectType"],"stateInputs":{"description":"Input properties used for looking up and filtering DataQualityRefresh resources.\n","properties":{"endTimeMs":{"type":"integer","description":"(integer) - Time when the refresh ended (milliseconds since 1/1/1970 UTC)\n"},"message":{"type":"string","description":"(string) - An optional message to give insight into the current state of the refresh (e.g. FAILURE messages)\n"},"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. 
The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003eor \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/DataQualityRefreshProviderConfig:DataQualityRefreshProviderConfig","description":"Configure the provider for management through account provider.\n"},"refreshId":{"type":"integer","description":"(integer) - Unique id of the refresh operation\n"},"startTimeMs":{"type":"integer","description":"(integer) - Time when the refresh started (milliseconds since 1/1/1970 UTC)\n"},"state":{"type":"string","description":"(string) - The current state of the refresh. Possible values are: `MONITOR_REFRESH_STATE_CANCELED`, `MONITOR_REFRESH_STATE_FAILED`, `MONITOR_REFRESH_STATE_PENDING`, `MONITOR_REFRESH_STATE_RUNNING`, `MONITOR_REFRESH_STATE_SUCCESS`, `MONITOR_REFRESH_STATE_UNKNOWN`\n"},"trigger":{"type":"string","description":"(string) - What triggered the refresh. 
Possible values are: `MONITOR_REFRESH_TRIGGER_DATA_CHANGE`, `MONITOR_REFRESH_TRIGGER_MANUAL`, `MONITOR_REFRESH_TRIGGER_SCHEDULE`, `MONITOR_REFRESH_TRIGGER_UNKNOWN`\n"}},"type":"object"}},"databricks:index/databaseDatabaseCatalog:DatabaseDatabaseCatalog":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nDatabase Catalogs are databases inside a Lakebase Database Instance which are synced into a Postgres Catalog inside Unity Catalog.\n\n## Example Usage\n\n### Example\n\nThis example creates a Database Catalog based on an existing database in the Database Instance\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseDatabaseCatalog(\"this\", {\n    name: \"my_registered_catalog\",\n    databaseInstanceName: \"my-database-instance\",\n    databaseName: \"databricks_postgres\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseDatabaseCatalog(\"this\",\n    name=\"my_registered_catalog\",\n    database_instance_name=\"my-database-instance\",\n    database_name=\"databricks_postgres\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseDatabaseCatalog(\"this\", new()\n    {\n        Name = \"my_registered_catalog\",\n        DatabaseInstanceName = \"my-database-instance\",\n        DatabaseName = \"databricks_postgres\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseDatabaseCatalog(ctx, \"this\", \u0026databricks.DatabaseDatabaseCatalogArgs{\n\t\t\tName:                 pulumi.String(\"my_registered_catalog\"),\n\t\t\tDatabaseInstanceName: pulumi.String(\"my-database-instance\"),\n\t\t\tDatabaseName:         pulumi.String(\"databricks_postgres\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseDatabaseCatalog;\nimport com.pulumi.databricks.DatabaseDatabaseCatalogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseDatabaseCatalog(\"this\", DatabaseDatabaseCatalogArgs.builder()\n            .name(\"my_registered_catalog\")\n            .databaseInstanceName(\"my-database-instance\")\n            .databaseName(\"databricks_postgres\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseDatabaseCatalog\n    properties:\n      name: my_registered_catalog\n      databaseInstanceName: my-database-instance\n      databaseName: databricks_postgres\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThis example creates a Database Catalog along with a new database inside an 
existing Database Instance\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseDatabaseCatalog(\"this\", {\n    name: \"my_registered_catalog\",\n    databaseInstanceName: \"my-database-instance\",\n    databaseName: \"new_registered_catalog_database\",\n    createDatabaseIfNotExists: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseDatabaseCatalog(\"this\",\n    name=\"my_registered_catalog\",\n    database_instance_name=\"my-database-instance\",\n    database_name=\"new_registered_catalog_database\",\n    create_database_if_not_exists=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseDatabaseCatalog(\"this\", new()\n    {\n        Name = \"my_registered_catalog\",\n        DatabaseInstanceName = \"my-database-instance\",\n        DatabaseName = \"new_registered_catalog_database\",\n        CreateDatabaseIfNotExists = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseDatabaseCatalog(ctx, \"this\", \u0026databricks.DatabaseDatabaseCatalogArgs{\n\t\t\tName:                      pulumi.String(\"my_registered_catalog\"),\n\t\t\tDatabaseInstanceName:      pulumi.String(\"my-database-instance\"),\n\t\t\tDatabaseName:              pulumi.String(\"new_registered_catalog_database\"),\n\t\t\tCreateDatabaseIfNotExists: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseDatabaseCatalog;\nimport com.pulumi.databricks.DatabaseDatabaseCatalogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseDatabaseCatalog(\"this\", DatabaseDatabaseCatalogArgs.builder()\n            .name(\"my_registered_catalog\")\n            .databaseInstanceName(\"my-database-instance\")\n            .databaseName(\"new_registered_catalog_database\")\n            .createDatabaseIfNotExists(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseDatabaseCatalog\n    properties:\n      name: my_registered_catalog\n      databaseInstanceName: my-database-instance\n      databaseName: new_registered_catalog_database\n      createDatabaseIfNotExists: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThis example creates a DatabaseInstance and then a Database Catalog inside it\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instance = new databricks.DatabaseInstance(\"instance\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_1\",\n});\nconst catalog = new 
databricks.DatabaseDatabaseCatalog(\"catalog\", {\n    name: \"my_registered_catalog\",\n    databaseInstanceName: instance.name,\n    databaseName: \"new_registered_catalog_database\",\n    createDatabaseIfNotExists: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance = databricks.DatabaseInstance(\"instance\",\n    name=\"my-database-instance\",\n    capacity=\"CU_1\")\ncatalog = databricks.DatabaseDatabaseCatalog(\"catalog\",\n    name=\"my_registered_catalog\",\n    database_instance_name=instance.name,\n    database_name=\"new_registered_catalog_database\",\n    create_database_if_not_exists=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instance = new Databricks.DatabaseInstance(\"instance\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_1\",\n    });\n\n    var catalog = new Databricks.DatabaseDatabaseCatalog(\"catalog\", new()\n    {\n        Name = \"my_registered_catalog\",\n        DatabaseInstanceName = instance.Name,\n        DatabaseName = \"new_registered_catalog_database\",\n        CreateDatabaseIfNotExists = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstance, err := databricks.NewDatabaseInstance(ctx, \"instance\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:     pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity: pulumi.String(\"CU_1\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDatabaseDatabaseCatalog(ctx, \"catalog\", \u0026databricks.DatabaseDatabaseCatalogArgs{\n\t\t\tName:                      pulumi.String(\"my_registered_catalog\"),\n\t\t\tDatabaseInstanceName:      instance.Name,\n\t\t\tDatabaseName:              pulumi.String(\"new_registered_catalog_database\"),\n\t\t\tCreateDatabaseIfNotExists: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport com.pulumi.databricks.DatabaseInstanceArgs;\nimport com.pulumi.databricks.DatabaseDatabaseCatalog;\nimport com.pulumi.databricks.DatabaseDatabaseCatalogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instance = new DatabaseInstance(\"instance\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .capacity(\"CU_1\")\n            .build());\n\n        var catalog = new DatabaseDatabaseCatalog(\"catalog\", DatabaseDatabaseCatalogArgs.builder()\n            .name(\"my_registered_catalog\")\n            .databaseInstanceName(instance.name())\n            .databaseName(\"new_registered_catalog_database\")\n            .createDatabaseIfNotExists(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instance:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n     
 capacity: CU_1\n  catalog:\n    type: databricks:DatabaseDatabaseCatalog\n    properties:\n      name: my_registered_catalog\n      databaseInstanceName: ${instance.name}\n      databaseName: new_registered_catalog_database\n      createDatabaseIfNotExists: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createDatabaseIfNotExists":{"type":"boolean"},"databaseInstanceName":{"type":"string","description":"The name of the DatabaseInstance housing the database\n"},"databaseName":{"type":"string","description":"The name of the database (in an instance) associated with the catalog\n"},"name":{"type":"string","description":"The name of the catalog in UC\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseDatabaseCatalogProviderConfig:DatabaseDatabaseCatalogProviderConfig","description":"Configure the provider for management through account provider.\n"},"uid":{"type":"string","description":"(string)\n"}},"required":["createDatabaseIfNotExists","databaseInstanceName","databaseName","name","uid"],"inputProperties":{"createDatabaseIfNotExists":{"type":"boolean"},"databaseInstanceName":{"type":"string","description":"The name of the DatabaseInstance housing the database\n"},"databaseName":{"type":"string","description":"The name of the database (in an instance) associated with the catalog\n"},"name":{"type":"string","description":"The name of the catalog in UC\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseDatabaseCatalogProviderConfig:DatabaseDatabaseCatalogProviderConfig","description":"Configure the provider for management through account provider.\n"}},"requiredInputs":["databaseInstanceName","databaseName"],"stateInputs":{"description":"Input properties used for looking up and filtering DatabaseDatabaseCatalog resources.\n","properties":{"createDatabaseIfNotExists":{"type":"boolean"},"databaseInstanceName":{"type":"string","description":"The name of the DatabaseInstance housing the database\n"},"databaseName":{"type":"string","description":"The name of the database (in an instance) associated with the catalog\n"},"name":{"type":"string","description":"The name of the catalog in UC\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseDatabaseCatalogProviderConfig:DatabaseDatabaseCatalogProviderConfig","description":"Configure the provider for management through account provider.\n"},"uid":{"type":"string","description":"(string)\n"}},"type":"object"}},"databricks:index/databaseInstance:DatabaseInstance":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nLakebase Database Instances are managed Postgres instances, composed of a primary Postgres compute instance and 0 or more read replica instances.\n\n## Example Usage\n\n### Basic Example\n\nThis example creates a simple Database Instance with the specified name and capacity.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseInstance(\"this\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_2\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseInstance(\"this\",\n    name=\"my-database-instance\",\n    capacity=\"CU_2\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await 
Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseInstance(\"this\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_2\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseInstance(ctx, \"this\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:     pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity: pulumi.String(\"CU_2\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport com.pulumi.databricks.DatabaseInstanceArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseInstance(\"this\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .capacity(\"CU_2\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n      capacity: CU_2\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example with Readable Secondaries\n\nThis example creates a Database Instance with readable secondaries (and HA) enabled.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseInstance(\"this\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_2\",\n    nodeCount: 2,\n    enableReadableSecondaries: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseInstance(\"this\",\n    name=\"my-database-instance\",\n    capacity=\"CU_2\",\n    node_count=2,\n    enable_readable_secondaries=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseInstance(\"this\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_2\",\n        NodeCount = 2,\n        EnableReadableSecondaries = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseInstance(ctx, \"this\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:                      pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity:                  pulumi.String(\"CU_2\"),\n\t\t\tNodeCount:                 pulumi.Int(2),\n\t\t\tEnableReadableSecondaries: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport 
com.pulumi.databricks.DatabaseInstanceArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseInstance(\"this\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .capacity(\"CU_2\")\n            .nodeCount(2)\n            .enableReadableSecondaries(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n      capacity: CU_2\n      nodeCount: 2\n      enableReadableSecondaries: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example Child Instance Created From Parent\n\nThis example creates a child Database Instance from a specified parent Database Instance at the current point in time.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst child = new databricks.DatabaseInstance(\"child\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_2\",\n    parentInstanceRef: {\n        name: \"my-parent-instance\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nchild = databricks.DatabaseInstance(\"child\",\n    name=\"my-database-instance\",\n    capacity=\"CU_2\",\n    parent_instance_ref={\n        \"name\": \"my-parent-instance\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var child = new Databricks.DatabaseInstance(\"child\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_2\",\n        ParentInstanceRef = new Databricks.Inputs.DatabaseInstanceParentInstanceRefArgs\n        {\n            Name = \"my-parent-instance\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseInstance(ctx, \"child\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:     pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity: pulumi.String(\"CU_2\"),\n\t\t\tParentInstanceRef: \u0026databricks.DatabaseInstanceParentInstanceRefArgs{\n\t\t\t\tName: pulumi.String(\"my-parent-instance\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport com.pulumi.databricks.DatabaseInstanceArgs;\nimport com.pulumi.databricks.inputs.DatabaseInstanceParentInstanceRefArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var child = new DatabaseInstance(\"child\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n    
        .capacity(\"CU_2\")\n            .parentInstanceRef(DatabaseInstanceParentInstanceRefArgs.builder()\n                .name(\"my-parent-instance\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  child:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n      capacity: CU_2\n      parentInstanceRef:\n        name: my-parent-instance\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example with a usage policy and custom tags\n\nThis example creates a Database Instance with an associated usage policy and custom tags.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseInstance(\"this\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_8\",\n    usagePolicyId: \"948192fa-a98b-498f-a09b-ecee79d8b983\",\n    customTags: [\n        {\n            key: \"custom_tag_key1\",\n            value: \"custom_tag_value1\",\n        },\n        {\n            key: \"custom_tag_key2\",\n            value: \"custom_tag_value2\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseInstance(\"this\",\n    name=\"my-database-instance\",\n    capacity=\"CU_8\",\n    usage_policy_id=\"948192fa-a98b-498f-a09b-ecee79d8b983\",\n    custom_tags=[\n        {\n            \"key\": \"custom_tag_key1\",\n            \"value\": \"custom_tag_value1\",\n        },\n        {\n            \"key\": \"custom_tag_key2\",\n            \"value\": \"custom_tag_value2\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseInstance(\"this\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_8\",\n        UsagePolicyId = \"948192fa-a98b-498f-a09b-ecee79d8b983\",\n        CustomTags = new[]\n        {\n            new Databricks.Inputs.DatabaseInstanceCustomTagArgs\n            {\n                Key = \"custom_tag_key1\",\n                Value = \"custom_tag_value1\",\n            },\n            new Databricks.Inputs.DatabaseInstanceCustomTagArgs\n            {\n                Key = \"custom_tag_key2\",\n                Value = \"custom_tag_value2\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseInstance(ctx, \"this\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:          pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity:      pulumi.String(\"CU_8\"),\n\t\t\tUsagePolicyId: pulumi.String(\"948192fa-a98b-498f-a09b-ecee79d8b983\"),\n\t\t\tCustomTags: databricks.DatabaseInstanceCustomTagArray{\n\t\t\t\t\u0026databricks.DatabaseInstanceCustomTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"custom_tag_key1\"),\n\t\t\t\t\tValue: pulumi.String(\"custom_tag_value1\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.DatabaseInstanceCustomTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"custom_tag_key2\"),\n\t\t\t\t\tValue: pulumi.String(\"custom_tag_value2\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn 
nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport com.pulumi.databricks.DatabaseInstanceArgs;\nimport com.pulumi.databricks.inputs.DatabaseInstanceCustomTagArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseInstance(\"this\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .capacity(\"CU_8\")\n            .usagePolicyId(\"948192fa-a98b-498f-a09b-ecee79d8b983\")\n            .customTags(            \n                DatabaseInstanceCustomTagArgs.builder()\n                    .key(\"custom_tag_key1\")\n                    .value(\"custom_tag_value1\")\n                    .build(),\n                DatabaseInstanceCustomTagArgs.builder()\n                    .key(\"custom_tag_key2\")\n                    .value(\"custom_tag_value2\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n      capacity: CU_8\n      usagePolicyId: 948192fa-a98b-498f-a09b-ecee79d8b983\n      customTags:\n        - key: custom_tag_key1\n          value: custom_tag_value1\n        - key: custom_tag_key2\n          value: custom_tag_value2\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"capacity":{"type":"string","description":"The sku of the instance. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"childInstanceRefs":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceChildInstanceRef:DatabaseInstanceChildInstanceRef"},"description":"(list of DatabaseInstanceRef) - The refs of the child instances. This is only available if the instance is a\nparent instance\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the instance was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the instance\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceCustomTag:DatabaseInstanceCustomTag"},"description":"Custom tags associated with the instance. This field is only included on create and update responses\n"},"effectiveCapacity":{"type":"string","description":"(string, deprecated) - Deprecated. The sku of the instance; this field will always match the value of capacity.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveCustomTags":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceEffectiveCustomTag:DatabaseInstanceEffectiveCustomTag"},"description":"(list of CustomTag) - The recorded custom tags associated with the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n"},"effectiveEnablePgNativeLogin":{"type":"boolean","description":"(boolean) - Whether the instance has PG native password login enabled.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveEnableReadableSecondaries":{"type":"boolean","description":"(boolean) - Whether secondaries serving read-only traffic are enabled. Defaults to false.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveNodeCount":{"type":"integer","description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveRetentionWindowInDays":{"type":"integer","description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveStopped":{"type":"boolean","description":"(boolean) - Whether the instance is stopped.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The policy that is applied to the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"enablePgNativeLogin":{"type":"boolean","description":"Whether to enable PG native password login on the instance. Defaults to false\n"},"enableReadableSecondaries":{"type":"boolean","description":"Whether to enable secondaries to serve read-only traffic. Defaults to false\n"},"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"nodeCount":{"type":"integer","description":"The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries. This field is input only, see\u003cspan pulumi-lang-nodejs=\" effectiveNodeCount \" pulumi-lang-dotnet=\" EffectiveNodeCount \" pulumi-lang-go=\" effectiveNodeCount \" pulumi-lang-python=\" effective_node_count \" pulumi-lang-yaml=\" effectiveNodeCount \" pulumi-lang-java=\" effectiveNodeCount \"\u003e effective_node_count \u003c/span\u003efor the output\n"},"parentInstanceRef":{"$ref":"#/types/databricks:index/DatabaseInstanceParentInstanceRef:DatabaseInstanceParentInstanceRef","description":"The ref of the parent instance. This is only available if the instance is a\nchild instance.\nInput: For specifying the parent instance to create a child instance. 
Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"pgVersion":{"type":"string","description":"(string) - The version of Postgres running on the instance\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseInstanceProviderConfig:DatabaseInstanceProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"readOnlyDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read only access. This is only available if\u003cspan pulumi-lang-nodejs=\"\nenableReadableSecondaries \" pulumi-lang-dotnet=\"\nEnableReadableSecondaries \" pulumi-lang-go=\"\nenableReadableSecondaries \" pulumi-lang-python=\"\nenable_readable_secondaries \" pulumi-lang-yaml=\"\nenableReadableSecondaries \" pulumi-lang-java=\"\nenableReadableSecondaries \"\u003e\nenable_readable_secondaries \u003c/span\u003eis true\n"},"readWriteDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read+write access\n"},"retentionWindowInDays":{"type":"integer","description":"The retention window for the instance. This is the time window in days\nfor which the historical data is retained. The default value is 7 days.\nValid values are 2 to 35 days\n"},"state":{"type":"string","description":"(string) - The current state of the instance. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"stopped":{"type":"boolean","description":"Whether to stop the instance. An input only param, see\u003cspan pulumi-lang-nodejs=\" effectiveStopped \" pulumi-lang-dotnet=\" EffectiveStopped \" pulumi-lang-go=\" effectiveStopped \" pulumi-lang-python=\" effective_stopped \" pulumi-lang-yaml=\" effectiveStopped \" pulumi-lang-java=\" effectiveStopped \"\u003e effective_stopped \u003c/span\u003efor the output\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"},"usagePolicyId":{"type":"string","description":"The desired usage policy to associate with the instance\n"}},"required":["childInstanceRefs","creationTime","creator","customTags","effectiveCapacity","effectiveCustomTags","effectiveEnablePgNativeLogin","effectiveEnableReadableSecondaries","effectiveNodeCount","effectiveRetentionWindowInDays","effectiveStopped","effectiveUsagePolicyId","enablePgNativeLogin","enableReadableSecondaries","name","nodeCount","pgVersion","readOnlyDns","readWriteDns","retentionWindowInDays","state","stopped","uid","usagePolicyId"],"inputProperties":{"capacity":{"type":"string","description":"The sku of the instance. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceCustomTag:DatabaseInstanceCustomTag"},"description":"Custom tags associated with the instance. This field is only included on create and update responses\n"},"enablePgNativeLogin":{"type":"boolean","description":"Whether to enable PG native password login on the instance. Defaults to false\n"},"enableReadableSecondaries":{"type":"boolean","description":"Whether to enable secondaries to serve read-only traffic. Defaults to false\n"},"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"nodeCount":{"type":"integer","description":"The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. 
Defaults to\n1 primary and 0 secondaries. This field is input only, see\u003cspan pulumi-lang-nodejs=\" effectiveNodeCount \" pulumi-lang-dotnet=\" EffectiveNodeCount \" pulumi-lang-go=\" effectiveNodeCount \" pulumi-lang-python=\" effective_node_count \" pulumi-lang-yaml=\" effectiveNodeCount \" pulumi-lang-java=\" effectiveNodeCount \"\u003e effective_node_count \u003c/span\u003efor the output\n"},"parentInstanceRef":{"$ref":"#/types/databricks:index/DatabaseInstanceParentInstanceRef:DatabaseInstanceParentInstanceRef","description":"The ref of the parent instance. This is only available if the instance is\nchild instance.\nInput: For specifying the parent instance to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseInstanceProviderConfig:DatabaseInstanceProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"retentionWindowInDays":{"type":"integer","description":"The retention window for the instance. This is the time window in days\nfor which the historical data is retained. The default value is 7 days.\nValid values are 2 to 35 days\n"},"stopped":{"type":"boolean","description":"Whether to stop the instance. An input only param, see\u003cspan pulumi-lang-nodejs=\" effectiveStopped \" pulumi-lang-dotnet=\" EffectiveStopped \" pulumi-lang-go=\" effectiveStopped \" pulumi-lang-python=\" effective_stopped \" pulumi-lang-yaml=\" effectiveStopped \" pulumi-lang-java=\" effectiveStopped \"\u003e effective_stopped \u003c/span\u003efor the output\n"},"usagePolicyId":{"type":"string","description":"The desired usage policy to associate with the instance\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering DatabaseInstance resources.\n","properties":{"capacity":{"type":"string","description":"The sku of the instance. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"childInstanceRefs":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceChildInstanceRef:DatabaseInstanceChildInstanceRef"},"description":"(list of DatabaseInstanceRef) - The refs of the child instances. This is only available if the instance is\nparent instance\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the instance was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the instance\n"},"customTags":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceCustomTag:DatabaseInstanceCustomTag"},"description":"Custom tags associated with the instance. This field is only included on create and update responses\n"},"effectiveCapacity":{"type":"string","description":"(string, deprecated) - Deprecated. The sku of the instance; this field will always match the value of capacity.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveCustomTags":{"type":"array","items":{"$ref":"#/types/databricks:index/DatabaseInstanceEffectiveCustomTag:DatabaseInstanceEffectiveCustomTag"},"description":"(list of CustomTag) - The recorded custom tags associated with the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n"},"effectiveEnablePgNativeLogin":{"type":"boolean","description":"(boolean) - Whether the instance has PG native password login enabled.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveEnableReadableSecondaries":{"type":"boolean","description":"(boolean) - Whether secondaries serving read-only traffic are enabled. Defaults to false.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveNodeCount":{"type":"integer","description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveRetentionWindowInDays":{"type":"integer","description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveStopped":{"type":"boolean","description":"(boolean) - Whether the instance is stopped.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveUsagePolicyId":{"type":"string","description":"(string) - The policy that is applied to the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"enablePgNativeLogin":{"type":"boolean","description":"Whether to enable PG native password login on the instance. Defaults to false\n"},"enableReadableSecondaries":{"type":"boolean","description":"Whether to enable secondaries to serve read-only traffic. Defaults to false\n"},"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"nodeCount":{"type":"integer","description":"The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries. This field is input only, see\u003cspan pulumi-lang-nodejs=\" effectiveNodeCount \" pulumi-lang-dotnet=\" EffectiveNodeCount \" pulumi-lang-go=\" effectiveNodeCount \" pulumi-lang-python=\" effective_node_count \" pulumi-lang-yaml=\" effectiveNodeCount \" pulumi-lang-java=\" effectiveNodeCount \"\u003e effective_node_count \u003c/span\u003efor the output\n"},"parentInstanceRef":{"$ref":"#/types/databricks:index/DatabaseInstanceParentInstanceRef:DatabaseInstanceParentInstanceRef","description":"The ref of the parent instance. This is only available if the instance is\nchild instance.\nInput: For specifying the parent instance to create a child instance. 
Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"pgVersion":{"type":"string","description":"(string) - The version of Postgres running on the instance\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseInstanceProviderConfig:DatabaseInstanceProviderConfig","description":"Configure the provider for management through account provider.\n"},"purgeOnDelete":{"type":"boolean","description":"Purge the resource on delete\n"},"readOnlyDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read only access. This is only available if\u003cspan pulumi-lang-nodejs=\"\nenableReadableSecondaries \" pulumi-lang-dotnet=\"\nEnableReadableSecondaries \" pulumi-lang-go=\"\nenableReadableSecondaries \" pulumi-lang-python=\"\nenable_readable_secondaries \" pulumi-lang-yaml=\"\nenableReadableSecondaries \" pulumi-lang-java=\"\nenableReadableSecondaries \"\u003e\nenable_readable_secondaries \u003c/span\u003eis true\n"},"readWriteDns":{"type":"string","description":"(string) - The DNS endpoint to connect to the instance for read+write access\n"},"retentionWindowInDays":{"type":"integer","description":"The retention window for the instance. This is the time window in days\nfor which the historical data is retained. The default value is 7 days.\nValid values are 2 to 35 days\n"},"state":{"type":"string","description":"(string) - The current state of the instance. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"stopped":{"type":"boolean","description":"Whether to stop the instance. An input only param, see\u003cspan pulumi-lang-nodejs=\" effectiveStopped \" pulumi-lang-dotnet=\" EffectiveStopped \" pulumi-lang-go=\" effectiveStopped \" pulumi-lang-python=\" effective_stopped \" pulumi-lang-yaml=\" effectiveStopped \" pulumi-lang-java=\" effectiveStopped \"\u003e effective_stopped \u003c/span\u003efor the output\n"},"uid":{"type":"string","description":"(string) - Id of the ref database instance\n"},"usagePolicyId":{"type":"string","description":"The desired usage policy to associate with the instance\n"}},"type":"object"}},"databricks:index/databaseSyncedDatabaseTable:DatabaseSyncedDatabaseTable":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nLakebase Synced Database Tables are Postgres tables automatically synced from a source table inside Unity Catalog.\nThey can be used to serve realtime queries without the operational overhead of managing ETL pipelines. \n\nSynced Database Tables can be configured inside either Database Catalogs or Standard Catalogs. 
Multiple\nSynced Database Tables can be bin packed inside a single pipeline to optimize costs.\n\n## Example Usage\n\n### Creating a Synced Database Table inside a Database Catalog\n\nThis example creates a Synced Database Table inside a Database Catalog.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseSyncedDatabaseTable(\"this\", {\n    name: \"my_database_catalog.public.synced_table\",\n    logicalDatabaseName: \"databricks_postgres\",\n    spec: {\n        schedulingPolicy: \"SNAPSHOT\",\n        sourceTableFullName: \"source_delta.tpch.customer\",\n        primaryKeyColumns: [\"c_custkey\"],\n        createDatabaseObjectsIfMissing: true,\n        newPipelineSpec: {\n            storageCatalog: \"source_delta\",\n            storageSchema: \"tpch\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseSyncedDatabaseTable(\"this\",\n    name=\"my_database_catalog.public.synced_table\",\n    logical_database_name=\"databricks_postgres\",\n    spec={\n        \"scheduling_policy\": \"SNAPSHOT\",\n        \"source_table_full_name\": \"source_delta.tpch.customer\",\n        \"primary_key_columns\": [\"c_custkey\"],\n        \"create_database_objects_if_missing\": True,\n        \"new_pipeline_spec\": {\n            \"storage_catalog\": \"source_delta\",\n            \"storage_schema\": \"tpch\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseSyncedDatabaseTable(\"this\", new()\n    {\n        Name = \"my_database_catalog.public.synced_table\",\n        LogicalDatabaseName = \"databricks_postgres\",\n        Spec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecArgs\n        {\n            SchedulingPolicy = \"SNAPSHOT\",\n            SourceTableFullName = \"source_delta.tpch.customer\",\n            PrimaryKeyColumns = new[]\n            {\n                \"c_custkey\",\n            },\n            CreateDatabaseObjectsIfMissing = true,\n            NewPipelineSpec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs\n            {\n                StorageCatalog = \"source_delta\",\n                StorageSchema = \"tpch\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseSyncedDatabaseTable(ctx, \"this\", \u0026databricks.DatabaseSyncedDatabaseTableArgs{\n\t\t\tName:                pulumi.String(\"my_database_catalog.public.synced_table\"),\n\t\t\tLogicalDatabaseName: pulumi.String(\"databricks_postgres\"),\n\t\t\tSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecArgs{\n\t\t\t\tSchedulingPolicy:    pulumi.String(\"SNAPSHOT\"),\n\t\t\t\tSourceTableFullName: pulumi.String(\"source_delta.tpch.customer\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"c_custkey\"),\n\t\t\t\t},\n\t\t\t\tCreateDatabaseObjectsIfMissing: pulumi.Bool(true),\n\t\t\t\tNewPipelineSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs{\n\t\t\t\t\tStorageCatalog: 
pulumi.String(\"source_delta\"),\n\t\t\t\t\tStorageSchema:  pulumi.String(\"tpch\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTable;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTableArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseSyncedDatabaseTable(\"this\", DatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_database_catalog.public.synced_table\")\n            .logicalDatabaseName(\"databricks_postgres\")\n            .spec(DatabaseSyncedDatabaseTableSpecArgs.builder()\n                .schedulingPolicy(\"SNAPSHOT\")\n                .sourceTableFullName(\"source_delta.tpch.customer\")\n                .primaryKeyColumns(\"c_custkey\")\n                .createDatabaseObjectsIfMissing(true)\n                .newPipelineSpec(DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs.builder()\n                    .storageCatalog(\"source_delta\")\n                    .storageSchema(\"tpch\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseSyncedDatabaseTable\n    properties:\n      name: my_database_catalog.public.synced_table\n      logicalDatabaseName: databricks_postgres\n      spec:\n        schedulingPolicy: SNAPSHOT\n        sourceTableFullName: source_delta.tpch.customer\n        primaryKeyColumns:\n          - c_custkey\n        createDatabaseObjectsIfMissing: true\n        newPipelineSpec:\n          storageCatalog: source_delta\n          storageSchema: tpch\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Creating a Synced Database Table inside a Standard Catalog\n\nThis example creates a Synced Database Table inside a Standard Catalog.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DatabaseSyncedDatabaseTable(\"this\", {\n    name: \"my_standard_catalog.public.synced_table\",\n    logicalDatabaseName: \"databricks_postgres\",\n    databaseInstanceName: \"my-database-instance\",\n    spec: {\n        schedulingPolicy: \"SNAPSHOT\",\n        sourceTableFullName: \"source_delta.tpch.customer\",\n        primaryKeyColumns: [\"c_custkey\"],\n        createDatabaseObjectsIfMissing: true,\n        newPipelineSpec: {\n            storageCatalog: \"source_delta\",\n            storageSchema: \"tpch\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DatabaseSyncedDatabaseTable(\"this\",\n    name=\"my_standard_catalog.public.synced_table\",\n    logical_database_name=\"databricks_postgres\",\n    database_instance_name=\"my-database-instance\",\n    spec={\n        \"scheduling_policy\": \"SNAPSHOT\",\n        \"source_table_full_name\": 
\"source_delta.tpch.customer\",\n        \"primary_key_columns\": [\"c_custkey\"],\n        \"create_database_objects_if_missing\": True,\n        \"new_pipeline_spec\": {\n            \"storage_catalog\": \"source_delta\",\n            \"storage_schema\": \"tpch\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DatabaseSyncedDatabaseTable(\"this\", new()\n    {\n        Name = \"my_standard_catalog.public.synced_table\",\n        LogicalDatabaseName = \"databricks_postgres\",\n        DatabaseInstanceName = \"my-database-instance\",\n        Spec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecArgs\n        {\n            SchedulingPolicy = \"SNAPSHOT\",\n            SourceTableFullName = \"source_delta.tpch.customer\",\n            PrimaryKeyColumns = new[]\n            {\n                \"c_custkey\",\n            },\n            CreateDatabaseObjectsIfMissing = true,\n            NewPipelineSpec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs\n            {\n                StorageCatalog = \"source_delta\",\n                StorageSchema = \"tpch\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDatabaseSyncedDatabaseTable(ctx, \"this\", \u0026databricks.DatabaseSyncedDatabaseTableArgs{\n\t\t\tName:                 pulumi.String(\"my_standard_catalog.public.synced_table\"),\n\t\t\tLogicalDatabaseName:  pulumi.String(\"databricks_postgres\"),\n\t\t\tDatabaseInstanceName: pulumi.String(\"my-database-instance\"),\n\t\t\tSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecArgs{\n\t\t\t\tSchedulingPolicy:    pulumi.String(\"SNAPSHOT\"),\n\t\t\t\tSourceTableFullName: pulumi.String(\"source_delta.tpch.customer\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"c_custkey\"),\n\t\t\t\t},\n\t\t\t\tCreateDatabaseObjectsIfMissing: pulumi.Bool(true),\n\t\t\t\tNewPipelineSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs{\n\t\t\t\t\tStorageCatalog: pulumi.String(\"source_delta\"),\n\t\t\t\t\tStorageSchema:  pulumi.String(\"tpch\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTable;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTableArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DatabaseSyncedDatabaseTable(\"this\", DatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_standard_catalog.public.synced_table\")\n            .logicalDatabaseName(\"databricks_postgres\")\n            
.databaseInstanceName(\"my-database-instance\")\n            .spec(DatabaseSyncedDatabaseTableSpecArgs.builder()\n                .schedulingPolicy(\"SNAPSHOT\")\n                .sourceTableFullName(\"source_delta.tpch.customer\")\n                .primaryKeyColumns(\"c_custkey\")\n                .createDatabaseObjectsIfMissing(true)\n                .newPipelineSpec(DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs.builder()\n                    .storageCatalog(\"source_delta\")\n                    .storageSchema(\"tpch\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DatabaseSyncedDatabaseTable\n    properties:\n      name: my_standard_catalog.public.synced_table\n      logicalDatabaseName: databricks_postgres\n      databaseInstanceName: my-database-instance\n      spec:\n        schedulingPolicy: SNAPSHOT\n        sourceTableFullName: source_delta.tpch.customer\n        primaryKeyColumns:\n          - c_custkey\n        createDatabaseObjectsIfMissing: true\n        newPipelineSpec:\n          storageCatalog: source_delta\n          storageSchema: tpch\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Creating multiple Synced Database Tables and bin packing them into a single pipeline\n\nThis example creates two Synced Database Tables. The first one specifies a new pipeline spec,\nwhich generates a new pipeline. The second one utilizes the pipeline ID of the first table.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instance = new databricks.DatabaseInstance(\"instance\", {\n    name: \"my-database-instance\",\n    capacity: \"CU_1\",\n});\nconst syncedTable1 = new databricks.DatabaseSyncedDatabaseTable(\"synced_table_1\", {\n    name: \"my_standard_catalog.public.synced_table1\",\n    logicalDatabaseName: \"databricks_postgres\",\n    databaseInstanceName: instance.name,\n    spec: {\n        schedulingPolicy: \"SNAPSHOT\",\n        sourceTableFullName: \"source_delta.tpch.customer\",\n        primaryKeyColumns: [\"c_custkey\"],\n        createDatabaseObjectsIfMissing: true,\n        newPipelineSpec: {\n            storageCatalog: \"source_delta\",\n            storageSchema: \"tpch\",\n        },\n    },\n});\nconst syncedTable2 = new databricks.DatabaseSyncedDatabaseTable(\"synced_table_2\", {\n    name: \"my_standard_catalog.public.synced_table2\",\n    logicalDatabaseName: \"databricks_postgres\",\n    databaseInstanceName: instance.name,\n    spec: {\n        schedulingPolicy: \"SNAPSHOT\",\n        sourceTableFullName: \"source_delta.tpch.customer\",\n        primaryKeyColumns: [\"c_custkey\"],\n        createDatabaseObjectsIfMissing: true,\n        existingPipelineId: syncedTable1.dataSynchronizationStatus.apply(dataSynchronizationStatus =\u003e dataSynchronizationStatus.pipelineId),\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance = databricks.DatabaseInstance(\"instance\",\n    name=\"my-database-instance\",\n    capacity=\"CU_1\")\nsynced_table1 = databricks.DatabaseSyncedDatabaseTable(\"synced_table_1\",\n    name=\"my_standard_catalog.public.synced_table1\",\n    logical_database_name=\"databricks_postgres\",\n    database_instance_name=instance.name,\n    spec={\n        \"scheduling_policy\": \"SNAPSHOT\",\n        \"source_table_full_name\": \"source_delta.tpch.customer\",\n        
\"primary_key_columns\": [\"c_custkey\"],\n        \"create_database_objects_if_missing\": True,\n        \"new_pipeline_spec\": {\n            \"storage_catalog\": \"source_delta\",\n            \"storage_schema\": \"tpch\",\n        },\n    })\nsynced_table2 = databricks.DatabaseSyncedDatabaseTable(\"synced_table_2\",\n    name=\"my_standard_catalog.public.synced_table2\",\n    logical_database_name=\"databricks_postgres\",\n    database_instance_name=instance.name,\n    spec={\n        \"scheduling_policy\": \"SNAPSHOT\",\n        \"source_table_full_name\": \"source_delta.tpch.customer\",\n        \"primary_key_columns\": [\"c_custkey\"],\n        \"create_database_objects_if_missing\": True,\n        \"existing_pipeline_id\": synced_table1.data_synchronization_status.pipeline_id,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instance = new Databricks.DatabaseInstance(\"instance\", new()\n    {\n        Name = \"my-database-instance\",\n        Capacity = \"CU_1\",\n    });\n\n    var syncedTable1 = new Databricks.DatabaseSyncedDatabaseTable(\"synced_table_1\", new()\n    {\n        Name = \"my_standard_catalog.public.synced_table1\",\n        LogicalDatabaseName = \"databricks_postgres\",\n        DatabaseInstanceName = instance.Name,\n        Spec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecArgs\n        {\n            SchedulingPolicy = \"SNAPSHOT\",\n            SourceTableFullName = \"source_delta.tpch.customer\",\n            PrimaryKeyColumns = new[]\n            {\n                \"c_custkey\",\n            },\n            CreateDatabaseObjectsIfMissing = true,\n            NewPipelineSpec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs\n            {\n                StorageCatalog = \"source_delta\",\n                StorageSchema = \"tpch\",\n            },\n        },\n    });\n\n    var syncedTable2 = new Databricks.DatabaseSyncedDatabaseTable(\"synced_table_2\", new()\n    {\n        Name = \"my_standard_catalog.public.synced_table2\",\n        LogicalDatabaseName = \"databricks_postgres\",\n        DatabaseInstanceName = instance.Name,\n        Spec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecArgs\n        {\n            SchedulingPolicy = \"SNAPSHOT\",\n            SourceTableFullName = \"source_delta.tpch.customer\",\n            PrimaryKeyColumns = new[]\n            {\n                \"c_custkey\",\n            },\n            CreateDatabaseObjectsIfMissing = true,\n            ExistingPipelineId = syncedTable1.DataSynchronizationStatus.Apply(dataSynchronizationStatus =\u003e dataSynchronizationStatus.PipelineId),\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstance, err := databricks.NewDatabaseInstance(ctx, \"instance\", \u0026databricks.DatabaseInstanceArgs{\n\t\t\tName:     pulumi.String(\"my-database-instance\"),\n\t\t\tCapacity: pulumi.String(\"CU_1\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsyncedTable1, err := databricks.NewDatabaseSyncedDatabaseTable(ctx, \"synced_table_1\", \u0026databricks.DatabaseSyncedDatabaseTableArgs{\n\t\t\tName:                 pulumi.String(\"my_standard_catalog.public.synced_table1\"),\n\t\t\tLogicalDatabaseName:  
pulumi.String(\"databricks_postgres\"),\n\t\t\tDatabaseInstanceName: instance.Name,\n\t\t\tSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecArgs{\n\t\t\t\tSchedulingPolicy:    pulumi.String(\"SNAPSHOT\"),\n\t\t\t\tSourceTableFullName: pulumi.String(\"source_delta.tpch.customer\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"c_custkey\"),\n\t\t\t\t},\n\t\t\t\tCreateDatabaseObjectsIfMissing: pulumi.Bool(true),\n\t\t\t\tNewPipelineSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs{\n\t\t\t\t\tStorageCatalog: pulumi.String(\"source_delta\"),\n\t\t\t\t\tStorageSchema:  pulumi.String(\"tpch\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewDatabaseSyncedDatabaseTable(ctx, \"synced_table_2\", \u0026databricks.DatabaseSyncedDatabaseTableArgs{\n\t\t\tName:                 pulumi.String(\"my_standard_catalog.public.synced_table2\"),\n\t\t\tLogicalDatabaseName:  pulumi.String(\"databricks_postgres\"),\n\t\t\tDatabaseInstanceName: instance.Name,\n\t\t\tSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecArgs{\n\t\t\t\tSchedulingPolicy:    pulumi.String(\"SNAPSHOT\"),\n\t\t\t\tSourceTableFullName: pulumi.String(\"source_delta.tpch.customer\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"c_custkey\"),\n\t\t\t\t},\n\t\t\t\tCreateDatabaseObjectsIfMissing: pulumi.Bool(true),\n\t\t\t\tExistingPipelineId: syncedTable1.DataSynchronizationStatus.ApplyT(func(dataSynchronizationStatus databricks.DatabaseSyncedDatabaseTableDataSynchronizationStatus) (*string, error) {\n\t\t\t\t\treturn \u0026dataSynchronizationStatus.PipelineId, nil\n\t\t\t\t}).(pulumi.StringPtrOutput),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseInstance;\nimport com.pulumi.databricks.DatabaseInstanceArgs;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTable;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTableArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instance = new DatabaseInstance(\"instance\", DatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .capacity(\"CU_1\")\n            .build());\n\n        var syncedTable1 = new DatabaseSyncedDatabaseTable(\"syncedTable1\", DatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_standard_catalog.public.synced_table1\")\n            .logicalDatabaseName(\"databricks_postgres\")\n            .databaseInstanceName(instance.name())\n            .spec(DatabaseSyncedDatabaseTableSpecArgs.builder()\n                .schedulingPolicy(\"SNAPSHOT\")\n                .sourceTableFullName(\"source_delta.tpch.customer\")\n                .primaryKeyColumns(\"c_custkey\")\n                .createDatabaseObjectsIfMissing(true)\n                .newPipelineSpec(DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs.builder()\n                    
.storageCatalog(\"source_delta\")\n                    .storageSchema(\"tpch\")\n                    .build())\n                .build())\n            .build());\n\n        var syncedTable2 = new DatabaseSyncedDatabaseTable(\"syncedTable2\", DatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_standard_catalog.public.synced_table2\")\n            .logicalDatabaseName(\"databricks_postgres\")\n            .databaseInstanceName(instance.name())\n            .spec(DatabaseSyncedDatabaseTableSpecArgs.builder()\n                .schedulingPolicy(\"SNAPSHOT\")\n                .sourceTableFullName(\"source_delta.tpch.customer\")\n                .primaryKeyColumns(\"c_custkey\")\n                .createDatabaseObjectsIfMissing(true)\n                .existingPipelineId(syncedTable1.dataSynchronizationStatus().applyValue(_dataSynchronizationStatus -\u003e _dataSynchronizationStatus.pipelineId()))\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instance:\n    type: databricks:DatabaseInstance\n    properties:\n      name: my-database-instance\n      capacity: CU_1\n  syncedTable1:\n    type: databricks:DatabaseSyncedDatabaseTable\n    name: synced_table_1\n    properties:\n      name: my_standard_catalog.public.synced_table1\n      logicalDatabaseName: databricks_postgres\n      databaseInstanceName: ${instance.name}\n      spec:\n        schedulingPolicy: SNAPSHOT\n        sourceTableFullName: source_delta.tpch.customer\n        primaryKeyColumns:\n          - c_custkey\n        createDatabaseObjectsIfMissing: true\n        newPipelineSpec:\n          storageCatalog: source_delta\n          storageSchema: tpch\n  syncedTable2:\n    type: databricks:DatabaseSyncedDatabaseTable\n    name: synced_table_2\n    properties:\n      name: my_standard_catalog.public.synced_table2\n      logicalDatabaseName: databricks_postgres\n      databaseInstanceName: ${instance.name}\n      spec:\n        schedulingPolicy: SNAPSHOT\n        sourceTableFullName: source_delta.tpch.customer\n        primaryKeyColumns:\n          - c_custkey\n        createDatabaseObjectsIfMissing: true\n        existingPipelineId: ${syncedTable1.dataSynchronizationStatus.pipelineId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Creating a Synced Database Table with a custom Jobs schedule\n\nThis example creates a Synced Database Table and customizes the pipeline schedule. 
It assumes you already have \n\n- A database instance named `\"my-database-instance\"`\n- A standard catalog named `\u003cspan pulumi-lang-nodejs=\"\"myStandardCatalog\"\" pulumi-lang-dotnet=\"\"MyStandardCatalog\"\" pulumi-lang-go=\"\"myStandardCatalog\"\" pulumi-lang-python=\"\"my_standard_catalog\"\" pulumi-lang-yaml=\"\"myStandardCatalog\"\" pulumi-lang-java=\"\"myStandardCatalog\"\"\u003e\"my_standard_catalog\"\u003c/span\u003e`\n- A schema in the standard catalog named `\"default\"`\n- A source delta table named `\"source_delta.schema.customer\"` with the primary key `\u003cspan pulumi-lang-nodejs=\"\"cCustkey\"\" pulumi-lang-dotnet=\"\"CCustkey\"\" pulumi-lang-go=\"\"cCustkey\"\" pulumi-lang-python=\"\"c_custkey\"\" pulumi-lang-yaml=\"\"cCustkey\"\" pulumi-lang-java=\"\"cCustkey\"\"\u003e\"c_custkey\"\u003c/span\u003e`\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst syncedTable = new databricks.DatabaseSyncedDatabaseTable(\"synced_table\", {\n    name: \"my_standard_catalog.default.my_synced_table\",\n    logicalDatabaseName: \"terraform_test_db\",\n    databaseInstanceName: \"my-database-instance\",\n    spec: {\n        schedulingPolicy: \"SNAPSHOT\",\n        sourceTableFullName: \"source_delta.schema.customer\",\n        primaryKeyColumns: [\"c_custkey\"],\n        createDatabaseObjectsIfMissing: true,\n        newPipelineSpec: {\n            storageCatalog: \"source_delta\",\n            storageSchema: \"schema\",\n        },\n    },\n});\nconst syncPipelineScheduleJob = new databricks.Job(\"sync_pipeline_schedule_job\", {\n    name: \"Synced Pipeline Refresh\",\n    description: \"Job to schedule synced database table pipeline. \",\n    tasks: [{\n        taskKey: \"synced-table-pipeline\",\n        pipelineTask: {\n            pipelineId: syncedTable.dataSynchronizationStatus.apply(dataSynchronizationStatus =\u003e dataSynchronizationStatus.pipelineId),\n        },\n    }],\n    schedule: {\n        quartzCronExpression: \"0 0 0 * * ?\",\n        timezoneId: \"Europe/Helsinki\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsynced_table = databricks.DatabaseSyncedDatabaseTable(\"synced_table\",\n    name=\"my_standard_catalog.default.my_synced_table\",\n    logical_database_name=\"terraform_test_db\",\n    database_instance_name=\"my-database-instance\",\n    spec={\n        \"scheduling_policy\": \"SNAPSHOT\",\n        \"source_table_full_name\": \"source_delta.schema.customer\",\n        \"primary_key_columns\": [\"c_custkey\"],\n        \"create_database_objects_if_missing\": True,\n        \"new_pipeline_spec\": {\n            \"storage_catalog\": \"source_delta\",\n            \"storage_schema\": \"schema\",\n        },\n    })\nsync_pipeline_schedule_job = databricks.Job(\"sync_pipeline_schedule_job\",\n    name=\"Synced Pipeline Refresh\",\n    description=\"Job to schedule synced database table pipeline. 
\",\n    tasks=[{\n        \"task_key\": \"synced-table-pipeline\",\n        \"pipeline_task\": {\n            \"pipeline_id\": synced_table.data_synchronization_status.pipeline_id,\n        },\n    }],\n    schedule={\n        \"quartz_cron_expression\": \"0 0 0 * * ?\",\n        \"timezone_id\": \"Europe/Helsinki\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var syncedTable = new Databricks.DatabaseSyncedDatabaseTable(\"synced_table\", new()\n    {\n        Name = \"my_standard_catalog.default.my_synced_table\",\n        LogicalDatabaseName = \"terraform_test_db\",\n        DatabaseInstanceName = \"my-database-instance\",\n        Spec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecArgs\n        {\n            SchedulingPolicy = \"SNAPSHOT\",\n            SourceTableFullName = \"source_delta.schema.customer\",\n            PrimaryKeyColumns = new[]\n            {\n                \"c_custkey\",\n            },\n            CreateDatabaseObjectsIfMissing = true,\n            NewPipelineSpec = new Databricks.Inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs\n            {\n                StorageCatalog = \"source_delta\",\n                StorageSchema = \"schema\",\n            },\n        },\n    });\n\n    var syncPipelineScheduleJob = new Databricks.Job(\"sync_pipeline_schedule_job\", new()\n    {\n        Name = \"Synced Pipeline Refresh\",\n        Description = \"Job to schedule synced database table pipeline. \",\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"synced-table-pipeline\",\n                PipelineTask = new Databricks.Inputs.JobTaskPipelineTaskArgs\n                {\n                    PipelineId = syncedTable.DataSynchronizationStatus.Apply(dataSynchronizationStatus =\u003e dataSynchronizationStatus.PipelineId),\n                },\n            },\n        },\n        Schedule = new Databricks.Inputs.JobScheduleArgs\n        {\n            QuartzCronExpression = \"0 0 0 * * ?\",\n            TimezoneId = \"Europe/Helsinki\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsyncedTable, err := databricks.NewDatabaseSyncedDatabaseTable(ctx, \"synced_table\", \u0026databricks.DatabaseSyncedDatabaseTableArgs{\n\t\t\tName:                 pulumi.String(\"my_standard_catalog.default.my_synced_table\"),\n\t\t\tLogicalDatabaseName:  pulumi.String(\"terraform_test_db\"),\n\t\t\tDatabaseInstanceName: pulumi.String(\"my-database-instance\"),\n\t\t\tSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecArgs{\n\t\t\t\tSchedulingPolicy:    pulumi.String(\"SNAPSHOT\"),\n\t\t\t\tSourceTableFullName: pulumi.String(\"source_delta.schema.customer\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"c_custkey\"),\n\t\t\t\t},\n\t\t\t\tCreateDatabaseObjectsIfMissing: pulumi.Bool(true),\n\t\t\t\tNewPipelineSpec: \u0026databricks.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs{\n\t\t\t\t\tStorageCatalog: pulumi.String(\"source_delta\"),\n\t\t\t\t\tStorageSchema:  pulumi.String(\"schema\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewJob(ctx, 
\"sync_pipeline_schedule_job\", \u0026databricks.JobArgs{\n\t\t\tName:        pulumi.String(\"Synced Pipeline Refresh\"),\n\t\t\tDescription: pulumi.String(\"Job to schedule synced database table pipeline. \"),\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"synced-table-pipeline\"),\n\t\t\t\t\tPipelineTask: \u0026databricks.JobTaskPipelineTaskArgs{\n\t\t\t\t\t\tPipelineId: syncedTable.DataSynchronizationStatus.ApplyT(func(dataSynchronizationStatus databricks.DatabaseSyncedDatabaseTableDataSynchronizationStatus) (*string, error) {\n\t\t\t\t\t\t\treturn \u0026dataSynchronizationStatus.PipelineId, nil\n\t\t\t\t\t\t}).(pulumi.StringPtrOutput),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tSchedule: \u0026databricks.JobScheduleArgs{\n\t\t\t\tQuartzCronExpression: pulumi.String(\"0 0 0 * * ?\"),\n\t\t\t\tTimezoneId:           pulumi.String(\"Europe/Helsinki\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTable;\nimport com.pulumi.databricks.DatabaseSyncedDatabaseTableArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecArgs;\nimport com.pulumi.databricks.inputs.DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskPipelineTaskArgs;\nimport com.pulumi.databricks.inputs.JobScheduleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var syncedTable = new DatabaseSyncedDatabaseTable(\"syncedTable\", DatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_standard_catalog.default.my_synced_table\")\n            .logicalDatabaseName(\"terraform_test_db\")\n            .databaseInstanceName(\"my-database-instance\")\n            .spec(DatabaseSyncedDatabaseTableSpecArgs.builder()\n                .schedulingPolicy(\"SNAPSHOT\")\n                .sourceTableFullName(\"source_delta.schema.customer\")\n                .primaryKeyColumns(\"c_custkey\")\n                .createDatabaseObjectsIfMissing(true)\n                .newPipelineSpec(DatabaseSyncedDatabaseTableSpecNewPipelineSpecArgs.builder()\n                    .storageCatalog(\"source_delta\")\n                    .storageSchema(\"schema\")\n                    .build())\n                .build())\n            .build());\n\n        var syncPipelineScheduleJob = new Job(\"syncPipelineScheduleJob\", JobArgs.builder()\n            .name(\"Synced Pipeline Refresh\")\n            .description(\"Job to schedule synced database table pipeline. 
\")\n            .tasks(JobTaskArgs.builder()\n                .taskKey(\"synced-table-pipeline\")\n                .pipelineTask(JobTaskPipelineTaskArgs.builder()\n                    .pipelineId(syncedTable.dataSynchronizationStatus().applyValue(_dataSynchronizationStatus -\u003e _dataSynchronizationStatus.pipelineId()))\n                    .build())\n                .build())\n            .schedule(JobScheduleArgs.builder()\n                .quartzCronExpression(\"0 0 0 * * ?\")\n                .timezoneId(\"Europe/Helsinki\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  syncedTable:\n    type: databricks:DatabaseSyncedDatabaseTable\n    name: synced_table\n    properties:\n      name: my_standard_catalog.default.my_synced_table\n      logicalDatabaseName: terraform_test_db\n      databaseInstanceName: my-database-instance\n      spec:\n        schedulingPolicy: SNAPSHOT\n        sourceTableFullName: source_delta.schema.customer\n        primaryKeyColumns:\n          - c_custkey\n        createDatabaseObjectsIfMissing: true\n        newPipelineSpec:\n          storageCatalog: source_delta\n          storageSchema: schema\n  syncPipelineScheduleJob:\n    type: databricks:Job\n    name: sync_pipeline_schedule_job\n    properties:\n      name: Synced Pipeline Refresh\n      description: 'Job to schedule synced database table pipeline. '\n      tasks:\n        - taskKey: synced-table-pipeline\n          pipelineTask:\n            pipelineId: ${syncedTable.dataSynchronizationStatus.pipelineId}\n      schedule:\n        quartzCronExpression: 0 0 0 * * ?\n        timezoneId: Europe/Helsinki\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"dataSynchronizationStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatus","description":"(SyncedTableStatus) - Synced Table data synchronization status\n"},"databaseInstanceName":{"type":"string","description":"Name of the target database instance. This is required when creating synced database tables in standard catalogs.\nThis is optional when creating synced database tables in registered catalogs. If this field is specified\nwhen creating synced database tables in registered catalogs, the database instance name MUST\nmatch that of the registered catalog (or the request will be rejected)\n"},"effectiveDatabaseInstanceName":{"type":"string","description":"(string) - The name of the database instance that this table is registered to. This field is always returned, and for\ntables inside database catalogs is inferred database instance associated with the catalog.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveLogicalDatabaseName":{"type":"string","description":"(string) - The name of the logical database that this table is registered to.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n"},"logicalDatabaseName":{"type":"string","description":"Target Postgres database object (logical database) name for this table.\n\nWhen creating a synced table in a registered Postgres catalog, the\ntarget Postgres database name is inferred to be that of the registered catalog.\nIf this field is specified in this scenario, the Postgres database name MUST\nmatch that of the registered catalog (or the request will be rejected).\n\nWhen creating a synced table in a standard catalog, this field is required.\nIn this scenario, specifying this field will allow targeting an arbitrary postgres database.\nNote that this has implications for the \u003cspan pulumi-lang-nodejs=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-dotnet=\"`CreateDatabaseObjectsIsMissing`\" pulumi-lang-go=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-python=\"`create_database_objects_is_missing`\" pulumi-lang-yaml=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-java=\"`createDatabaseObjectsIsMissing`\"\u003e`create_database_objects_is_missing`\u003c/span\u003e field in \u003cspan pulumi-lang-nodejs=\"`spec`\" pulumi-lang-dotnet=\"`Spec`\" pulumi-lang-go=\"`spec`\" pulumi-lang-python=\"`spec`\" pulumi-lang-yaml=\"`spec`\" pulumi-lang-java=\"`spec`\"\u003e`spec`\u003c/span\u003e\n"},"name":{"type":"string","description":"Full three-part (catalog, schema, table) name of the table\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableProviderConfig:DatabaseSyncedDatabaseTableProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableSpec:DatabaseSyncedDatabaseTableSpec"},"unityCatalogProvisioningState":{"type":"string","description":"(string) - The provisioning state of the synced table entity in Unity Catalog. This is distinct from the\nstate of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline\nmay be in \"PROVISIONING\" as it runs asynchronously). Possible values are: `ACTIVE`, `DEGRADED`, `DELETING`, `FAILED`, `PROVISIONING`, `UPDATING`\n"}},"required":["dataSynchronizationStatus","databaseInstanceName","effectiveDatabaseInstanceName","effectiveLogicalDatabaseName","logicalDatabaseName","name","unityCatalogProvisioningState"],"inputProperties":{"databaseInstanceName":{"type":"string","description":"Name of the target database instance. This is required when creating synced database tables in standard catalogs.\nThis is optional when creating synced database tables in registered catalogs. 
If this field is specified\nwhen creating synced database tables in registered catalogs, the database instance name MUST\nmatch that of the registered catalog (or the request will be rejected)\n"},"logicalDatabaseName":{"type":"string","description":"Target Postgres database object (logical database) name for this table.\n\nWhen creating a synced table in a registered Postgres catalog, the\ntarget Postgres database name is inferred to be that of the registered catalog.\nIf this field is specified in this scenario, the Postgres database name MUST\nmatch that of the registered catalog (or the request will be rejected).\n\nWhen creating a synced table in a standard catalog, this field is required.\nIn this scenario, specifying this field will allow targeting an arbitrary postgres database.\nNote that this has implications for the \u003cspan pulumi-lang-nodejs=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-dotnet=\"`CreateDatabaseObjectsIsMissing`\" pulumi-lang-go=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-python=\"`create_database_objects_is_missing`\" pulumi-lang-yaml=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-java=\"`createDatabaseObjectsIsMissing`\"\u003e`create_database_objects_is_missing`\u003c/span\u003e field in \u003cspan pulumi-lang-nodejs=\"`spec`\" pulumi-lang-dotnet=\"`Spec`\" pulumi-lang-go=\"`spec`\" pulumi-lang-python=\"`spec`\" pulumi-lang-yaml=\"`spec`\" pulumi-lang-java=\"`spec`\"\u003e`spec`\u003c/span\u003e\n"},"name":{"type":"string","description":"Full three-part (catalog, schema, table) name of the table\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableProviderConfig:DatabaseSyncedDatabaseTableProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableSpec:DatabaseSyncedDatabaseTableSpec"}},"stateInputs":{"description":"Input properties used for looking up and filtering DatabaseSyncedDatabaseTable resources.\n","properties":{"dataSynchronizationStatus":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableDataSynchronizationStatus:DatabaseSyncedDatabaseTableDataSynchronizationStatus","description":"(SyncedTableStatus) - Synced Table data synchronization status\n"},"databaseInstanceName":{"type":"string","description":"Name of the target database instance. This is required when creating synced database tables in standard catalogs.\nThis is optional when creating synced database tables in registered catalogs. If this field is specified\nwhen creating synced database tables in registered catalogs, the database instance name MUST\nmatch that of the registered catalog (or the request will be rejected)\n"},"effectiveDatabaseInstanceName":{"type":"string","description":"(string) - The name of the database instance that this table is registered to. This field is always returned, and for\ntables inside database catalogs is inferred database instance associated with the catalog.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n"},"effectiveLogicalDatabaseName":{"type":"string","description":"(string) - The name of the logical database that this table is registered to.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n"},"logicalDatabaseName":{"type":"string","description":"Target Postgres database object (logical database) name for this table.\n\nWhen creating a synced table in a registered Postgres catalog, the\ntarget Postgres database name is inferred to be that of the registered catalog.\nIf this field is specified in this scenario, the Postgres database name MUST\nmatch that of the registered catalog (or the request will be rejected).\n\nWhen creating a synced table in a standard catalog, this field is required.\nIn this scenario, specifying this field will allow targeting an arbitrary postgres database.\nNote that this has implications for the \u003cspan pulumi-lang-nodejs=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-dotnet=\"`CreateDatabaseObjectsIsMissing`\" pulumi-lang-go=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-python=\"`create_database_objects_is_missing`\" pulumi-lang-yaml=\"`createDatabaseObjectsIsMissing`\" pulumi-lang-java=\"`createDatabaseObjectsIsMissing`\"\u003e`create_database_objects_is_missing`\u003c/span\u003e field in \u003cspan pulumi-lang-nodejs=\"`spec`\" pulumi-lang-dotnet=\"`Spec`\" pulumi-lang-go=\"`spec`\" pulumi-lang-python=\"`spec`\" pulumi-lang-yaml=\"`spec`\" pulumi-lang-java=\"`spec`\"\u003e`spec`\u003c/span\u003e\n"},"name":{"type":"string","description":"Full three-part (catalog, schema, table) name of the table\n"},"providerConfig":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableProviderConfig:DatabaseSyncedDatabaseTableProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/DatabaseSyncedDatabaseTableSpec:DatabaseSyncedDatabaseTableSpec"},"unityCatalogProvisioningState":{"type":"string","description":"(string) - The provisioning state of the synced table entity in Unity Catalog. This is distinct from the\nstate of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline\nmay be in \"PROVISIONING\" as it runs asynchronously). Possible values are: `ACTIVE`, `DEGRADED`, `DELETING`, `FAILED`, `PROVISIONING`, `UPDATING`\n"}},"type":"object"}},"databricks:index/dbfsFile:DbfsFile":{"description":"\u003e Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.File \" pulumi-lang-dotnet=\" databricks.File \" pulumi-lang-go=\" File \" pulumi-lang-python=\" File \" pulumi-lang-yaml=\" databricks.File \" pulumi-lang-java=\" databricks.File \"\u003e databricks.File \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceFile \" pulumi-lang-dotnet=\" databricks.WorkspaceFile \" pulumi-lang-go=\" WorkspaceFile \" pulumi-lang-python=\" WorkspaceFile \" pulumi-lang-yaml=\" databricks.WorkspaceFile \" pulumi-lang-java=\" databricks.WorkspaceFile \"\u003e databricks.WorkspaceFile \u003c/span\u003eto manage files. Databricks recommends against storing any production data or sensitive information in the DBFS root.\n\nThis is a resource that lets you manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html). The best use cases are libraries for\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eor databricks_job. 
You can also use\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.getDbfsFilePaths \" pulumi-lang-dotnet=\" databricks.getDbfsFilePaths \" pulumi-lang-go=\" getDbfsFilePaths \" pulumi-lang-python=\" get_dbfs_file_paths \" pulumi-lang-yaml=\" databricks.getDbfsFilePaths \" pulumi-lang-java=\" databricks.getDbfsFilePaths \"\u003e databricks.getDbfsFilePaths \u003c/span\u003edata sources.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n","properties":{"contentBase64":{"type":"string","description":"Encoded file contents. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a data pipeline configuration file.\n"},"dbfsPath":{"type":"string","description":"Path, but with `dbfs:` prefix.\n"},"fileSize":{"type":"integer","description":"The file size of the file that is being tracked by this resource in bytes.\n"},"md5":{"type":"string"},"path":{"type":"string","description":"The path of the file in which you wish to save.\n"},"providerConfig":{"$ref":"#/types/databricks:index/DbfsFileProviderConfig:DbfsFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"The full absolute path to the file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"required":["dbfsPath","fileSize","path"],"inputProperties":{"contentBase64":{"type":"string","description":"Encoded file contents. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. 
Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a data pipeline configuration file.\n","willReplaceOnChanges":true},"md5":{"type":"string","willReplaceOnChanges":true},"path":{"type":"string","description":"The path of the file in which you wish to save.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/DbfsFileProviderConfig:DbfsFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"source":{"type":"string","description":"The full absolute path to the file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n","willReplaceOnChanges":true}},"requiredInputs":["path"],"stateInputs":{"description":"Input properties used for looking up and filtering DbfsFile resources.\n","properties":{"contentBase64":{"type":"string","description":"Encoded file contents. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a data pipeline configuration file.\n","willReplaceOnChanges":true},"dbfsPath":{"type":"string","description":"Path, but with `dbfs:` prefix.\n"},"fileSize":{"type":"integer","description":"The file size of the file that is being tracked by this resource in bytes.\n"},"md5":{"type":"string","willReplaceOnChanges":true},"path":{"type":"string","description":"The path of the file in which you wish to save.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/DbfsFileProviderConfig:DbfsFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"source":{"type":"string","description":"The full absolute path to the file. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/defaultNamespaceSetting:DefaultNamespaceSetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.DefaultNamespaceSetting`\" pulumi-lang-dotnet=\"`databricks.DefaultNamespaceSetting`\" pulumi-lang-go=\"`DefaultNamespaceSetting`\" pulumi-lang-python=\"`DefaultNamespaceSetting`\" pulumi-lang-yaml=\"`databricks.DefaultNamespaceSetting`\" pulumi-lang-java=\"`databricks.DefaultNamespaceSetting`\"\u003e`databricks.DefaultNamespaceSetting`\u003c/span\u003e resource allows you to operate the setting configuration for the default namespace in the Databricks workspace.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nSetting the default catalog for the workspace determines the catalog that is used when queries do not reference\na fully qualified 3 level name. For example, if the default catalog is set to 'retail_prod' then a query\n'SELECT * FROM myTable' would reference the object 'retail_prod.default.myTable'\n(the schema 'default' is always assumed).\nThis setting requires a restart of clusters and SQL warehouses to take effect. Additionally, the default namespace only applies when using Unity Catalog-enabled compute.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DefaultNamespaceSetting(\"this\", {namespace: {\n    value: \"namespace_value\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DefaultNamespaceSetting(\"this\", namespace={\n    \"value\": \"namespace_value\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DefaultNamespaceSetting(\"this\", new()\n    {\n        Namespace = new Databricks.Inputs.DefaultNamespaceSettingNamespaceArgs\n        {\n            Value = \"namespace_value\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDefaultNamespaceSetting(ctx, \"this\", \u0026databricks.DefaultNamespaceSettingArgs{\n\t\t\tNamespace: \u0026databricks.DefaultNamespaceSettingNamespaceArgs{\n\t\t\t\tValue: pulumi.String(\"namespace_value\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DefaultNamespaceSetting;\nimport com.pulumi.databricks.DefaultNamespaceSettingArgs;\nimport com.pulumi.databricks.inputs.DefaultNamespaceSettingNamespaceArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    
}\n\n    public static void stack(Context ctx) {\n        var this_ = new DefaultNamespaceSetting(\"this\", DefaultNamespaceSettingArgs.builder()\n            .namespace(DefaultNamespaceSettingNamespaceArgs.builder()\n                .value(\"namespace_value\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DefaultNamespaceSetting\n    properties:\n      namespace:\n        value: namespace_value\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"etag":{"type":"string"},"namespace":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingNamespace:DefaultNamespaceSettingNamespace","description":"The configuration details.\n"},"providerConfig":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingProviderConfig:DefaultNamespaceSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"required":["etag","namespace","settingName"],"inputProperties":{"etag":{"type":"string"},"namespace":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingNamespace:DefaultNamespaceSettingNamespace","description":"The configuration details.\n"},"providerConfig":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingProviderConfig:DefaultNamespaceSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"requiredInputs":["namespace"],"stateInputs":{"description":"Input properties used for looking up and filtering DefaultNamespaceSetting resources.\n","properties":{"etag":{"type":"string"},"namespace":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingNamespace:DefaultNamespaceSettingNamespace","description":"The configuration details.\n"},"providerConfig":{"$ref":"#/types/databricks:index/DefaultNamespaceSettingProviderConfig:DefaultNamespaceSettingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/directory:Directory":{"description":"This resource allows you to manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nYou can declare a Pulumi-managed directory by specifying the \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e attribute of the corresponding directory.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst myCustomDirectory = new databricks.Directory(\"my_custom_directory\", {path: \"/my_custom_directory\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmy_custom_directory = databricks.Directory(\"my_custom_directory\", path=\"/my_custom_directory\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var myCustomDirectory = new Databricks.Directory(\"my_custom_directory\", new()\n    {\n        Path = \"/my_custom_directory\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDirectory(ctx, \"my_custom_directory\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/my_custom_directory\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var myCustomDirectory = new Directory(\"myCustomDirectory\", DirectoryArgs.builder()\n            .path(\"/my_custom_directory\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myCustomDirectory:\n    type: databricks:Directory\n    name: my_custom_directory\n    properties:\n      path: /my_custom_directory\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n-\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can access folders.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n- End to end workspace management guide.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" 
pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n-\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks Workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricks.getNotebookPaths \" pulumi-lang-dotnet=\" databricks.getNotebookPaths \" pulumi-lang-go=\" getNotebookPaths \" pulumi-lang-python=\" get_notebook_paths \" pulumi-lang-yaml=\" databricks.getNotebookPaths \" pulumi-lang-java=\" databricks.getNotebookPaths \"\u003e databricks.getNotebookPaths \u003c/span\u003edata to list notebooks in Databricks Workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n-\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e parameter in\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand other resources.\n-\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceConf \" pulumi-lang-dotnet=\" databricks.WorkspaceConf \" pulumi-lang-go=\" WorkspaceConf \" pulumi-lang-python=\" WorkspaceConf \" pulumi-lang-yaml=\" databricks.WorkspaceConf \" pulumi-lang-java=\" databricks.WorkspaceConf \"\u003e databricks.WorkspaceConf \u003c/span\u003eto manage workspace configuration for expert usage.\n\n","properties":{"deleteRecursive":{"type":"boolean","description":"Whether or not to trigger a recursive delete of this directory and its resources when deleting this on Pulumi. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e\n"},"objectId":{"type":"integer","description":"Unique identifier for a DIRECTORY\n"},"path":{"type":"string","description":"The absolute path of the directory, beginning with \"/\", e.g. \"/Demo\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/DirectoryProviderConfig:DirectoryProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"required":["objectId","path","workspacePath"],"inputProperties":{"deleteRecursive":{"type":"boolean","description":"Whether or not to trigger a recursive delete of this directory and its resources when deleting this on Pulumi. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e\n"},"objectId":{"type":"integer","description":"Unique identifier for a DIRECTORY\n"},"path":{"type":"string","description":"The absolute path of the directory, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/DirectoryProviderConfig:DirectoryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"requiredInputs":["path"],"stateInputs":{"description":"Input properties used for looking up and filtering Directory resources.\n","properties":{"deleteRecursive":{"type":"boolean","description":"Whether or not to trigger a recursive delete of this directory and its resources when deleting this on Pulumi. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e\n"},"objectId":{"type":"integer","description":"Unique identifier for a DIRECTORY\n"},"path":{"type":"string","description":"The absolute path of the directory, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/DirectoryProviderConfig:DirectoryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"type":"object"}},"databricks:index/disableLegacyAccessSetting:DisableLegacyAccessSetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.DisableLegacyAccessSetting`\" pulumi-lang-dotnet=\"`databricks.DisableLegacyAccessSetting`\" pulumi-lang-go=\"`DisableLegacyAccessSetting`\" pulumi-lang-python=\"`DisableLegacyAccessSetting`\" pulumi-lang-yaml=\"`databricks.DisableLegacyAccessSetting`\" pulumi-lang-java=\"`databricks.DisableLegacyAccessSetting`\"\u003e`databricks.DisableLegacyAccessSetting`\u003c/span\u003e resource allows you to disable legacy access. It has the following impact:\n\n1. Disables direct access to Hive Metastores from the workspace. However, you can still access a Hive Metastore through Hive Metastore federation.\n2. Disables Fallback Mode on any External Location access from the workspace.\n3. 
Disables Databricks Runtime versions prior to 13.3LTS.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e It may take 5 minutes to take effect and requires a restart of clusters and SQL warehouses.\n\n\u003e Please also set the default namespace using\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003eto any value other than \u003cspan pulumi-lang-nodejs=\"`hiveMetastore`\" pulumi-lang-dotnet=\"`HiveMetastore`\" pulumi-lang-go=\"`hiveMetastore`\" pulumi-lang-python=\"`hive_metastore`\" pulumi-lang-yaml=\"`hiveMetastore`\" pulumi-lang-java=\"`hiveMetastore`\"\u003e`hive_metastore`\u003c/span\u003e to avoid potential issues.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DisableLegacyAccessSetting(\"this\", {disableLegacyAccess: {\n    value: true,\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DisableLegacyAccessSetting(\"this\", disable_legacy_access={\n    \"value\": True,\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DisableLegacyAccessSetting(\"this\", new()\n    {\n        DisableLegacyAccess = new Databricks.Inputs.DisableLegacyAccessSettingDisableLegacyAccessArgs\n        {\n            Value = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDisableLegacyAccessSetting(ctx, \"this\", \u0026databricks.DisableLegacyAccessSettingArgs{\n\t\t\tDisableLegacyAccess: \u0026databricks.DisableLegacyAccessSettingDisableLegacyAccessArgs{\n\t\t\t\tValue: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DisableLegacyAccessSetting;\nimport com.pulumi.databricks.DisableLegacyAccessSettingArgs;\nimport com.pulumi.databricks.inputs.DisableLegacyAccessSettingDisableLegacyAccessArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DisableLegacyAccessSetting(\"this\", DisableLegacyAccessSettingArgs.builder()\n            .disableLegacyAccess(DisableLegacyAccessSettingDisableLegacyAccessArgs.builder()\n                .value(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DisableLegacyAccessSetting\n    properties:\n      disableLegacyAccess:\n        value: 
true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Import\n\nThis resource can be imported by predefined name \u003cspan pulumi-lang-nodejs=\"`global`\" pulumi-lang-dotnet=\"`Global`\" pulumi-lang-go=\"`global`\" pulumi-lang-python=\"`global`\" pulumi-lang-yaml=\"`global`\" pulumi-lang-java=\"`global`\"\u003e`global`\u003c/span\u003e:\n\n```bash\n$ pulumi import databricks:index/disableLegacyAccessSetting:DisableLegacyAccessSetting this global\n```\n\n","properties":{"disableLegacyAccess":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingDisableLegacyAccess:DisableLegacyAccessSettingDisableLegacyAccess","description":"The configuration details.\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingProviderConfig:DisableLegacyAccessSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"required":["disableLegacyAccess","etag","settingName"],"inputProperties":{"disableLegacyAccess":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingDisableLegacyAccess:DisableLegacyAccessSettingDisableLegacyAccess","description":"The configuration details.\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingProviderConfig:DisableLegacyAccessSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"requiredInputs":["disableLegacyAccess"],"stateInputs":{"description":"Input properties used for looking up and filtering DisableLegacyAccessSetting resources.\n","properties":{"disableLegacyAccess":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingDisableLegacyAccess:DisableLegacyAccessSettingDisableLegacyAccess","description":"The configuration details.\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyAccessSettingProviderConfig:DisableLegacyAccessSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/disableLegacyDbfsSetting:DisableLegacyDbfsSetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.DisableLegacyDbfsSetting`\" pulumi-lang-dotnet=\"`databricks.DisableLegacyDbfsSetting`\" pulumi-lang-go=\"`DisableLegacyDbfsSetting`\" pulumi-lang-python=\"`DisableLegacyDbfsSetting`\" pulumi-lang-yaml=\"`databricks.DisableLegacyDbfsSetting`\" pulumi-lang-java=\"`databricks.DisableLegacyDbfsSetting`\"\u003e`databricks.DisableLegacyDbfsSetting`\u003c/span\u003e resource allows you to disable legacy DBFS.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nDisabling legacy DBFS has the following implications:\n\n1. Access to DBFS root and DBFS mounts is disallowed (as well as the creation of new mounts). \n2. Disables Databricks Runtime versions prior to 13.3LTS.\n\nWhen the setting is off, all DBFS functionality is enabled and no restrictions are imposed on Databricks Runtime versions. 
This setting can take up to 20 minutes to take effect and requires a manual restart of all-purpose compute clusters and SQL warehouses.\n\nRefer to official docs for more details:\n\n- [Azure](https://learn.microsoft.com/azure/databricks/dbfs/disable-dbfs-root-mounts)\n- [AWS](https://docs.databricks.com/aws/dbfs/disable-dbfs-root-mounts)\n- [GCP](https://docs.gcp.databricks.com/dbfs/disable-dbfs-root-mounts)\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.DisableLegacyDbfsSetting(\"this\", {disableLegacyDbfs: {\n    value: true,\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.DisableLegacyDbfsSetting(\"this\", disable_legacy_dbfs={\n    \"value\": True,\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.DisableLegacyDbfsSetting(\"this\", new()\n    {\n        DisableLegacyDbfs = new Databricks.Inputs.DisableLegacyDbfsSettingDisableLegacyDbfsArgs\n        {\n            Value = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDisableLegacyDbfsSetting(ctx, \"this\", \u0026databricks.DisableLegacyDbfsSettingArgs{\n\t\t\tDisableLegacyDbfs: \u0026databricks.DisableLegacyDbfsSettingDisableLegacyDbfsArgs{\n\t\t\t\tValue: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DisableLegacyDbfsSetting;\nimport com.pulumi.databricks.DisableLegacyDbfsSettingArgs;\nimport com.pulumi.databricks.inputs.DisableLegacyDbfsSettingDisableLegacyDbfsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new DisableLegacyDbfsSetting(\"this\", DisableLegacyDbfsSettingArgs.builder()\n            .disableLegacyDbfs(DisableLegacyDbfsSettingDisableLegacyDbfsArgs.builder()\n                .value(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:DisableLegacyDbfsSetting\n    properties:\n      disableLegacyDbfs:\n        value: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"disableLegacyDbfs":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingDisableLegacyDbfs:DisableLegacyDbfsSettingDisableLegacyDbfs","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingProviderConfig:DisableLegacyDbfsSettingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"settingName":{"type":"string"}},"required":["disableLegacyDbfs","etag","settingName"],"inputProperties":{"disableLegacyDbfs":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingDisableLegacyDbfs:DisableLegacyDbfsSettingDisableLegacyDbfs","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingProviderConfig:DisableLegacyDbfsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"requiredInputs":["disableLegacyDbfs"],"stateInputs":{"description":"Input properties used for looking up and filtering DisableLegacyDbfsSetting resources.\n","properties":{"disableLegacyDbfs":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingDisableLegacyDbfs:DisableLegacyDbfsSettingDisableLegacyDbfs","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyDbfsSettingProviderConfig:DisableLegacyDbfsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/disableLegacyFeaturesSetting:DisableLegacyFeaturesSetting":{"description":"\u003e This resource can only be used with an account-level provider!\n\nThe \u003cspan pulumi-lang-nodejs=\"`databricks.DisableLegacyFeaturesSetting`\" pulumi-lang-dotnet=\"`databricks.DisableLegacyFeaturesSetting`\" pulumi-lang-go=\"`DisableLegacyFeaturesSetting`\" pulumi-lang-python=\"`DisableLegacyFeaturesSetting`\" pulumi-lang-yaml=\"`databricks.DisableLegacyFeaturesSetting`\" pulumi-lang-java=\"`databricks.DisableLegacyFeaturesSetting`\"\u003e`databricks.DisableLegacyFeaturesSetting`\u003c/span\u003e resource allows you to disable legacy features on newly created workspaces.\n\n\u003e Before disabling legacy features, make sure that default catalog for the workspace is set to value different than \u003cspan pulumi-lang-nodejs=\"`hiveMetastore`\" pulumi-lang-dotnet=\"`HiveMetastore`\" pulumi-lang-go=\"`hiveMetastore`\" pulumi-lang-python=\"`hive_metastore`\" pulumi-lang-yaml=\"`hiveMetastore`\" pulumi-lang-java=\"`hiveMetastore`\"\u003e`hive_metastore`\u003c/span\u003e!  
You can set it using the\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003eresource.\n\nWhen this setting is on, the following applies to new workspaces:\n\n- Disables the use of DBFS root and mounts.\n- Hive Metastore will not be provisioned.\n- Disables the use of 'No-isolation clusters'.\n- Disables Databricks Runtime versions prior to 13.3LTS\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Change default catalog to anything than `hive_metastore`\nconst _this = new databricks.DefaultNamespaceSetting(\"this\", {namespace: {\n    value: \"default_catalog\",\n}});\n// Disable legacy features\nconst thisDisableLegacyFeaturesSetting = new databricks.DisableLegacyFeaturesSetting(\"this\", {disableLegacyFeatures: {\n    value: true,\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# Change default catalog to anything than `hive_metastore`\nthis = databricks.DefaultNamespaceSetting(\"this\", namespace={\n    \"value\": \"default_catalog\",\n})\n# Disable legacy features\nthis_disable_legacy_features_setting = databricks.DisableLegacyFeaturesSetting(\"this\", disable_legacy_features={\n    \"value\": True,\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // Change default catalog to anything than `hive_metastore`\n    var @this = new Databricks.DefaultNamespaceSetting(\"this\", new()\n    {\n        Namespace = new Databricks.Inputs.DefaultNamespaceSettingNamespaceArgs\n        {\n            Value = \"default_catalog\",\n        },\n    });\n\n    // Disable legacy features\n    var thisDisableLegacyFeaturesSetting = new Databricks.DisableLegacyFeaturesSetting(\"this\", new()\n    {\n        DisableLegacyFeatures = new Databricks.Inputs.DisableLegacyFeaturesSettingDisableLegacyFeaturesArgs\n        {\n            Value = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// Change default catalog to anything than `hive_metastore`\n\t\t_, err := databricks.NewDefaultNamespaceSetting(ctx, \"this\", \u0026databricks.DefaultNamespaceSettingArgs{\n\t\t\tNamespace: \u0026databricks.DefaultNamespaceSettingNamespaceArgs{\n\t\t\t\tValue: pulumi.String(\"default_catalog\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Disable legacy features\n\t\t_, err = databricks.NewDisableLegacyFeaturesSetting(ctx, \"this\", \u0026databricks.DisableLegacyFeaturesSettingArgs{\n\t\t\tDisableLegacyFeatures: \u0026databricks.DisableLegacyFeaturesSettingDisableLegacyFeaturesArgs{\n\t\t\t\tValue: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.DefaultNamespaceSetting;\nimport com.pulumi.databricks.DefaultNamespaceSettingArgs;\nimport com.pulumi.databricks.inputs.DefaultNamespaceSettingNamespaceArgs;\nimport com.pulumi.databricks.DisableLegacyFeaturesSetting;\nimport com.pulumi.databricks.DisableLegacyFeaturesSettingArgs;\nimport com.pulumi.databricks.inputs.DisableLegacyFeaturesSettingDisableLegacyFeaturesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        // Change default catalog to anything than `hive_metastore`\n        var this_ = new DefaultNamespaceSetting(\"this\", DefaultNamespaceSettingArgs.builder()\n            .namespace(DefaultNamespaceSettingNamespaceArgs.builder()\n                .value(\"default_catalog\")\n                .build())\n            .build());\n\n        // Disable legacy features\n        var thisDisableLegacyFeaturesSetting = new DisableLegacyFeaturesSetting(\"thisDisableLegacyFeaturesSetting\", DisableLegacyFeaturesSettingArgs.builder()\n            .disableLegacyFeatures(DisableLegacyFeaturesSettingDisableLegacyFeaturesArgs.builder()\n                .value(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  # Change default catalog to anything than `hive_metastore`\n  this:\n    type: databricks:DefaultNamespaceSetting\n    properties:\n      namespace:\n        value: default_catalog\n  # Disable legacy features\n  thisDisableLegacyFeaturesSetting:\n    type: databricks:DisableLegacyFeaturesSetting\n    name: this\n    properties:\n      disableLegacyFeatures:\n        value: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.DisableLegacyAccessSetting \" pulumi-lang-dotnet=\" databricks.DisableLegacyAccessSetting \" pulumi-lang-go=\" DisableLegacyAccessSetting \" pulumi-lang-python=\" DisableLegacyAccessSetting \" pulumi-lang-yaml=\" databricks.DisableLegacyAccessSetting \" pulumi-lang-java=\" databricks.DisableLegacyAccessSetting \"\u003e databricks.DisableLegacyAccessSetting \u003c/span\u003eto disable legacy access, enabled by default when creating new workspaces with the \u003cspan pulumi-lang-nodejs=\"`disableLegacyFeatures`\" pulumi-lang-dotnet=\"`DisableLegacyFeatures`\" pulumi-lang-go=\"`disableLegacyFeatures`\" pulumi-lang-python=\"`disable_legacy_features`\" pulumi-lang-yaml=\"`disableLegacyFeatures`\" pulumi-lang-java=\"`disableLegacyFeatures`\"\u003e`disable_legacy_features`\u003c/span\u003e account level setting turned on.\n*\u003cspan pulumi-lang-nodejs=\" databricks.DisableLegacyDbfsSetting \" pulumi-lang-dotnet=\" databricks.DisableLegacyDbfsSetting \" pulumi-lang-go=\" DisableLegacyDbfsSetting \" pulumi-lang-python=\" DisableLegacyDbfsSetting \" pulumi-lang-yaml=\" databricks.DisableLegacyDbfsSetting \" pulumi-lang-java=\" databricks.DisableLegacyDbfsSetting \"\u003e databricks.DisableLegacyDbfsSetting \u003c/span\u003eto disable legacy DBFS, enabled by default when creating new workspaces with the \u003cspan pulumi-lang-nodejs=\"`disableLegacyFeatures`\" pulumi-lang-dotnet=\"`DisableLegacyFeatures`\" pulumi-lang-go=\"`disableLegacyFeatures`\" pulumi-lang-python=\"`disable_legacy_features`\" 
pulumi-lang-yaml=\"`disableLegacyFeatures`\" pulumi-lang-java=\"`disableLegacyFeatures`\"\u003e`disable_legacy_features`\u003c/span\u003e account level setting turned on.\n\n","properties":{"disableLegacyFeatures":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingDisableLegacyFeatures:DisableLegacyFeaturesSettingDisableLegacyFeatures","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingProviderConfig:DisableLegacyFeaturesSettingProviderConfig"},"settingName":{"type":"string"}},"required":["disableLegacyFeatures","etag","settingName"],"inputProperties":{"disableLegacyFeatures":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingDisableLegacyFeatures:DisableLegacyFeaturesSettingDisableLegacyFeatures","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingProviderConfig:DisableLegacyFeaturesSettingProviderConfig"},"settingName":{"type":"string"}},"requiredInputs":["disableLegacyFeatures"],"stateInputs":{"description":"Input properties used for looking up and filtering DisableLegacyFeaturesSetting resources.\n","properties":{"disableLegacyFeatures":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingDisableLegacyFeatures:DisableLegacyFeaturesSettingDisableLegacyFeatures","description":"block with following attributes:\n"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/DisableLegacyFeaturesSettingProviderConfig:DisableLegacyFeaturesSettingProviderConfig"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/endpoint:Endpoint":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nEndpoint resource manages network connectivity endpoints for private access to Databricks workspaces.\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n## Example Usage\n\n### Example for Azure cloud\nThis is an example for creating an endpoint in Azure cloud:\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Endpoint(\"this\", {\n    accountId: \"eae3abf6-1496-494e-9983-4660a5ad5aab\",\n    endpointName: \"my-private-endpoint\",\n    region: \"westus\",\n    azurePrivateEndpointInfo: {\n        privateEndpointName: \"my-pe\",\n        privateEndpointResourceGuid: \"12345678-1234-1234-1234-123456789abc\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Endpoint(\"this\",\n    account_id=\"eae3abf6-1496-494e-9983-4660a5ad5aab\",\n    endpoint_name=\"my-private-endpoint\",\n    region=\"westus\",\n    azure_private_endpoint_info={\n        \"private_endpoint_name\": \"my-pe\",\n        \"private_endpoint_resource_guid\": \"12345678-1234-1234-1234-123456789abc\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Endpoint(\"this\", new()\n    {\n        AccountId = \"eae3abf6-1496-494e-9983-4660a5ad5aab\",\n        EndpointName = \"my-private-endpoint\",\n        Region = \"westus\",\n        AzurePrivateEndpointInfo = new 
Databricks.Inputs.EndpointAzurePrivateEndpointInfoArgs\n        {\n            PrivateEndpointName = \"my-pe\",\n            PrivateEndpointResourceGuid = \"12345678-1234-1234-1234-123456789abc\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewEndpoint(ctx, \"this\", \u0026databricks.EndpointArgs{\n\t\t\tAccountId:    \"eae3abf6-1496-494e-9983-4660a5ad5aab\",\n\t\t\tEndpointName: \"my-private-endpoint\",\n\t\t\tRegion:       pulumi.String(\"westus\"),\n\t\t\tAzurePrivateEndpointInfo: \u0026databricks.EndpointAzurePrivateEndpointInfoArgs{\n\t\t\t\tPrivateEndpointName:         pulumi.String(\"my-pe\"),\n\t\t\t\tPrivateEndpointResourceGuid: pulumi.String(\"12345678-1234-1234-1234-123456789abc\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Endpoint;\nimport com.pulumi.databricks.EndpointArgs;\nimport com.pulumi.databricks.inputs.EndpointAzurePrivateEndpointInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Endpoint(\"this\", EndpointArgs.builder()\n            .accountId(\"eae3abf6-1496-494e-9983-4660a5ad5aab\")\n            .endpointName(\"my-private-endpoint\")\n            .region(\"westus\")\n            .azurePrivateEndpointInfo(EndpointAzurePrivateEndpointInfoArgs.builder()\n                .privateEndpointName(\"my-pe\")\n                .privateEndpointResourceGuid(\"12345678-1234-1234-1234-123456789abc\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Endpoint\n    properties:\n      accountId: eae3abf6-1496-494e-9983-4660a5ad5aab\n      endpointName: my-private-endpoint\n      region: westus\n      azurePrivateEndpointInfo:\n        privateEndpointName: my-pe\n        privateEndpointResourceGuid: 12345678-1234-1234-1234-123456789abc\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"accountId":{"type":"string","description":"(string) - The Databricks Account in which the endpoint object exists\n"},"azurePrivateEndpointInfo":{"$ref":"#/types/databricks:index/EndpointAzurePrivateEndpointInfo:EndpointAzurePrivateEndpointInfo","description":"Info for an Azure private endpoint\n"},"createTime":{"type":"string","description":"(string) - The timestamp when the endpoint was created. The timestamp is in RFC 3339 format in UTC timezone\n"},"displayName":{"type":"string","description":"The human-readable display name of this endpoint.\nThe input should conform to RFC-1034, which restricts to letters, numbers, and hyphens,\nwith the first character a letter, the last a letter or a number, and a 63 character maximum\n"},"endpointId":{"type":"string","description":"(string) - The unique identifier for this endpoint under the account. 
This field is a UUID generated by Databricks\n"},"name":{"type":"string","description":"(string) - The resource name of the endpoint, which uniquely identifies the endpoint\n"},"parent":{"type":"string"},"region":{"type":"string","description":"The cloud provider region where this endpoint is located\n"},"state":{"type":"string","description":"(string) - The state of the endpoint. The endpoint can only be used if the state is `APPROVED`. Possible values are: `APPROVED`, `DISCONNECTED`, `FAILED`, `PENDING`\n"},"useCase":{"type":"string","description":"(string) - The use case that determines the type of network connectivity this endpoint provides.\nThis field is automatically determined based on the endpoint configuration and cloud-specific settings. Possible values are: `SERVICE_DIRECT`\n"}},"required":["accountId","createTime","displayName","endpointId","name","parent","region","state","useCase"],"inputProperties":{"azurePrivateEndpointInfo":{"$ref":"#/types/databricks:index/EndpointAzurePrivateEndpointInfo:EndpointAzurePrivateEndpointInfo","description":"Info for an Azure private endpoint\n"},"displayName":{"type":"string","description":"The human-readable display name of this endpoint.\nThe input should conform to RFC-1034, which restricts to letters, numbers, and hyphens,\nwith the first character a letter, the last a letter or a number, and a 63 character maximum\n"},"parent":{"type":"string"},"region":{"type":"string","description":"The cloud provider region where this endpoint is located\n"}},"requiredInputs":["displayName","parent","region"],"stateInputs":{"description":"Input properties used for looking up and filtering Endpoint resources.\n","properties":{"accountId":{"type":"string","description":"(string) - The Databricks Account in which the endpoint object exists\n"},"azurePrivateEndpointInfo":{"$ref":"#/types/databricks:index/EndpointAzurePrivateEndpointInfo:EndpointAzurePrivateEndpointInfo","description":"Info for an Azure private endpoint\n"},"createTime":{"type":"string","description":"(string) - The timestamp when the endpoint was created. The timestamp is in RFC 3339 format in UTC timezone\n"},"displayName":{"type":"string","description":"The human-readable display name of this endpoint.\nThe input should conform to RFC-1034, which restricts to letters, numbers, and hyphens,\nwith the first character a letter, the last a letter or a number, and a 63 character maximum\n"},"endpointId":{"type":"string","description":"(string) - The unique identifier for this endpoint under the account. This field is a UUID generated by Databricks\n"},"name":{"type":"string","description":"(string) - The resource name of the endpoint, which uniquely identifies the endpoint\n"},"parent":{"type":"string"},"region":{"type":"string","description":"The cloud provider region where this endpoint is located\n"},"state":{"type":"string","description":"(string) - The state of the endpoint. The endpoint can only be used if the state is `APPROVED`. Possible values are: `APPROVED`, `DISCONNECTED`, `FAILED`, `PENDING`\n"},"useCase":{"type":"string","description":"(string) - The use case that determines the type of network connectivity this endpoint provides.\nThis field is automatically determined based on the endpoint configuration and cloud-specific settings. 
Possible values are: `SERVICE_DIRECT`\n"}},"type":"object"}},"databricks:index/enhancedSecurityMonitoringWorkspaceSetting:EnhancedSecurityMonitoringWorkspaceSetting":{"properties":{"enhancedSecurityMonitoringWorkspace":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace:EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingProviderConfig:EnhancedSecurityMonitoringWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"required":["enhancedSecurityMonitoringWorkspace","etag","settingName"],"inputProperties":{"enhancedSecurityMonitoringWorkspace":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace:EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingProviderConfig:EnhancedSecurityMonitoringWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"requiredInputs":["enhancedSecurityMonitoringWorkspace"],"stateInputs":{"description":"Input properties used for looking up and filtering EnhancedSecurityMonitoringWorkspaceSetting resources.\n","properties":{"enhancedSecurityMonitoringWorkspace":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace:EnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace"},"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/EnhancedSecurityMonitoringWorkspaceSettingProviderConfig:EnhancedSecurityMonitoringWorkspaceSettingProviderConfig"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/entitlements:Entitlements":{"description":"This resource allows you to set entitlements to existing databricks_users,\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_service_principal.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e You must define entitlements of a principal using either \u003cspan pulumi-lang-nodejs=\"`databricks.Entitlements`\" pulumi-lang-dotnet=\"`databricks.Entitlements`\" pulumi-lang-go=\"`Entitlements`\" pulumi-lang-python=\"`Entitlements`\" pulumi-lang-yaml=\"`databricks.Entitlements`\" pulumi-lang-java=\"`databricks.Entitlements`\"\u003e`databricks.Entitlements`\u003c/span\u003e or directly within one of databricks_users,\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_service_principal. 
Having entitlements defined in both resources will result in non-deterministic behaviour.\n\n## Example Usage\n\nSetting entitlements for a regular user:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = databricks.getUser({\n    userName: \"me@example.com\",\n});\nconst meEntitlements = new databricks.Entitlements(\"me\", {\n    userId: me.then(me =\u003e me.id),\n    allowClusterCreate: true,\n    allowInstancePoolCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.get_user(user_name=\"me@example.com\")\nme_entitlements = databricks.Entitlements(\"me\",\n    user_id=me.id,\n    allow_cluster_create=True,\n    allow_instance_pool_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var meEntitlements = new Databricks.Entitlements(\"me\", new()\n    {\n        UserId = me.Apply(getUserResult =\u003e getUserResult.Id),\n        AllowClusterCreate = true,\n        AllowInstancePoolCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"me@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntitlements(ctx, \"me\", \u0026databricks.EntitlementsArgs{\n\t\t\tUserId:                  pulumi.String(me.Id),\n\t\t\tAllowClusterCreate:      pulumi.Bool(true),\n\t\t\tAllowInstancePoolCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.Entitlements;\nimport com.pulumi.databricks.EntitlementsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var meEntitlements = new Entitlements(\"meEntitlements\", EntitlementsArgs.builder()\n            .userId(me.id())\n            .allowClusterCreate(true)\n            .allowInstancePoolCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  meEntitlements:\n    type: databricks:Entitlements\n    name: me\n    properties:\n      userId: ${me.id}\n      allowClusterCreate: true\n      allowInstancePoolCreate: true\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: me@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nSetting entitlements for a service principal:\n\n\u003c!--Start 
PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getServicePrincipal({\n    applicationId: \"11111111-2222-3333-4444-555666777888\",\n});\nconst thisEntitlements = new databricks.Entitlements(\"this\", {\n    servicePrincipalId: _this.then(_this =\u003e _this.spId),\n    allowClusterCreate: true,\n    allowInstancePoolCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_service_principal(application_id=\"11111111-2222-3333-4444-555666777888\")\nthis_entitlements = databricks.Entitlements(\"this\",\n    service_principal_id=this.sp_id,\n    allow_cluster_create=True,\n    allow_instance_pool_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetServicePrincipal.Invoke(new()\n    {\n        ApplicationId = \"11111111-2222-3333-4444-555666777888\",\n    });\n\n    var thisEntitlements = new Databricks.Entitlements(\"this\", new()\n    {\n        ServicePrincipalId = @this.Apply(getServicePrincipalResult =\u003e getServicePrincipalResult.SpId),\n        AllowClusterCreate = true,\n        AllowInstancePoolCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupServicePrincipal(ctx, \u0026databricks.LookupServicePrincipalArgs{\n\t\t\tApplicationId: pulumi.StringRef(\"11111111-2222-3333-4444-555666777888\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntitlements(ctx, \"this\", \u0026databricks.EntitlementsArgs{\n\t\t\tServicePrincipalId:      pulumi.String(this.SpId),\n\t\t\tAllowClusterCreate:      pulumi.Bool(true),\n\t\t\tAllowInstancePoolCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetServicePrincipalArgs;\nimport com.pulumi.databricks.Entitlements;\nimport com.pulumi.databricks.EntitlementsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getServicePrincipal(GetServicePrincipalArgs.builder()\n            .applicationId(\"11111111-2222-3333-4444-555666777888\")\n            .build());\n\n        var thisEntitlements = new Entitlements(\"thisEntitlements\", EntitlementsArgs.builder()\n            .servicePrincipalId(this_.spId())\n            .allowClusterCreate(true)\n            .allowInstancePoolCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  thisEntitlements:\n    type: databricks:Entitlements\n    name: this\n    properties:\n      servicePrincipalId: ${this.spId}\n      allowClusterCreate: true\n      allowInstancePoolCreate: 
true\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getServicePrincipal\n      arguments:\n        applicationId: 11111111-2222-3333-4444-555666777888\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nSetting entitlements to all users in a workspace - referencing special \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e\u003cspan pulumi-lang-nodejs=\" databricks.Group\n\" pulumi-lang-dotnet=\" databricks.Group\n\" pulumi-lang-go=\" Group\n\" pulumi-lang-python=\" Group\n\" pulumi-lang-yaml=\" databricks.Group\n\" pulumi-lang-java=\" databricks.Group\n\"\u003e databricks.Group\n\u003c/span\u003e\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst users = databricks.getGroup({\n    displayName: \"users\",\n});\nconst workspace_users = new databricks.Entitlements(\"workspace-users\", {\n    groupId: users.then(users =\u003e users.id),\n    allowClusterCreate: true,\n    allowInstancePoolCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nusers = databricks.get_group(display_name=\"users\")\nworkspace_users = databricks.Entitlements(\"workspace-users\",\n    group_id=users.id,\n    allow_cluster_create=True,\n    allow_instance_pool_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var users = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"users\",\n    });\n\n    var workspace_users = new Databricks.Entitlements(\"workspace-users\", new()\n    {\n        GroupId = users.Apply(getGroupResult =\u003e getGroupResult.Id),\n        AllowClusterCreate = true,\n        AllowInstancePoolCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tusers, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"users\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntitlements(ctx, \"workspace-users\", \u0026databricks.EntitlementsArgs{\n\t\t\tGroupId:                 pulumi.String(users.Id),\n\t\t\tAllowClusterCreate:      pulumi.Bool(true),\n\t\t\tAllowInstancePoolCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.Entitlements;\nimport com.pulumi.databricks.EntitlementsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var users = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"users\")\n            .build());\n\n        var 
workspace_users = new Entitlements(\"workspace-users\", EntitlementsArgs.builder()\n            .groupId(users.id())\n            .allowClusterCreate(true)\n            .allowInstancePoolCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  workspace-users:\n    type: databricks:Entitlements\n    properties:\n      groupId: ${users.id}\n      allowClusterCreate: true\n      allowInstancePoolCreate: true\nvariables:\n  users:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: users\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" 
pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003edata to retrieve information about databricks_user.\n\n","properties":{"allowClusterCreate":{"type":"boolean","description":"Allow the principal to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the principal to have instance pool create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"groupId":{"type":"string","description":"Canonical unique identifier for the group.\n"},"providerConfig":{"$ref":"#/types/databricks:index/EntitlementsProviderConfig:EntitlementsProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"servicePrincipalId":{"type":"string","description":"Canonical unique identifier for the service principal.\n\nThe following entitlements are available.\n"},"userId":{"type":"string","description":"Canonical unique identifier for the user.\n"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the principal to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"inputProperties":{"allowClusterCreate":{"type":"boolean","description":"Allow the principal to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the principal to have instance pool create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"groupId":{"type":"string","description":"Canonical unique identifier for the group.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/EntitlementsProviderConfig:EntitlementsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servicePrincipalId":{"type":"string","description":"Canonical unique identifier for the service principal.\n\nThe following entitlements are available.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"Canonical unique identifier for the user.\n","willReplaceOnChanges":true},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the principal to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering Entitlements resources.\n","properties":{"allowClusterCreate":{"type":"boolean","description":"Allow the principal to have cluster create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the principal to have instance pool create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"groupId":{"type":"string","description":"Canonical unique identifier for the group.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/EntitlementsProviderConfig:EntitlementsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servicePrincipalId":{"type":"string","description":"Canonical unique identifier for the service principal.\n\nThe following entitlements are available.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"Canonical unique identifier for the user.\n","willReplaceOnChanges":true},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the principal to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the principal to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  
Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/entityTagAssignment:EntityTagAssignment":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis resource allows you to create, update, list, and delete tag assignments on Unity Catalog entities.\n\n## Example Usage\n\n### Basic tag assignment to a catalog\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst catalogTag = new databricks.EntityTagAssignment(\"catalog_tag\", {\n    entityType: \"catalogs\",\n    entityName: \"production_catalog\",\n    tagKey: \"environment\",\n    tagValue: \"production\",\n});\nconst schemaTag = new databricks.EntityTagAssignment(\"schema_tag\", {\n    entityType: \"schemas\",\n    entityName: \"production_catalog.sales_data\",\n    tagKey: \"owner\",\n    tagValue: \"sales-team\",\n});\nconst tableTag = new databricks.EntityTagAssignment(\"table_tag\", {\n    entityType: \"tables\",\n    entityName: \"production_catalog.sales_data.customer_orders\",\n    tagKey: \"data_classification\",\n    tagValue: \"confidential\",\n});\nconst columnTag = new databricks.EntityTagAssignment(\"column_tag\", {\n    entityType: \"columns\",\n    entityName: \"production_catalog.sales_data.customers.email_address\",\n    tagKey: \"pii\",\n    tagValue: \"email\",\n});\nconst volumeTag = new databricks.EntityTagAssignment(\"volume_tag\", {\n    entityType: \"volumes\",\n    entityName: \"production_catalog.raw_data.landing_zone\",\n    tagKey: \"purpose\",\n    tagValue: \"data_ingestion\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncatalog_tag = databricks.EntityTagAssignment(\"catalog_tag\",\n    entity_type=\"catalogs\",\n    entity_name=\"production_catalog\",\n    tag_key=\"environment\",\n    tag_value=\"production\")\nschema_tag = databricks.EntityTagAssignment(\"schema_tag\",\n    entity_type=\"schemas\",\n    entity_name=\"production_catalog.sales_data\",\n    tag_key=\"owner\",\n    tag_value=\"sales-team\")\ntable_tag = databricks.EntityTagAssignment(\"table_tag\",\n    entity_type=\"tables\",\n    entity_name=\"production_catalog.sales_data.customer_orders\",\n    tag_key=\"data_classification\",\n    tag_value=\"confidential\")\ncolumn_tag = databricks.EntityTagAssignment(\"column_tag\",\n    entity_type=\"columns\",\n    entity_name=\"production_catalog.sales_data.customers.email_address\",\n    tag_key=\"pii\",\n    tag_value=\"email\")\nvolume_tag = databricks.EntityTagAssignment(\"volume_tag\",\n    entity_type=\"volumes\",\n    entity_name=\"production_catalog.raw_data.landing_zone\",\n    tag_key=\"purpose\",\n    tag_value=\"data_ingestion\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing 
Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var catalogTag = new Databricks.EntityTagAssignment(\"catalog_tag\", new()\n    {\n        EntityType = \"catalogs\",\n        EntityName = \"production_catalog\",\n        TagKey = \"environment\",\n        TagValue = \"production\",\n    });\n\n    var schemaTag = new Databricks.EntityTagAssignment(\"schema_tag\", new()\n    {\n        EntityType = \"schemas\",\n        EntityName = \"production_catalog.sales_data\",\n        TagKey = \"owner\",\n        TagValue = \"sales-team\",\n    });\n\n    var tableTag = new Databricks.EntityTagAssignment(\"table_tag\", new()\n    {\n        EntityType = \"tables\",\n        EntityName = \"production_catalog.sales_data.customer_orders\",\n        TagKey = \"data_classification\",\n        TagValue = \"confidential\",\n    });\n\n    var columnTag = new Databricks.EntityTagAssignment(\"column_tag\", new()\n    {\n        EntityType = \"columns\",\n        EntityName = \"production_catalog.sales_data.customers.email_address\",\n        TagKey = \"pii\",\n        TagValue = \"email\",\n    });\n\n    var volumeTag = new Databricks.EntityTagAssignment(\"volume_tag\", new()\n    {\n        EntityType = \"volumes\",\n        EntityName = \"production_catalog.raw_data.landing_zone\",\n        TagKey = \"purpose\",\n        TagValue = \"data_ingestion\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewEntityTagAssignment(ctx, \"catalog_tag\", \u0026databricks.EntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"catalogs\"),\n\t\t\tEntityName: pulumi.String(\"production_catalog\"),\n\t\t\tTagKey:     pulumi.String(\"environment\"),\n\t\t\tTagValue:   pulumi.String(\"production\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntityTagAssignment(ctx, \"schema_tag\", \u0026databricks.EntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"schemas\"),\n\t\t\tEntityName: pulumi.String(\"production_catalog.sales_data\"),\n\t\t\tTagKey:     pulumi.String(\"owner\"),\n\t\t\tTagValue:   pulumi.String(\"sales-team\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntityTagAssignment(ctx, \"table_tag\", \u0026databricks.EntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"tables\"),\n\t\t\tEntityName: pulumi.String(\"production_catalog.sales_data.customer_orders\"),\n\t\t\tTagKey:     pulumi.String(\"data_classification\"),\n\t\t\tTagValue:   pulumi.String(\"confidential\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntityTagAssignment(ctx, \"column_tag\", \u0026databricks.EntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"columns\"),\n\t\t\tEntityName: pulumi.String(\"production_catalog.sales_data.customers.email_address\"),\n\t\t\tTagKey:     pulumi.String(\"pii\"),\n\t\t\tTagValue:   pulumi.String(\"email\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewEntityTagAssignment(ctx, \"volume_tag\", \u0026databricks.EntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"volumes\"),\n\t\t\tEntityName: pulumi.String(\"production_catalog.raw_data.landing_zone\"),\n\t\t\tTagKey:     pulumi.String(\"purpose\"),\n\t\t\tTagValue:   pulumi.String(\"data_ingestion\"),\n\t\t})\n\t\tif err 
!= nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.EntityTagAssignment;\nimport com.pulumi.databricks.EntityTagAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var catalogTag = new EntityTagAssignment(\"catalogTag\", EntityTagAssignmentArgs.builder()\n            .entityType(\"catalogs\")\n            .entityName(\"production_catalog\")\n            .tagKey(\"environment\")\n            .tagValue(\"production\")\n            .build());\n\n        var schemaTag = new EntityTagAssignment(\"schemaTag\", EntityTagAssignmentArgs.builder()\n            .entityType(\"schemas\")\n            .entityName(\"production_catalog.sales_data\")\n            .tagKey(\"owner\")\n            .tagValue(\"sales-team\")\n            .build());\n\n        var tableTag = new EntityTagAssignment(\"tableTag\", EntityTagAssignmentArgs.builder()\n            .entityType(\"tables\")\n            .entityName(\"production_catalog.sales_data.customer_orders\")\n            .tagKey(\"data_classification\")\n            .tagValue(\"confidential\")\n            .build());\n\n        var columnTag = new EntityTagAssignment(\"columnTag\", EntityTagAssignmentArgs.builder()\n            .entityType(\"columns\")\n            .entityName(\"production_catalog.sales_data.customers.email_address\")\n            .tagKey(\"pii\")\n            .tagValue(\"email\")\n            .build());\n\n        var volumeTag = new EntityTagAssignment(\"volumeTag\", EntityTagAssignmentArgs.builder()\n            .entityType(\"volumes\")\n            .entityName(\"production_catalog.raw_data.landing_zone\")\n            .tagKey(\"purpose\")\n            .tagValue(\"data_ingestion\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  catalogTag:\n    type: databricks:EntityTagAssignment\n    name: catalog_tag\n    properties:\n      entityType: catalogs\n      entityName: production_catalog\n      tagKey: environment\n      tagValue: production\n  schemaTag:\n    type: databricks:EntityTagAssignment\n    name: schema_tag\n    properties:\n      entityType: schemas\n      entityName: production_catalog.sales_data\n      tagKey: owner\n      tagValue: sales-team\n  tableTag:\n    type: databricks:EntityTagAssignment\n    name: table_tag\n    properties:\n      entityType: tables\n      entityName: production_catalog.sales_data.customer_orders\n      tagKey: data_classification\n      tagValue: confidential\n  columnTag:\n    type: databricks:EntityTagAssignment\n    name: column_tag\n    properties:\n      entityType: columns\n      entityName: production_catalog.sales_data.customers.email_address\n      tagKey: pii\n      tagValue: email\n  volumeTag:\n    type: databricks:EntityTagAssignment\n    name: volume_tag\n    properties:\n      entityType: volumes\n      entityName: production_catalog.raw_data.landing_zone\n      tagKey: purpose\n      tagValue: data_ingestion\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is 
assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"providerConfig":{"$ref":"#/types/databricks:index/EntityTagAssignmentProviderConfig:EntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"sourceType":{"type":"string","description":"(string) - The source type of the tag assignment, e.g., user-assigned or system-assigned. Possible values are: `TAG_ASSIGNMENT_SOURCE_TYPE_SYSTEM_DATA_CLASSIFICATION`\n"},"tagKey":{"type":"string","description":"The key of the tag\n"},"tagValue":{"type":"string","description":"The value of the tag\n"},"updateTime":{"type":"string","description":"(string) - The timestamp when the tag assignment was last updated\n"},"updatedBy":{"type":"string","description":"(string) - The user or principal who updated the tag assignment\n"}},"required":["entityName","entityType","sourceType","tagKey","updateTime","updatedBy"],"inputProperties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"providerConfig":{"$ref":"#/types/databricks:index/EntityTagAssignmentProviderConfig:EntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag\n"},"tagValue":{"type":"string","description":"The value of the tag\n"}},"requiredInputs":["entityName","entityType","tagKey"],"stateInputs":{"description":"Input properties used for looking up and filtering EntityTagAssignment resources.\n","properties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"providerConfig":{"$ref":"#/types/databricks:index/EntityTagAssignmentProviderConfig:EntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"sourceType":{"type":"string","description":"(string) - The source type of the tag assignment, e.g., user-assigned or system-assigned. 
Possible values are: `TAG_ASSIGNMENT_SOURCE_TYPE_SYSTEM_DATA_CLASSIFICATION`\n"},"tagKey":{"type":"string","description":"The key of the tag\n"},"tagValue":{"type":"string","description":"The value of the tag\n"},"updateTime":{"type":"string","description":"(string) - The timestamp when the tag assignment was last updated\n"},"updatedBy":{"type":"string","description":"(string) - The user or principal who updated the tag assignment\n"}},"type":"object"}},"databricks:index/externalLocation:ExternalLocation":{"description":"To work with external tables, Unity Catalog introduces two new objects to access and work with external cloud storage:\n\n-\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003erepresent authentication methods to access cloud storage (e.g. an IAM role for Amazon S3 or a service principal for Azure Storage). Storage credentials are access-controlled to determine which users can use the credential.\n- \u003cspan pulumi-lang-nodejs=\"`databricks.ExternalLocation`\" pulumi-lang-dotnet=\"`databricks.ExternalLocation`\" pulumi-lang-go=\"`ExternalLocation`\" pulumi-lang-python=\"`ExternalLocation`\" pulumi-lang-yaml=\"`databricks.ExternalLocation`\" pulumi-lang-java=\"`databricks.ExternalLocation`\"\u003e`databricks.ExternalLocation`\u003c/span\u003e are objects that combine a cloud storage path with a Storage Credential that can be used to access the location.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nFor AWS\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    comment: \"Managed by TF\",\n});\nconst some = new databricks.ExternalLocation(\"some\", {\n    name: \"external\",\n    url: `s3://${externalAwsS3Bucket.id}/some`,\n    credentialName: external.id,\n    comment: \"Managed by TF\",\n});\nconst someGrants = new databricks.Grants(\"some\", {\n    externalLocation: some.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.StorageCredential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    comment=\"Managed by TF\")\nsome = databricks.ExternalLocation(\"some\",\n    name=\"external\",\n    url=f\"s3://{external_aws_s3_bucket['id']}/some\",\n    credential_name=external.id,\n    comment=\"Managed by TF\")\nsome_grants = databricks.Grants(\"some\",\n    external_location=some.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var 
external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.StorageCredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Comment = \"Managed by TF\",\n    });\n\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"external\",\n        Url = $\"s3://{externalAwsS3Bucket.Id}/some\",\n        CredentialName = external.Id,\n        Comment = \"Managed by TF\",\n    });\n\n    var someGrants = new Databricks.Grants(\"some\", new()\n    {\n        ExternalLocation = some.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.StorageCredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", \u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"external\"),\n\t\t\tUrl:            pulumi.Sprintf(\"s3://%v/some\", externalAwsS3Bucket.Id),\n\t\t\tCredentialName: external.ID(),\n\t\t\tComment:        pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(externalDataAccess.name())\n            
.awsIamRole(StorageCredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"external\")\n            .url(String.format(\"s3://%s/some\", externalAwsS3Bucket.id()))\n            .credentialName(external.id())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var someGrants = new Grants(\"someGrants\", GrantsArgs.builder()\n            .externalLocation(some.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(                \n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      comment: Managed by TF\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: external\n      url: s3://${externalAwsS3Bucket.id}/some\n      credentialName: ${external.id}\n      comment: Managed by TF\n  someGrants:\n    type: databricks:Grants\n    name: some\n    properties:\n      externalLocation: ${some.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor Azure\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: extCred.displayName,\n    azureServicePrincipal: {\n        directoryId: tenantId,\n        applicationId: extCred.applicationId,\n        clientSecret: extCredAzureadApplicationPassword.value,\n    },\n    comment: \"Managed by TF\",\n}, {\n    dependsOn: [_this],\n});\nconst some = new databricks.ExternalLocation(\"some\", {\n    name: \"external\",\n    url: std.format({\n        input: \"abfss://%s@%s.dfs.core.windows.net\",\n        args: [\n            extStorage.name,\n            extStorageAzurermStorageAccount.name,\n        ],\n    }).then(invoke =\u003e invoke.result),\n    credentialName: external.id,\n    comment: \"Managed by TF\",\n}, {\n    dependsOn: [_this],\n});\nconst someGrants = new databricks.Grants(\"some\", {\n    externalLocation: some.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nexternal = databricks.StorageCredential(\"external\",\n    name=ext_cred[\"displayName\"],\n    azure_service_principal={\n        \"directory_id\": tenant_id,\n        \"application_id\": ext_cred[\"applicationId\"],\n        \"client_secret\": ext_cred_azuread_application_password[\"value\"],\n    },\n    comment=\"Managed by TF\",\n    opts = pulumi.ResourceOptions(depends_on=[this]))\nsome = databricks.ExternalLocation(\"some\",\n    name=\"external\",\n    url=std.format(input=\"abfss://%s@%s.dfs.core.windows.net\",\n        args=[\n            
ext_storage[\"name\"],\n            ext_storage_azurerm_storage_account[\"name\"],\n        ]).result,\n    credential_name=external.id,\n    comment=\"Managed by TF\",\n    opts = pulumi.ResourceOptions(depends_on=[this]))\nsome_grants = databricks.Grants(\"some\",\n    external_location=some.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = extCred.DisplayName,\n        AzureServicePrincipal = new Databricks.Inputs.StorageCredentialAzureServicePrincipalArgs\n        {\n            DirectoryId = tenantId,\n            ApplicationId = extCred.ApplicationId,\n            ClientSecret = extCredAzureadApplicationPassword.Value,\n        },\n        Comment = \"Managed by TF\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            @this,\n        },\n    });\n\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"external\",\n        Url = Std.Format.Invoke(new()\n        {\n            Input = \"abfss://%s@%s.dfs.core.windows.net\",\n            Args = new[]\n            {\n                extStorage.Name,\n                extStorageAzurermStorageAccount.Name,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        CredentialName = external.Id,\n        Comment = \"Managed by TF\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            @this,\n        },\n    });\n\n    var someGrants = new Databricks.Grants(\"some\", new()\n    {\n        ExternalLocation = some.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.Any(extCred.DisplayName),\n\t\t\tAzureServicePrincipal: \u0026databricks.StorageCredentialAzureServicePrincipalArgs{\n\t\t\t\tDirectoryId:   pulumi.Any(tenantId),\n\t\t\t\tApplicationId: pulumi.Any(extCred.ApplicationId),\n\t\t\t\tClientSecret:  pulumi.Any(extCredAzureadApplicationPassword.Value),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthis,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"abfss://%s@%s.dfs.core.windows.net\",\n\t\t\tArgs: []interface{}{\n\t\t\t\textStorage.Name,\n\t\t\t\textStorageAzurermStorageAccount.Name,\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", 
\u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"external\"),\n\t\t\tUrl:            pulumi.String(invokeFormat.Result),\n\t\t\tCredentialName: external.ID(),\n\t\t\tComment:        pulumi.String(\"Managed by TF\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthis,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAzureServicePrincipalArgs;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(extCred.displayName())\n            .azureServicePrincipal(StorageCredentialAzureServicePrincipalArgs.builder()\n                .directoryId(tenantId)\n                .applicationId(extCred.applicationId())\n                .clientSecret(extCredAzureadApplicationPassword.value())\n                .build())\n            .comment(\"Managed by TF\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(this_)\n                .build());\n\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"external\")\n            .url(StdFunctions.format(FormatArgs.builder()\n                .input(\"abfss://%s@%s.dfs.core.windows.net\")\n                .args(                \n                    extStorage.name(),\n                    extStorageAzurermStorageAccount.name())\n                .build()).result())\n            .credentialName(external.id())\n            .comment(\"Managed by TF\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(this_)\n                .build());\n\n        var someGrants = new Grants(\"someGrants\", GrantsArgs.builder()\n            .externalLocation(some.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(                \n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    
properties:\n      name: ${extCred.displayName}\n      azureServicePrincipal:\n        directoryId: ${tenantId}\n        applicationId: ${extCred.applicationId}\n        clientSecret: ${extCredAzureadApplicationPassword.value}\n      comment: Managed by TF\n    options:\n      dependsOn:\n        - ${this}\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: external\n      url:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: abfss://%s@%s.dfs.core.windows.net\n            args:\n              - ${extStorage.name}\n              - ${extStorageAzurermStorageAccount.name}\n          return: result\n      credentialName: ${external.id}\n      comment: Managed by TF\n    options:\n      dependsOn:\n        - ${this}\n  someGrants:\n    type: databricks:Grants\n    name: some\n    properties:\n      externalLocation: ${some.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor GCP\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ext = new databricks.StorageCredential(\"ext\", {\n    name: \"the-creds\",\n    databricksGcpServiceAccount: {},\n});\nconst some = new databricks.ExternalLocation(\"some\", {\n    name: \"the-ext-location\",\n    url: `gs://${extBucket.name}`,\n    credentialName: ext.id,\n    comment: \"Managed by TF\",\n});\nconst someGrants = new databricks.Grants(\"some\", {\n    externalLocation: some.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\next = databricks.StorageCredential(\"ext\",\n    name=\"the-creds\",\n    databricks_gcp_service_account={})\nsome = databricks.ExternalLocation(\"some\",\n    name=\"the-ext-location\",\n    url=f\"gs://{ext_bucket['name']}\",\n    credential_name=ext.id,\n    comment=\"Managed by TF\")\nsome_grants = databricks.Grants(\"some\",\n    external_location=some.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ext = new Databricks.StorageCredential(\"ext\", new()\n    {\n        Name = \"the-creds\",\n        DatabricksGcpServiceAccount = null,\n    });\n\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"the-ext-location\",\n        Url = $\"gs://{extBucket.Name}\",\n        CredentialName = ext.Id,\n        Comment = \"Managed by TF\",\n    });\n\n    var someGrants = new Databricks.Grants(\"some\", new()\n    {\n        ExternalLocation = some.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\text, err := databricks.NewStorageCredential(ctx, \"ext\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName:                        pulumi.String(\"the-creds\"),\n\t\t\tDatabricksGcpServiceAccount: \u0026databricks.StorageCredentialDatabricksGcpServiceAccountArgs{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", \u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"the-ext-location\"),\n\t\t\tUrl:            pulumi.Sprintf(\"gs://%v\", extBucket.Name),\n\t\t\tCredentialName: ext.ID(),\n\t\t\tComment:        pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialDatabricksGcpServiceAccountArgs;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ext = new StorageCredential(\"ext\", StorageCredentialArgs.builder()\n            .name(\"the-creds\")\n            .databricksGcpServiceAccount(StorageCredentialDatabricksGcpServiceAccountArgs.builder()\n                .build())\n            .build());\n\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"the-ext-location\")\n            .url(String.format(\"gs://%s\", extBucket.name()))\n            .credentialName(ext.id())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var someGrants = new Grants(\"someGrants\", GrantsArgs.builder()\n            .externalLocation(some.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(                \n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ext:\n    type: databricks:StorageCredential\n    properties:\n      name: the-creds\n      databricksGcpServiceAccount: {}\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: the-ext-location\n      url: gs://${extBucket.name}\n      credentialName: ${ext.id}\n      
comment: Managed by TF\n  someGrants:\n    type: databricks:Grants\n    name: some\n    properties:\n      externalLocation: ${some.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nExample \u003cspan pulumi-lang-nodejs=\"`encryptionDetails`\" pulumi-lang-dotnet=\"`EncryptionDetails`\" pulumi-lang-go=\"`encryptionDetails`\" pulumi-lang-python=\"`encryption_details`\" pulumi-lang-yaml=\"`encryptionDetails`\" pulumi-lang-java=\"`encryptionDetails`\"\u003e`encryption_details`\u003c/span\u003e specifying SSE_S3 encryption:\n\n","properties":{"browseOnly":{"type":"boolean"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"createdAt":{"type":"integer","description":"Time at which this external location was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of external location creator.\n"},"credentialId":{"type":"string","description":"Unique ID of the location's storage credential.\n"},"credentialName":{"type":"string","description":"Name of the\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto use with this external location.\n"},"effectiveEnableFileEvents":{"type":"boolean"},"enableFileEvents":{"type":"boolean","description":"indicates if managed file events are enabled for this external location.  Requires \u003cspan pulumi-lang-nodejs=\"`fileEventQueue`\" pulumi-lang-dotnet=\"`FileEventQueue`\" pulumi-lang-go=\"`fileEventQueue`\" pulumi-lang-python=\"`file_event_queue`\" pulumi-lang-yaml=\"`fileEventQueue`\" pulumi-lang-java=\"`fileEventQueue`\"\u003e`file_event_queue`\u003c/span\u003e block.\n"},"encryptionDetails":{"$ref":"#/types/databricks:index/ExternalLocationEncryptionDetails:ExternalLocationEncryptionDetails"},"fallback":{"type":"boolean","description":"Indicates whether fallback mode is enabled for this external location. When fallback mode is enabled (disabled by default), the access to the location falls back to cluster credentials if UC credentials are not sufficient.\n"},"fileEventQueue":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueue:ExternalLocationFileEventQueue"},"forceDestroy":{"type":"boolean","description":"Destroy external location regardless of its dependents.\n"},"forceUpdate":{"type":"boolean","description":"Update external location regardless of its dependents.\n"},"isolationMode":{"type":"string","description":"Whether the external location is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the external location to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of External Location, which must be unique within the databricks_metastore. 
Change forces creation of a new resource.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the external location owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalLocationProviderConfig:ExternalLocationProviderConfig"},"readOnly":{"type":"boolean","description":"Indicates whether the external location is read-only.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the external location\n"},"updatedAt":{"type":"integer","description":"Time at which this external location was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the external location.\n"},"url":{"type":"string","description":"Path URL in cloud storage, of the form: `s3://[bucket-host]/[bucket-dir]` (AWS), `abfss://[user]@[host]/[path]` (Azure), `gs://[bucket-host]/[bucket-dir]` (GCP).   If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).\n"}},"required":["browseOnly","createdAt","createdBy","credentialId","credentialName","effectiveEnableFileEvents","isolationMode","metastoreId","name","owner","updatedAt","updatedBy","url"],"inputProperties":{"comment":{"type":"string","description":"User-supplied free-form text.\n"},"credentialName":{"type":"string","description":"Name of the\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto use with this external location.\n"},"enableFileEvents":{"type":"boolean","description":"indicates if managed file events are enabled for this external location.  Requires \u003cspan pulumi-lang-nodejs=\"`fileEventQueue`\" pulumi-lang-dotnet=\"`FileEventQueue`\" pulumi-lang-go=\"`fileEventQueue`\" pulumi-lang-python=\"`file_event_queue`\" pulumi-lang-yaml=\"`fileEventQueue`\" pulumi-lang-java=\"`fileEventQueue`\"\u003e`file_event_queue`\u003c/span\u003e block.\n"},"encryptionDetails":{"$ref":"#/types/databricks:index/ExternalLocationEncryptionDetails:ExternalLocationEncryptionDetails"},"fallback":{"type":"boolean","description":"Indicates whether fallback mode is enabled for this external location. When fallback mode is enabled (disabled by default), the access to the location falls back to cluster credentials if UC credentials are not sufficient.\n"},"fileEventQueue":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueue:ExternalLocationFileEventQueue"},"forceDestroy":{"type":"boolean","description":"Destroy external location regardless of its dependents.\n"},"forceUpdate":{"type":"boolean","description":"Update external location regardless of its dependents.\n"},"isolationMode":{"type":"string","description":"Whether the external location is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. 
Setting the external location to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of External Location, which must be unique within the databricks_metastore. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the external location owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalLocationProviderConfig:ExternalLocationProviderConfig"},"readOnly":{"type":"boolean","description":"Indicates whether the external location is read-only.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the external location\n"},"url":{"type":"string","description":"Path URL in cloud storage, of the form: `s3://[bucket-host]/[bucket-dir]` (AWS), `abfss://[user]@[host]/[path]` (Azure), `gs://[bucket-host]/[bucket-dir]` (GCP).   If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).\n"}},"requiredInputs":["credentialName","url"],"stateInputs":{"description":"Input properties used for looking up and filtering ExternalLocation resources.\n","properties":{"browseOnly":{"type":"boolean"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"createdAt":{"type":"integer","description":"Time at which this external location was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of external location creator.\n"},"credentialId":{"type":"string","description":"Unique ID of the location's storage credential.\n"},"credentialName":{"type":"string","description":"Name of the\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto use with this external location.\n"},"effectiveEnableFileEvents":{"type":"boolean"},"enableFileEvents":{"type":"boolean","description":"indicates if managed file events are enabled for this external location.  Requires \u003cspan pulumi-lang-nodejs=\"`fileEventQueue`\" pulumi-lang-dotnet=\"`FileEventQueue`\" pulumi-lang-go=\"`fileEventQueue`\" pulumi-lang-python=\"`file_event_queue`\" pulumi-lang-yaml=\"`fileEventQueue`\" pulumi-lang-java=\"`fileEventQueue`\"\u003e`file_event_queue`\u003c/span\u003e block.\n"},"encryptionDetails":{"$ref":"#/types/databricks:index/ExternalLocationEncryptionDetails:ExternalLocationEncryptionDetails"},"fallback":{"type":"boolean","description":"Indicates whether fallback mode is enabled for this external location. 
When fallback mode is enabled (disabled by default), the access to the location falls back to cluster credentials if UC credentials are not sufficient.\n"},"fileEventQueue":{"$ref":"#/types/databricks:index/ExternalLocationFileEventQueue:ExternalLocationFileEventQueue"},"forceDestroy":{"type":"boolean","description":"Destroy external location regardless of its dependents.\n"},"forceUpdate":{"type":"boolean","description":"Update external location regardless of its dependents.\n"},"isolationMode":{"type":"string","description":"Whether the external location is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the external location to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of External Location, which must be unique within the databricks_metastore. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the external location owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalLocationProviderConfig:ExternalLocationProviderConfig"},"readOnly":{"type":"boolean","description":"Indicates whether the external location is read-only.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the external location\n"},"updatedAt":{"type":"integer","description":"Time at which this external location was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the external location.\n"},"url":{"type":"string","description":"Path URL in cloud storage, of the form: `s3://[bucket-host]/[bucket-dir]` (AWS), `abfss://[user]@[host]/[path]` (Azure), `gs://[bucket-host]/[bucket-dir]` (GCP).   If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).\n"}},"type":"object"}},"databricks:index/externalMetadata:ExternalMetadata":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nTo enrich lineage with workloads that are run outside of Databricks (for example, first mile ETL or last mile BI),\nUnity Catalog is introducing the external metadata object. UC lets you add external metadata objects to augment the data lineage it captures automatically, giving you an end-to-end lineage view in UC. 
\nThis is useful when you want to capture where data came from (for example, Salesforce or MySQL) before it was ingested into UC or where data is being consumed outside UC (for example, Tableau or PowerBI).\n\n\u003e **Note** This resource can only be used with an workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ExternalMetadata(\"this\", {\n    name: \"security_events_stream\",\n    systemType: \"KAFKA\",\n    entityType: \"Topic\",\n    url: \"https://kafka.com/12345\",\n    description: \"A stream of security related events in the critical services.\",\n    columns: [\n        \"type\",\n        \"message\",\n        \"details\",\n        \"date\",\n        \"time\",\n    ],\n    properties: {\n        topic: \"prod.security.events.raw\",\n        enabled: \"true\",\n        format: \"zstd\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ExternalMetadata(\"this\",\n    name=\"security_events_stream\",\n    system_type=\"KAFKA\",\n    entity_type=\"Topic\",\n    url=\"https://kafka.com/12345\",\n    description=\"A stream of security related events in the critical services.\",\n    columns=[\n        \"type\",\n        \"message\",\n        \"details\",\n        \"date\",\n        \"time\",\n    ],\n    properties={\n        \"topic\": \"prod.security.events.raw\",\n        \"enabled\": \"true\",\n        \"format\": \"zstd\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ExternalMetadata(\"this\", new()\n    {\n        Name = \"security_events_stream\",\n        SystemType = \"KAFKA\",\n        EntityType = \"Topic\",\n        Url = \"https://kafka.com/12345\",\n        Description = \"A stream of security related events in the critical services.\",\n        Columns = new[]\n        {\n            \"type\",\n            \"message\",\n            \"details\",\n            \"date\",\n            \"time\",\n        },\n        Properties = \n        {\n            { \"topic\", \"prod.security.events.raw\" },\n            { \"enabled\", \"true\" },\n            { \"format\", \"zstd\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewExternalMetadata(ctx, \"this\", \u0026databricks.ExternalMetadataArgs{\n\t\t\tName:        pulumi.String(\"security_events_stream\"),\n\t\t\tSystemType:  pulumi.String(\"KAFKA\"),\n\t\t\tEntityType:  pulumi.String(\"Topic\"),\n\t\t\tUrl:         pulumi.String(\"https://kafka.com/12345\"),\n\t\t\tDescription: pulumi.String(\"A stream of security related events in the critical services.\"),\n\t\t\tColumns: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"type\"),\n\t\t\t\tpulumi.String(\"message\"),\n\t\t\t\tpulumi.String(\"details\"),\n\t\t\t\tpulumi.String(\"date\"),\n\t\t\t\tpulumi.String(\"time\"),\n\t\t\t},\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"topic\":   pulumi.String(\"prod.security.events.raw\"),\n\t\t\t\t\"enabled\": pulumi.String(\"true\"),\n\t\t\t\t\"format\":  pulumi.String(\"zstd\"),\n\t\t\t},\n\t\t})\n\t\tif err != 
nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ExternalMetadata;\nimport com.pulumi.databricks.ExternalMetadataArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ExternalMetadata(\"this\", ExternalMetadataArgs.builder()\n            .name(\"security_events_stream\")\n            .systemType(\"KAFKA\")\n            .entityType(\"Topic\")\n            .url(\"https://kafka.com/12345\")\n            .description(\"A stream of security related events in the critical services.\")\n            .columns(            \n                \"type\",\n                \"message\",\n                \"details\",\n                \"date\",\n                \"time\")\n            .properties(Map.ofEntries(\n                Map.entry(\"topic\", \"prod.security.events.raw\"),\n                Map.entry(\"enabled\", \"true\"),\n                Map.entry(\"format\", \"zstd\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ExternalMetadata\n    properties:\n      name: security_events_stream\n      systemType: KAFKA\n      entityType: Topic\n      url: https://kafka.com/12345\n      description: A stream of security related events in the critical services.\n      columns:\n        - type\n        - message\n        - details\n        - date\n        - time\n      properties:\n        topic: prod.security.events.raw\n        enabled: 'true'\n        format: zstd\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"columns":{"type":"array","items":{"type":"string"},"description":"List of columns associated with the external metadata object\n"},"createTime":{"type":"string","description":"(string) - Time at which this external metadata object was created\n"},"createdBy":{"type":"string","description":"(string) - Username of external metadata object creator\n"},"description":{"type":"string","description":"User-provided free-form text description\n"},"entityType":{"type":"string","description":"Type of entity within the external system\n"},"metastoreId":{"type":"string","description":"(string) - Unique identifier of parent metastore\n"},"name":{"type":"string","description":"Name of the external metadata object\n"},"owner":{"type":"string","description":"Owner of the external metadata object\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of key-value properties attached to the external metadata object\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalMetadataProviderConfig:ExternalMetadataProviderConfig","description":"Configure the provider for management through account provider.\n"},"systemType":{"type":"string","description":"Type of external system. 
Possible values are: `AMAZON_REDSHIFT`, `AZURE_SYNAPSE`, `CONFLUENT`, `DATABRICKS`, `GOOGLE_BIGQUERY`, `KAFKA`, `LOOKER`, `MICROSOFT_FABRIC`, `MICROSOFT_SQL_SERVER`, `MONGODB`, `MYSQL`, `ORACLE`, `OTHER`, `POSTGRESQL`, `POWER_BI`, `SALESFORCE`, `SAP`, `SERVICENOW`, `SNOWFLAKE`, `STREAM_NATIVE`, `TABLEAU`, `TERADATA`, `WORKDAY`\n"},"updateTime":{"type":"string","description":"(string) - Time at which this external metadata object was last modified\n"},"updatedBy":{"type":"string","description":"(string) - Username of user who last modified external metadata object\n"},"url":{"type":"string","description":"URL associated with the external metadata object\n"}},"required":["createTime","createdBy","entityType","metastoreId","name","systemType","updateTime","updatedBy"],"inputProperties":{"columns":{"type":"array","items":{"type":"string"},"description":"List of columns associated with the external metadata object\n"},"description":{"type":"string","description":"User-provided free-form text description\n"},"entityType":{"type":"string","description":"Type of entity within the external system\n"},"name":{"type":"string","description":"Name of the external metadata object\n"},"owner":{"type":"string","description":"Owner of the external metadata object\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of key-value properties attached to the external metadata object\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalMetadataProviderConfig:ExternalMetadataProviderConfig","description":"Configure the provider for management through account provider.\n"},"systemType":{"type":"string","description":"Type of external system. Possible values are: `AMAZON_REDSHIFT`, `AZURE_SYNAPSE`, `CONFLUENT`, `DATABRICKS`, `GOOGLE_BIGQUERY`, `KAFKA`, `LOOKER`, `MICROSOFT_FABRIC`, `MICROSOFT_SQL_SERVER`, `MONGODB`, `MYSQL`, `ORACLE`, `OTHER`, `POSTGRESQL`, `POWER_BI`, `SALESFORCE`, `SAP`, `SERVICENOW`, `SNOWFLAKE`, `STREAM_NATIVE`, `TABLEAU`, `TERADATA`, `WORKDAY`\n"},"url":{"type":"string","description":"URL associated with the external metadata object\n"}},"requiredInputs":["entityType","systemType"],"stateInputs":{"description":"Input properties used for looking up and filtering ExternalMetadata resources.\n","properties":{"columns":{"type":"array","items":{"type":"string"},"description":"List of columns associated with the external metadata object\n"},"createTime":{"type":"string","description":"(string) - Time at which this external metadata object was created\n"},"createdBy":{"type":"string","description":"(string) - Username of external metadata object creator\n"},"description":{"type":"string","description":"User-provided free-form text description\n"},"entityType":{"type":"string","description":"Type of entity within the external system\n"},"metastoreId":{"type":"string","description":"(string) - Unique identifier of parent metastore\n"},"name":{"type":"string","description":"Name of the external metadata object\n"},"owner":{"type":"string","description":"Owner of the external metadata object\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of key-value properties attached to the external metadata object\n"},"providerConfig":{"$ref":"#/types/databricks:index/ExternalMetadataProviderConfig:ExternalMetadataProviderConfig","description":"Configure the provider for management through account provider.\n"},"systemType":{"type":"string","description":"Type of external system. 
Possible values are: `AMAZON_REDSHIFT`, `AZURE_SYNAPSE`, `CONFLUENT`, `DATABRICKS`, `GOOGLE_BIGQUERY`, `KAFKA`, `LOOKER`, `MICROSOFT_FABRIC`, `MICROSOFT_SQL_SERVER`, `MONGODB`, `MYSQL`, `ORACLE`, `OTHER`, `POSTGRESQL`, `POWER_BI`, `SALESFORCE`, `SAP`, `SERVICENOW`, `SNOWFLAKE`, `STREAM_NATIVE`, `TABLEAU`, `TERADATA`, `WORKDAY`\n"},"updateTime":{"type":"string","description":"(string) - Time at which this external metadata object was last modified\n"},"updatedBy":{"type":"string","description":"(string) - Username of user who last modified external metadata object\n"},"url":{"type":"string","description":"URL associated with the external metadata object\n"}},"type":"object"}},"databricks:index/featureEngineeringFeature:FeatureEngineeringFeature":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"description":{"type":"string","description":"The description of the feature\n"},"filterCondition":{"type":"string","description":"The filter condition applied to the source data before aggregation\n"},"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"function":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureFunction:FeatureEngineeringFeatureFunction","description":"The function by which the feature is computed\n"},"inputs":{"type":"array","items":{"type":"string"},"description":"The input columns from which the feature is computed\n"},"lineageContext":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureLineageContext:FeatureEngineeringFeatureLineageContext","description":"WARNING: This field is primarily intended for internal use by Databricks systems and\nis automatically populated when features are created through Databricks notebooks or jobs.\nUsers should not manually set this field as incorrect values may lead to inaccurate lineage tracking or unexpected behavior.\nThis field will be set by feature-engineering client and should be left unset by SDK and terraform users\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureProviderConfig:FeatureEngineeringFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"source":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSource:FeatureEngineeringFeatureSource","description":"The data source of the feature\n"},"timeWindow":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindow:FeatureEngineeringFeatureTimeWindow","description":"The time window in which the feature is computed\n"}},"required":["fullName","function","inputs","source"],"inputProperties":{"description":{"type":"string","description":"The description of the feature\n"},"filterCondition":{"type":"string","description":"The filter condition applied to the source data before aggregation\n"},"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"function":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureFunction:FeatureEngineeringFeatureFunction","description":"The function by which the feature is computed\n"},"inputs":{"type":"array","items":{"type":"string"},"description":"The input columns from which the feature is computed\n"},"lineageContext":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureLineageContext:FeatureEngineeringFeatureLineageContext","description":"WARNING: This field is primarily 
intended for internal use by Databricks systems and\nis automatically populated when features are created through Databricks notebooks or jobs.\nUsers should not manually set this field as incorrect values may lead to inaccurate lineage tracking or unexpected behavior.\nThis field will be set by feature-engineering client and should be left unset by SDK and terraform users\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureProviderConfig:FeatureEngineeringFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"source":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSource:FeatureEngineeringFeatureSource","description":"The data source of the feature\n"},"timeWindow":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindow:FeatureEngineeringFeatureTimeWindow","description":"The time window in which the feature is computed\n"}},"requiredInputs":["fullName","function","inputs","source"],"stateInputs":{"description":"Input properties used for looking up and filtering FeatureEngineeringFeature resources.\n","properties":{"description":{"type":"string","description":"The description of the feature\n"},"filterCondition":{"type":"string","description":"The filter condition applied to the source data before aggregation\n"},"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"function":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureFunction:FeatureEngineeringFeatureFunction","description":"The function by which the feature is computed\n"},"inputs":{"type":"array","items":{"type":"string"},"description":"The input columns from which the feature is computed\n"},"lineageContext":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureLineageContext:FeatureEngineeringFeatureLineageContext","description":"WARNING: This field is primarily intended for internal use by Databricks systems and\nis automatically populated when features are created through Databricks notebooks or jobs.\nUsers should not manually set this field as incorrect values may lead to inaccurate lineage tracking or unexpected behavior.\nThis field will be set by feature-engineering client and should be left unset by SDK and terraform users\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureProviderConfig:FeatureEngineeringFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"source":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureSource:FeatureEngineeringFeatureSource","description":"The data source of the feature\n"},"timeWindow":{"$ref":"#/types/databricks:index/FeatureEngineeringFeatureTimeWindow:FeatureEngineeringFeatureTimeWindow","description":"The time window in which the feature is computed\n"}},"type":"object"}},"databricks:index/featureEngineeringKafkaConfig:FeatureEngineeringKafkaConfig":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"authConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigAuthConfig:FeatureEngineeringKafkaConfigAuthConfig","description":"Authentication configuration for connection to topics\n"},"backfillSource":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigBackfillSource:FeatureEngineeringKafkaConfigBackfillSource","description":"A user-provided and managed source 
for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config.\nIn the future, a separate table will be maintained by Databricks for forward filling data.\nThe schema for this source must match exactly that of the key and value schemas specified for this Kafka config\n"},"bootstrapServers":{"type":"string","description":"A comma-separated list of host/port pairs pointing to Kafka cluster\n"},"extraOptions":{"type":"object","additionalProperties":{"type":"string"},"description":"Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)\n"},"keySchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigKeySchema:FeatureEngineeringKafkaConfigKeySchema","description":"Schema configuration for extracting message keys from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"},"name":{"type":"string","description":"(string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature.\nCan be distinct from topic name\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigProviderConfig:FeatureEngineeringKafkaConfigProviderConfig","description":"Configure the provider for management through account provider.\n"},"subscriptionMode":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigSubscriptionMode:FeatureEngineeringKafkaConfigSubscriptionMode","description":"Options to configure which Kafka topics to pull data from\n"},"valueSchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigValueSchema:FeatureEngineeringKafkaConfigValueSchema","description":"Schema configuration for extracting message values from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"}},"required":["authConfig","bootstrapServers","name","subscriptionMode"],"inputProperties":{"authConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigAuthConfig:FeatureEngineeringKafkaConfigAuthConfig","description":"Authentication configuration for connection to topics\n"},"backfillSource":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigBackfillSource:FeatureEngineeringKafkaConfigBackfillSource","description":"A user-provided and managed source for backfilling data. 
Historical data is used when creating a training set from streaming features linked to this Kafka config.\nIn the future, a separate table will be maintained by Databricks for forward filling data.\nThe schema for this source must match exactly that of the key and value schemas specified for this Kafka config\n"},"bootstrapServers":{"type":"string","description":"A comma-separated list of host/port pairs pointing to Kafka cluster\n"},"extraOptions":{"type":"object","additionalProperties":{"type":"string"},"description":"Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)\n"},"keySchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigKeySchema:FeatureEngineeringKafkaConfigKeySchema","description":"Schema configuration for extracting message keys from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigProviderConfig:FeatureEngineeringKafkaConfigProviderConfig","description":"Configure the provider for management through account provider.\n"},"subscriptionMode":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigSubscriptionMode:FeatureEngineeringKafkaConfigSubscriptionMode","description":"Options to configure which Kafka topics to pull data from\n"},"valueSchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigValueSchema:FeatureEngineeringKafkaConfigValueSchema","description":"Schema configuration for extracting message values from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"}},"requiredInputs":["authConfig","bootstrapServers","subscriptionMode"],"stateInputs":{"description":"Input properties used for looking up and filtering FeatureEngineeringKafkaConfig resources.\n","properties":{"authConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigAuthConfig:FeatureEngineeringKafkaConfigAuthConfig","description":"Authentication configuration for connection to topics\n"},"backfillSource":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigBackfillSource:FeatureEngineeringKafkaConfigBackfillSource","description":"A user-provided and managed source for backfilling data. 
Historical data is used when creating a training set from streaming features linked to this Kafka config.\nIn the future, a separate table will be maintained by Databricks for forward filling data.\nThe schema for this source must match exactly that of the key and value schemas specified for this Kafka config\n"},"bootstrapServers":{"type":"string","description":"A comma-separated list of host/port pairs pointing to Kafka cluster\n"},"extraOptions":{"type":"object","additionalProperties":{"type":"string"},"description":"Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)\n"},"keySchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigKeySchema:FeatureEngineeringKafkaConfigKeySchema","description":"Schema configuration for extracting message keys from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"},"name":{"type":"string","description":"(string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature.\nCan be distinct from topic name\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigProviderConfig:FeatureEngineeringKafkaConfigProviderConfig","description":"Configure the provider for management through account provider.\n"},"subscriptionMode":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigSubscriptionMode:FeatureEngineeringKafkaConfigSubscriptionMode","description":"Options to configure which Kafka topics to pull data from\n"},"valueSchema":{"$ref":"#/types/databricks:index/FeatureEngineeringKafkaConfigValueSchema:FeatureEngineeringKafkaConfigValueSchema","description":"Schema configuration for extracting message values from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"}},"type":"object"}},"databricks:index/featureEngineeringMaterializedFeature:FeatureEngineeringMaterializedFeature":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"cronSchedule":{"type":"string","description":"The quartz cron expression that defines the schedule of the materialization pipeline. 
The schedule is evaluated in the UTC timezone\n"},"featureName":{"type":"string","description":"The full name of the feature in Unity Catalog\n"},"lastMaterializationTime":{"type":"string","description":"(string) - The timestamp when the pipeline last ran and updated the materialized feature values.\nIf the pipeline has not run yet, this field will be null\n"},"materializedFeatureId":{"type":"string","description":"(string) - Unique identifier for the materialized feature\n"},"offlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOfflineStoreConfig:FeatureEngineeringMaterializedFeatureOfflineStoreConfig"},"onlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOnlineStoreConfig:FeatureEngineeringMaterializedFeatureOnlineStoreConfig"},"pipelineScheduleState":{"type":"string","description":"The schedule state of the materialization pipeline. Possible values are: `ACTIVE`, `PAUSED`, `SNAPSHOT`\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureProviderConfig:FeatureEngineeringMaterializedFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"tableName":{"type":"string","description":"(string) - The fully qualified Unity Catalog path to the table containing the materialized feature (Delta table or Lakebase table). Output only\n"}},"required":["featureName","lastMaterializationTime","materializedFeatureId","tableName"],"inputProperties":{"cronSchedule":{"type":"string","description":"The quartz cron expression that defines the schedule of the materialization pipeline. The schedule is evaluated in the UTC timezone\n"},"featureName":{"type":"string","description":"The full name of the feature in Unity Catalog\n"},"offlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOfflineStoreConfig:FeatureEngineeringMaterializedFeatureOfflineStoreConfig"},"onlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOnlineStoreConfig:FeatureEngineeringMaterializedFeatureOnlineStoreConfig"},"pipelineScheduleState":{"type":"string","description":"The schedule state of the materialization pipeline. Possible values are: `ACTIVE`, `PAUSED`, `SNAPSHOT`\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureProviderConfig:FeatureEngineeringMaterializedFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"}},"requiredInputs":["featureName"],"stateInputs":{"description":"Input properties used for looking up and filtering FeatureEngineeringMaterializedFeature resources.\n","properties":{"cronSchedule":{"type":"string","description":"The quartz cron expression that defines the schedule of the materialization pipeline. 
The schedule is evaluated in the UTC timezone\n"},"featureName":{"type":"string","description":"The full name of the feature in Unity Catalog\n"},"lastMaterializationTime":{"type":"string","description":"(string) - The timestamp when the pipeline last ran and updated the materialized feature values.\nIf the pipeline has not run yet, this field will be null\n"},"materializedFeatureId":{"type":"string","description":"(string) - Unique identifier for the materialized feature\n"},"offlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOfflineStoreConfig:FeatureEngineeringMaterializedFeatureOfflineStoreConfig"},"onlineStoreConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureOnlineStoreConfig:FeatureEngineeringMaterializedFeatureOnlineStoreConfig"},"pipelineScheduleState":{"type":"string","description":"The schedule state of the materialization pipeline. Possible values are: `ACTIVE`, `PAUSED`, `SNAPSHOT`\n"},"providerConfig":{"$ref":"#/types/databricks:index/FeatureEngineeringMaterializedFeatureProviderConfig:FeatureEngineeringMaterializedFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"},"tableName":{"type":"string","description":"(string) - The fully qualified Unity Catalog path to the table containing the materialized feature (Delta table or Lakebase table). Output only\n"}},"type":"object"}},"databricks:index/file:File":{"description":"This resource allows uploading and downloading files in databricks_volume.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e Currently the limit is 5GiB in octet-stream.\n\n\u003e Currently, only UC volumes are supported. The list of destinations may change.\n\n## Example Usage\n\nIn order to manage a file on Unity Catalog Volumes with Pulumi, you must specify the \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e attribute containing the full path to the file on the local filesystem.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    metastoreId: thisDatabricksMetastore.id,\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.name,\n    name: \"things\",\n    comment: \"this schema is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst _this = new databricks.Volume(\"this\", {\n    name: \"quickstart_volume\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    volumeType: \"MANAGED\",\n    comment: \"this volume is managed by terraform\",\n});\nconst thisFile = new databricks.File(\"this\", {\n    source: \"/full/path/on/local/system\",\n    path: pulumi.interpolate`${_this.volumePath}/fileName`,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    metastore_id=this_databricks_metastore[\"id\"],\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    
catalog_name=sandbox.name,\n    name=\"things\",\n    comment=\"this schema is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nthis = databricks.Volume(\"this\",\n    name=\"quickstart_volume\",\n    catalog_name=sandbox.name,\n    schema_name=things.name,\n    volume_type=\"MANAGED\",\n    comment=\"this volume is managed by terraform\")\nthis_file = databricks.File(\"this\",\n    source=\"/full/path/on/local/system\",\n    path=this.volume_path.apply(lambda volume_path: f\"{volume_path}/fileName\"))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        MetastoreId = thisDatabricksMetastore.Id,\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Name,\n        Name = \"things\",\n        Comment = \"this schema is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var @this = new Databricks.Volume(\"this\", new()\n    {\n        Name = \"quickstart_volume\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        VolumeType = \"MANAGED\",\n        Comment = \"this volume is managed by terraform\",\n    });\n\n    var thisFile = new Databricks.File(\"this\", new()\n    {\n        Source = \"/full/path/on/local/system\",\n        Path = @this.VolumePath.Apply(volumePath =\u003e $\"{volumePath}/fileName\"),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tMetastoreId: pulumi.Any(thisDatabricksMetastore.Id),\n\t\t\tName:        pulumi.String(\"sandbox\"),\n\t\t\tComment:     pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.Name,\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this schema is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewVolume(ctx, \"this\", \u0026databricks.VolumeArgs{\n\t\t\tName:        pulumi.String(\"quickstart_volume\"),\n\t\t\tCatalogName: sandbox.Name,\n\t\t\tSchemaName:  things.Name,\n\t\t\tVolumeType:  pulumi.String(\"MANAGED\"),\n\t\t\tComment:     pulumi.String(\"this volume is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewFile(ctx, \"this\", \u0026databricks.FileArgs{\n\t\t\tSource: pulumi.String(\"/full/path/on/local/system\"),\n\t\t\tPath: this.VolumePath.ApplyT(func(volumePath string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"%v/fileName\", volumePath), 
nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.Volume;\nimport com.pulumi.databricks.VolumeArgs;\nimport com.pulumi.databricks.File;\nimport com.pulumi.databricks.FileArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .metastoreId(thisDatabricksMetastore.id())\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.name())\n            .name(\"things\")\n            .comment(\"this schema is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var this_ = new Volume(\"this\", VolumeArgs.builder()\n            .name(\"quickstart_volume\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .volumeType(\"MANAGED\")\n            .comment(\"this volume is managed by terraform\")\n            .build());\n\n        var thisFile = new File(\"thisFile\", FileArgs.builder()\n            .source(\"/full/path/on/local/system\")\n            .path(this_.volumePath().applyValue(_volumePath -\u003e String.format(\"%s/fileName\", _volumePath)))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      metastoreId: ${thisDatabricksMetastore.id}\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.name}\n      name: things\n      comment: this schema is managed by terraform\n      properties:\n        kind: various\n  this:\n    type: databricks:Volume\n    properties:\n      name: quickstart_volume\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      volumeType: MANAGED\n      comment: this volume is managed by terraform\n  thisFile:\n    type: databricks:File\n    name: this\n    properties:\n      source: /full/path/on/local/system\n      path: ${this.volumePath}/fileName\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nYou can also inline sources through \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e  attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst initScript = new databricks.File(\"init_script\", {\n    contentBase64: 
std.base64encode({\n        input: `#!/bin/bash\necho \\\\\"Hello World\\\\\"\n`,\n    }).then(invoke =\u003e invoke.result),\n    path: `${_this.volumePath}/fileName`,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\ninit_script = databricks.File(\"init_script\",\n    content_base64=std.base64encode(input=\"\"\"#!/bin/bash\necho \\\"Hello World\\\"\n\"\"\").result,\n    path=f\"{this['volumePath']}/fileName\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var initScript = new Databricks.File(\"init_script\", new()\n    {\n        ContentBase64 = Std.Base64encode.Invoke(new()\n        {\n            Input = @\"#!/bin/bash\necho \\\"\"Hello World\\\"\"\n\",\n        }).Apply(invoke =\u003e invoke.Result),\n        Path = $\"{@this.VolumePath}/fileName\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinvokeBase64encode, err := std.Base64encode(ctx, \u0026std.Base64encodeArgs{\n\t\t\tInput: \"#!/bin/bash\\necho \\\\\\\"Hello World\\\\\\\"\\n\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewFile(ctx, \"init_script\", \u0026databricks.FileArgs{\n\t\t\tContentBase64: pulumi.String(invokeBase64encode.Result),\n\t\t\tPath:          pulumi.Sprintf(\"%v/fileName\", this.VolumePath),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.File;\nimport com.pulumi.databricks.FileArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.Base64encodeArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var initScript = new File(\"initScript\", FileArgs.builder()\n            .contentBase64(StdFunctions.base64encode(Base64encodeArgs.builder()\n                .input(\"\"\"\n#!/bin/bash\necho \\\"Hello World\\\"\n                \"\"\")\n                .build()).result())\n            .path(String.format(\"%s/fileName\", this_.volumePath()))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  initScript:\n    type: databricks:File\n    name: init_script\n    properties:\n      contentBase64:\n        fn::invoke:\n          function: std:base64encode\n          arguments:\n            input: |\n              #!/bin/bash\n              echo \\\"Hello World\\\"\n          return: result\n      path: ${this.volumePath}/fileName\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceFile\n\" pulumi-lang-dotnet=\" databricks.WorkspaceFile\n\" pulumi-lang-go=\" WorkspaceFile\n\" pulumi-lang-python=\" WorkspaceFile\n\" pulumi-lang-yaml=\" databricks.WorkspaceFile\n\" pulumi-lang-java=\" 
databricks.WorkspaceFile\n\"\u003e databricks.WorkspaceFile\n\u003c/span\u003e* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003eto manage [volumes within Unity Catalog](https://docs.databricks.com/en/connect/unity-catalog/volumes.html).\n\n","properties":{"contentBase64":{"type":"string","description":"Contents in base 64 format. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e.\n"},"fileSize":{"type":"integer","description":"The file size of the file that is being tracked by this resource in bytes.\n"},"md5":{"type":"string"},"modificationTime":{"type":"string","description":"The last time stamp when the file was modified\n"},"path":{"type":"string","description":"The path of the file in which you wish to save. For example, `/Volumes/main/default/volume1/file.txt`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/FileProviderConfig:FileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteFileModified":{"type":"boolean"},"source":{"type":"string","description":"The full absolute path to the file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"required":["fileSize","modificationTime","path"],"inputProperties":{"contentBase64":{"type":"string","description":"Contents in base 64 format. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e.\n"},"md5":{"type":"string"},"path":{"type":"string","description":"The path of the file in which you wish to save. For example, `/Volumes/main/default/volume1/file.txt`.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/FileProviderConfig:FileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteFileModified":{"type":"boolean"},"source":{"type":"string","description":"The full absolute path to the file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"requiredInputs":["path"],"stateInputs":{"description":"Input properties used for looking up and filtering File resources.\n","properties":{"contentBase64":{"type":"string","description":"Contents in base 64 format. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e.\n"},"fileSize":{"type":"integer","description":"The file size of the file that is being tracked by this resource in bytes.\n"},"md5":{"type":"string"},"modificationTime":{"type":"string","description":"The last time stamp when the file was modified\n"},"path":{"type":"string","description":"The path of the file in which you wish to save. For example, `/Volumes/main/default/volume1/file.txt`.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/FileProviderConfig:FileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"remoteFileModified":{"type":"boolean"},"source":{"type":"string","description":"The full absolute path to the file. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/gitCredential:GitCredential":{"description":"This resource allows you to manage credentials for [Databricks Repos](https://docs.databricks.com/repos.html) using [Git Credentials API](https://docs.databricks.com/dev-tools/api/latest/gitcredentials.html).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n### Git credential that uses personal access token\n\nYou can declare Pulumi-managed Git credential using following code:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ado = new databricks.GitCredential(\"ado\", {\n    gitUsername: \"myuser\",\n    gitProvider: \"azureDevOpsServices\",\n    personalAccessToken: \"sometoken\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nado = databricks.GitCredential(\"ado\",\n    git_username=\"myuser\",\n    git_provider=\"azureDevOpsServices\",\n    personal_access_token=\"sometoken\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ado = new Databricks.GitCredential(\"ado\", new()\n    {\n        GitUsername = \"myuser\",\n        GitProvider = \"azureDevOpsServices\",\n        PersonalAccessToken = \"sometoken\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGitCredential(ctx, \"ado\", \u0026databricks.GitCredentialArgs{\n\t\t\tGitUsername:         pulumi.String(\"myuser\"),\n\t\t\tGitProvider:         pulumi.String(\"azureDevOpsServices\"),\n\t\t\tPersonalAccessToken: pulumi.String(\"sometoken\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.GitCredential;\nimport 
com.pulumi.databricks.GitCredentialArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ado = new GitCredential(\"ado\", GitCredentialArgs.builder()\n            .gitUsername(\"myuser\")\n            .gitProvider(\"azureDevOpsServices\")\n            .personalAccessToken(\"sometoken\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ado:\n    type: databricks:GitCredential\n    properties:\n      gitUsername: myuser\n      gitProvider: azureDevOpsServices\n      personalAccessToken: sometoken\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Git credential configuration for Azure Service Principal and Azure DevOps\n\nDatabricks now supports Azure service principal federation to Azure DevOps.  Follow the [documentation](https://learn.microsoft.com/en-us/azure/databricks/repos/automate-with-ms-entra) on how to configure service principal federation, and after everything is configured, it could be used as simple as:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ado = new databricks.GitCredential(\"ado\", {gitProvider: \"azureDevOpsServicesAad\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nado = databricks.GitCredential(\"ado\", git_provider=\"azureDevOpsServicesAad\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ado = new Databricks.GitCredential(\"ado\", new()\n    {\n        GitProvider = \"azureDevOpsServicesAad\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGitCredential(ctx, \"ado\", \u0026databricks.GitCredentialArgs{\n\t\t\tGitProvider: pulumi.String(\"azureDevOpsServicesAad\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.GitCredential;\nimport com.pulumi.databricks.GitCredentialArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ado = new GitCredential(\"ado\", GitCredentialArgs.builder()\n            .gitProvider(\"azureDevOpsServicesAad\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ado:\n    type: databricks:GitCredential\n    properties:\n      gitProvider: azureDevOpsServicesAad\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" 
pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage Databricks Repos.\n\n","properties":{"force":{"type":"boolean","description":"Specify whether settings need to be enforced (i.e., to overwrite a previously set credential for service principals).\n"},"gitEmail":{"type":"string","description":"The email associated with your Git provider user account. Used for authentication with the remote repository and also sets the author \u0026 committer identity for commits.\n"},"gitProvider":{"type":"string","description":"Case-insensitive name of the Git provider. The following values are currently supported (this list may change; consult the [Git Credentials API documentation](https://docs.databricks.com/dev-tools/api/latest/gitcredentials.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`, `azureDevOpsServicesAad`.\n"},"gitUsername":{"type":"string","description":"User name at the Git provider. For most Git providers it is only used to set the Git committer \u0026 author names for commits; however, it may be required for authentication depending on your Git provider / token requirements.\n"},"isDefaultForProvider":{"type":"boolean","description":"Boolean flag specifying whether the credential is the default for the given provider type.\n"},"name":{"type":"string","description":"The name of the Git credential, used for identification and ease of lookup.\n"},"personalAccessToken":{"type":"string","description":"The personal access token used to authenticate to the corresponding Git provider. If no value is provided, it is sourced from the first of the environment variables `GITHUB_TOKEN`, `GITLAB_TOKEN`, or `AZDO_PERSONAL_ACCESS_TOKEN` that has a non-empty value.\n"},"principalId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/GitCredentialProviderConfig:GitCredentialProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"required":["gitProvider","name"],"inputProperties":{"force":{"type":"boolean","description":"Specify whether settings need to be enforced (i.e., to overwrite a previously set credential for service principals).\n"},"gitEmail":{"type":"string","description":"The email associated with your Git provider user account. Used for authentication with the remote repository and also sets the author \u0026 committer identity for commits.\n"},"gitProvider":{"type":"string","description":"Case-insensitive name of the Git provider. The following values are currently supported (this list may change; consult the [Git Credentials API documentation](https://docs.databricks.com/dev-tools/api/latest/gitcredentials.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`, `azureDevOpsServicesAad`.\n"},"gitUsername":{"type":"string","description":"User name at the Git provider. 
For most Git providers it is only used to set the Git committer \u0026 author names for commits; however, it may be required for authentication depending on your Git provider / token requirements.\n"},"isDefaultForProvider":{"type":"boolean","description":"Boolean flag specifying whether the credential is the default for the given provider type.\n"},"name":{"type":"string","description":"The name of the Git credential, used for identification and ease of lookup.\n"},"personalAccessToken":{"type":"string","description":"The personal access token used to authenticate to the corresponding Git provider. If no value is provided, it is sourced from the first of the environment variables `GITHUB_TOKEN`, `GITLAB_TOKEN`, or `AZDO_PERSONAL_ACCESS_TOKEN` that has a non-empty value.\n"},"principalId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/GitCredentialProviderConfig:GitCredentialProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"requiredInputs":["gitProvider"],"stateInputs":{"description":"Input properties used for looking up and filtering GitCredential resources.\n","properties":{"force":{"type":"boolean","description":"Specify whether settings need to be enforced (i.e., to overwrite a previously set credential for service principals).\n"},"gitEmail":{"type":"string","description":"The email associated with your Git provider user account. Used for authentication with the remote repository and also sets the author \u0026 committer identity for commits.\n"},"gitProvider":{"type":"string","description":"Case-insensitive name of the Git provider. The following values are currently supported (this list may change; consult the [Git Credentials API documentation](https://docs.databricks.com/dev-tools/api/latest/gitcredentials.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`, `azureDevOpsServicesAad`.\n"},"gitUsername":{"type":"string","description":"User name at the Git provider. For most Git providers it is only used to set the Git committer \u0026 author names for commits; however, it may be required for authentication depending on your Git provider / token requirements.\n"},"isDefaultForProvider":{"type":"boolean","description":"Boolean flag specifying whether the credential is the default for the given provider type.\n"},"name":{"type":"string","description":"The name of the Git credential, used for identification and ease of lookup.\n"},"personalAccessToken":{"type":"string","description":"The personal access token used to authenticate to the corresponding Git provider. If no value is provided, it is sourced from the first of the environment variables `GITHUB_TOKEN`, `GITLAB_TOKEN`, or `AZDO_PERSONAL_ACCESS_TOKEN` that has a non-empty value.\n"},"principalId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/GitCredentialProviderConfig:GitCredentialProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/globalInitScript:GlobalInitScript":{"description":"This resource allows you to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand databricks_job.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded source code global init script. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances\n"},"enabled":{"type":"boolean","description":"specifies if the script is enabled for execution, or not\n"},"md5":{"type":"string"},"name":{"type":"string","description":"the name of the script.  It should be unique\n"},"position":{"type":"integer","description":"the position of a global init script, where \u003cspan pulumi-lang-nodejs=\"`0`\" pulumi-lang-dotnet=\"`0`\" pulumi-lang-go=\"`0`\" pulumi-lang-python=\"`0`\" pulumi-lang-yaml=\"`0`\" pulumi-lang-java=\"`0`\"\u003e`0`\u003c/span\u003e represents the first global init script to run, \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e is the second global init script to run, and so on. When omitted, the script gets the last position.\n"},"providerConfig":{"$ref":"#/types/databricks:index/GlobalInitScriptProviderConfig:GlobalInitScriptProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to script's source code on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e\n"}},"required":["name","position"],"inputProperties":{"contentBase64":{"type":"string","description":"The base64-encoded source code global init script. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. 
Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances\n"},"enabled":{"type":"boolean","description":"specifies if the script is enabled for execution, or not\n"},"md5":{"type":"string"},"name":{"type":"string","description":"the name of the script.  It should be unique\n"},"position":{"type":"integer","description":"the position of a global init script, where \u003cspan pulumi-lang-nodejs=\"`0`\" pulumi-lang-dotnet=\"`0`\" pulumi-lang-go=\"`0`\" pulumi-lang-python=\"`0`\" pulumi-lang-yaml=\"`0`\" pulumi-lang-java=\"`0`\"\u003e`0`\u003c/span\u003e represents the first global init script to run, \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e is the second global init script to run, and so on. When omitted, the script gets the last position.\n"},"providerConfig":{"$ref":"#/types/databricks:index/GlobalInitScriptProviderConfig:GlobalInitScriptProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to script's source code on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering GlobalInitScript resources.\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded source code global init script. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances\n"},"enabled":{"type":"boolean","description":"specifies if the script is enabled for execution, or not\n"},"md5":{"type":"string"},"name":{"type":"string","description":"the name of the script.  It should be unique\n"},"position":{"type":"integer","description":"the position of a global init script, where \u003cspan pulumi-lang-nodejs=\"`0`\" pulumi-lang-dotnet=\"`0`\" pulumi-lang-go=\"`0`\" pulumi-lang-python=\"`0`\" pulumi-lang-yaml=\"`0`\" pulumi-lang-java=\"`0`\"\u003e`0`\u003c/span\u003e represents the first global init script to run, \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e is the second global init script to run, and so on. 
When omitted, the script gets the last position.\n"},"providerConfig":{"$ref":"#/types/databricks:index/GlobalInitScriptProviderConfig:GlobalInitScriptProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to script's source code on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e\n"}},"type":"object"}},"databricks:index/grant:Grant":{"description":"\u003e This article refers to the privileges and inheritance model in Privilege Model version 1.0. If you created your metastore during the public preview (before August 25, 2022), you can upgrade to Privilege Model version 1.0 following [Upgrade to privilege inheritance](https://docs.databricks.com/data-governance/unity-catalog/hive-metastore.html)\n\n\u003e Most of Unity Catalog APIs are only accessible via **workspace-level APIs**. This design may change in the future. Account-level principal grants can be assigned with any valid workspace as the Unity Catalog is decoupled from specific workspaces. More information in [the official documentation](https://docs.databricks.com/data-governance/unity-catalog/index.html).\n\n\u003e This resource is _authoritative_ for grants on securables for a given _singular_ principal. Configuring this resource for a securable will **OVERWRITE** any existing grants for the principal and changes made outside of Pulumi will be reset. Use\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003efor authoritative control of all grants on a securable.\n\nIn Unity Catalog all users initially have no access to data. Only Metastore Admins can create objects and can grant/revoke access on individual objects to users and groups. Every securable object in Unity Catalog has an owner. The owner can be any account-level user or group, called principals in general. The principal that creates an object becomes its owner. Owners receive `ALL_PRIVILEGES` on the securable object (e.g., `SELECT` and `MODIFY` on a table), as well as the permission to grant privileges to other principals.\n\nSecurable objects are hierarchical and privileges are inherited downward. The highest level object that privileges are inherited from is the catalog. This means that granting a privilege on a catalog or schema automatically grants the privilege to all current and future objects within the catalog or schema. 
Privileges that are granted on a metastore are not inherited.\n\nEvery \u003cspan pulumi-lang-nodejs=\"`databricks.Grant`\" pulumi-lang-dotnet=\"`databricks.Grant`\" pulumi-lang-go=\"`Grant`\" pulumi-lang-python=\"`Grant`\" pulumi-lang-yaml=\"`databricks.Grant`\" pulumi-lang-java=\"`databricks.Grant`\"\u003e`databricks.Grant`\u003c/span\u003e resource must have exactly one securable identifier and the following arguments:\n\n- \u003cspan pulumi-lang-nodejs=\"`principal`\" pulumi-lang-dotnet=\"`Principal`\" pulumi-lang-go=\"`principal`\" pulumi-lang-python=\"`principal`\" pulumi-lang-yaml=\"`principal`\" pulumi-lang-java=\"`principal`\"\u003e`principal`\u003c/span\u003e - User name, group name or service principal application ID.\n- \u003cspan pulumi-lang-nodejs=\"`privileges`\" pulumi-lang-dotnet=\"`Privileges`\" pulumi-lang-go=\"`privileges`\" pulumi-lang-python=\"`privileges`\" pulumi-lang-yaml=\"`privileges`\" pulumi-lang-java=\"`privileges`\"\u003e`privileges`\u003c/span\u003e - One or more privileges that are specific to a securable type.\n- \u003cspan pulumi-lang-nodejs=\"`providerConfig`\" pulumi-lang-dotnet=\"`ProviderConfig`\" pulumi-lang-go=\"`providerConfig`\" pulumi-lang-python=\"`provider_config`\" pulumi-lang-yaml=\"`providerConfig`\" pulumi-lang-java=\"`providerConfig`\"\u003e`provider_config`\u003c/span\u003e - (Optional) Configure the provider for management through account provider. This block consists of the following fields:\n  - \u003cspan pulumi-lang-nodejs=\"`workspaceId`\" pulumi-lang-dotnet=\"`WorkspaceId`\" pulumi-lang-go=\"`workspaceId`\" pulumi-lang-python=\"`workspace_id`\" pulumi-lang-yaml=\"`workspaceId`\" pulumi-lang-java=\"`workspaceId`\"\u003e`workspace_id`\u003c/span\u003e - (Required) Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n\nFor the latest list of privilege types that apply to each securable object in Unity Catalog, please refer to the [official documentation](https://docs.databricks.com/en/data-governance/unity-catalog/manage-privileges/privileges.html#privilege-types-by-securable-object-in-unity-catalog)\n\nUnlike the [SQL specification](https://docs.databricks.com/sql/language-manual/sql-ref-privileges.html#privilege-types), all privileges to be written with underscore instead of space, e.g. 
`CREATE_TABLE` and not `CREATE TABLE`.\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003efor the list of privilege types that apply to each securable object.\n\n## Metastore grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eMetastore grants for the list of privileges that apply to Metastores.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandboxDataEngineers = new databricks.Grant(\"sandbox_data_engineers\", {\n    metastore: \"metastore_id\",\n    principal: \"Data Engineers\",\n    privileges: [\n        \"CREATE_CATALOG\",\n        \"CREATE_EXTERNAL_LOCATION\",\n    ],\n});\nconst sandboxDataSharer = new databricks.Grant(\"sandbox_data_sharer\", {\n    metastore: \"metastore_id\",\n    principal: \"Data Sharer\",\n    privileges: [\n        \"CREATE_RECIPIENT\",\n        \"CREATE_SHARE\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox_data_engineers = databricks.Grant(\"sandbox_data_engineers\",\n    metastore=\"metastore_id\",\n    principal=\"Data Engineers\",\n    privileges=[\n        \"CREATE_CATALOG\",\n        \"CREATE_EXTERNAL_LOCATION\",\n    ])\nsandbox_data_sharer = databricks.Grant(\"sandbox_data_sharer\",\n    metastore=\"metastore_id\",\n    principal=\"Data Sharer\",\n    privileges=[\n        \"CREATE_RECIPIENT\",\n        \"CREATE_SHARE\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandboxDataEngineers = new Databricks.Grant(\"sandbox_data_engineers\", new()\n    {\n        Metastore = \"metastore_id\",\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"CREATE_CATALOG\",\n            \"CREATE_EXTERNAL_LOCATION\",\n        },\n    });\n\n    var sandboxDataSharer = new Databricks.Grant(\"sandbox_data_sharer\", new()\n    {\n        Metastore = \"metastore_id\",\n        Principal = \"Data Sharer\",\n        Privileges = new[]\n        {\n            \"CREATE_RECIPIENT\",\n            \"CREATE_SHARE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrant(ctx, \"sandbox_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tMetastore: pulumi.String(\"metastore_id\"),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"CREATE_CATALOG\"),\n\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_LOCATION\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"sandbox_data_sharer\", \u0026databricks.GrantArgs{\n\t\t\tMetastore: pulumi.String(\"metastore_id\"),\n\t\t\tPrincipal: pulumi.String(\"Data Sharer\"),\n\t\t\tPrivileges: 
pulumi.StringArray{\n\t\t\t\tpulumi.String(\"CREATE_RECIPIENT\"),\n\t\t\t\tpulumi.String(\"CREATE_SHARE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandboxDataEngineers = new Grant(\"sandboxDataEngineers\", GrantArgs.builder()\n            .metastore(\"metastore_id\")\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"CREATE_CATALOG\",\n                \"CREATE_EXTERNAL_LOCATION\")\n            .build());\n\n        var sandboxDataSharer = new Grant(\"sandboxDataSharer\", GrantArgs.builder()\n            .metastore(\"metastore_id\")\n            .principal(\"Data Sharer\")\n            .privileges(            \n                \"CREATE_RECIPIENT\",\n                \"CREATE_SHARE\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandboxDataEngineers:\n    type: databricks:Grant\n    name: sandbox_data_engineers\n    properties:\n      metastore: metastore_id\n      principal: Data Engineers\n      privileges:\n        - CREATE_CATALOG\n        - CREATE_EXTERNAL_LOCATION\n  sandboxDataSharer:\n    type: databricks:Grant\n    name: sandbox_data_sharer\n    properties:\n      metastore: metastore_id\n      principal: Data Sharer\n      privileges:\n        - CREATE_RECIPIENT\n        - CREATE_SHARE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Catalog grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eCatalog grants for the list of privileges that apply to Catalogs.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst sandboxDataScientists = new databricks.Grant(\"sandbox_data_scientists\", {\n    catalog: sandbox.name,\n    principal: \"Data Scientists\",\n    privileges: [\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"CREATE_TABLE\",\n        \"SELECT\",\n    ],\n});\nconst sandboxDataEngineers = new databricks.Grant(\"sandbox_data_engineers\", {\n    catalog: sandbox.name,\n    principal: \"Data Engineers\",\n    privileges: [\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"CREATE_SCHEMA\",\n        \"CREATE_TABLE\",\n        \"MODIFY\",\n    ],\n});\nconst sandboxDataAnalyst = new databricks.Grant(\"sandbox_data_analyst\", {\n    catalog: sandbox.name,\n    principal: \"Data Analyst\",\n    privileges: [\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"SELECT\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = 
databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nsandbox_data_scientists = databricks.Grant(\"sandbox_data_scientists\",\n    catalog=sandbox.name,\n    principal=\"Data Scientists\",\n    privileges=[\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"CREATE_TABLE\",\n        \"SELECT\",\n    ])\nsandbox_data_engineers = databricks.Grant(\"sandbox_data_engineers\",\n    catalog=sandbox.name,\n    principal=\"Data Engineers\",\n    privileges=[\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"CREATE_SCHEMA\",\n        \"CREATE_TABLE\",\n        \"MODIFY\",\n    ])\nsandbox_data_analyst = databricks.Grant(\"sandbox_data_analyst\",\n    catalog=sandbox.name,\n    principal=\"Data Analyst\",\n    privileges=[\n        \"USE_CATALOG\",\n        \"USE_SCHEMA\",\n        \"SELECT\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var sandboxDataScientists = new Databricks.Grant(\"sandbox_data_scientists\", new()\n    {\n        Catalog = sandbox.Name,\n        Principal = \"Data Scientists\",\n        Privileges = new[]\n        {\n            \"USE_CATALOG\",\n            \"USE_SCHEMA\",\n            \"CREATE_TABLE\",\n            \"SELECT\",\n        },\n    });\n\n    var sandboxDataEngineers = new Databricks.Grant(\"sandbox_data_engineers\", new()\n    {\n        Catalog = sandbox.Name,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"USE_CATALOG\",\n            \"USE_SCHEMA\",\n            \"CREATE_SCHEMA\",\n            \"CREATE_TABLE\",\n            \"MODIFY\",\n        },\n    });\n\n    var sandboxDataAnalyst = new Databricks.Grant(\"sandbox_data_analyst\", new()\n    {\n        Catalog = sandbox.Name,\n        Principal = \"Data Analyst\",\n        Privileges = new[]\n        {\n            \"USE_CATALOG\",\n            \"USE_SCHEMA\",\n            \"SELECT\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"sandbox_data_scientists\", \u0026databricks.GrantArgs{\n\t\t\tCatalog:   sandbox.Name,\n\t\t\tPrincipal: pulumi.String(\"Data Scientists\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"CREATE_TABLE\"),\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"sandbox_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tCatalog:   
sandbox.Name,\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"CREATE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"CREATE_TABLE\"),\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"sandbox_data_analyst\", \u0026databricks.GrantArgs{\n\t\t\tCatalog:   sandbox.Name,\n\t\t\tPrincipal: pulumi.String(\"Data Analyst\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var sandboxDataScientists = new Grant(\"sandboxDataScientists\", GrantArgs.builder()\n            .catalog(sandbox.name())\n            .principal(\"Data Scientists\")\n            .privileges(            \n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"SELECT\")\n            .build());\n\n        var sandboxDataEngineers = new Grant(\"sandboxDataEngineers\", GrantArgs.builder()\n            .catalog(sandbox.name())\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"MODIFY\")\n            .build());\n\n        var sandboxDataAnalyst = new Grant(\"sandboxDataAnalyst\", GrantArgs.builder()\n            .catalog(sandbox.name())\n            .principal(\"Data Analyst\")\n            .privileges(            \n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"SELECT\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  sandboxDataScientists:\n    type: databricks:Grant\n    name: sandbox_data_scientists\n    properties:\n      catalog: ${sandbox.name}\n      principal: Data Scientists\n      privileges:\n        - USE_CATALOG\n        - USE_SCHEMA\n        - CREATE_TABLE\n        - SELECT\n  sandboxDataEngineers:\n    type: databricks:Grant\n    name: sandbox_data_engineers\n    properties:\n      catalog: ${sandbox.name}\n      principal: Data Engineers\n      privileges:\n        - USE_CATALOG\n        - USE_SCHEMA\n        - CREATE_SCHEMA\n        - CREATE_TABLE\n        - MODIFY\n  
sandboxDataAnalyst:\n    type: databricks:Grant\n    name: sandbox_data_analyst\n    properties:\n      catalog: ${sandbox.name}\n      principal: Data Analyst\n      privileges:\n        - USE_CATALOG\n        - USE_SCHEMA\n        - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Schema grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eSchema grants for the list of privileges that apply to Schemas.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this schema is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst thingsGrant = new databricks.Grant(\"things\", {\n    schema: things.id,\n    principal: \"Data Engineers\",\n    privileges: [\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox[\"id\"],\n    name=\"things\",\n    comment=\"this schema is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nthings_grant = databricks.Grant(\"things\",\n    schema=things.id,\n    principal=\"Data Engineers\",\n    privileges=[\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this schema is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var thingsGrant = new Databricks.Grant(\"things\", new()\n    {\n        Schema = things.Id,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: pulumi.Any(sandbox.Id),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this schema is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"things\", \u0026databricks.GrantArgs{\n\t\t\tSchema:    things.ID(),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this schema is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var thingsGrant = new Grant(\"thingsGrant\", GrantArgs.builder()\n            .schema(things.id())\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"USE_SCHEMA\",\n                \"MODIFY\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this schema is managed by terraform\n      properties:\n        kind: various\n  thingsGrant:\n    type: databricks:Grant\n    name: things\n    properties:\n      schema: ${things.id}\n      principal: Data Engineers\n      privileges:\n        - USE_SCHEMA\n        - MODIFY\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Table grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eTable grants for the list of privileges that apply to Tables.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customersDataEngineers = new databricks.Grant(\"customers_data_engineers\", {\n    table: \"main.reporting.customers\",\n    principal: \"Data Engineers\",\n    privileges: [\n        \"MODIFY\",\n        \"SELECT\",\n    ],\n});\nconst customersDataAnalysts = new databricks.Grant(\"customers_data_analysts\", {\n    table: \"main.reporting.customers\",\n    principal: \"Data Analysts\",\n    privileges: [\"SELECT\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers_data_engineers = databricks.Grant(\"customers_data_engineers\",\n    table=\"main.reporting.customers\",\n    principal=\"Data Engineers\",\n    privileges=[\n        \"MODIFY\",\n        \"SELECT\",\n    ])\ncustomers_data_analysts = databricks.Grant(\"customers_data_analysts\",\n    table=\"main.reporting.customers\",\n    principal=\"Data Analysts\",\n    privileges=[\"SELECT\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customersDataEngineers = new Databricks.Grant(\"customers_data_engineers\", new()\n    {\n        Table = \"main.reporting.customers\",\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"MODIFY\",\n            \"SELECT\",\n        },\n    });\n\n    var customersDataAnalysts = new Databricks.Grant(\"customers_data_analysts\", new()\n    {\n        Table = 
\"main.reporting.customers\",\n        Principal = \"Data Analysts\",\n        Privileges = new[]\n        {\n            \"SELECT\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrant(ctx, \"customers_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tTable:     pulumi.String(\"main.reporting.customers\"),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"customers_data_analysts\", \u0026databricks.GrantArgs{\n\t\t\tTable:     pulumi.String(\"main.reporting.customers\"),\n\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customersDataEngineers = new Grant(\"customersDataEngineers\", GrantArgs.builder()\n            .table(\"main.reporting.customers\")\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"MODIFY\",\n                \"SELECT\")\n            .build());\n\n        var customersDataAnalysts = new Grant(\"customersDataAnalysts\", GrantArgs.builder()\n            .table(\"main.reporting.customers\")\n            .principal(\"Data Analysts\")\n            .privileges(\"SELECT\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customersDataEngineers:\n    type: databricks:Grant\n    name: customers_data_engineers\n    properties:\n      table: main.reporting.customers\n      principal: Data Engineers\n      privileges:\n        - MODIFY\n        - SELECT\n  customersDataAnalysts:\n    type: databricks:Grant\n    name: customers_data_analysts\n    properties:\n      table: main.reporting.customers\n      principal: Data Analysts\n      privileges:\n        - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nYou can also apply grants dynamically with\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003edata resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const things = await databricks.getTables({\n        catalogName: \"sandbox\",\n        schemaName: \"things\",\n    });\n    const thingsGrant: databricks.Grant[] = [];\n    for (const range of things.ids.map((v, k) =\u003e ({key: k, value: 
v}))) {\n        thingsGrant.push(new databricks.Grant(`things-${range.key}`, {\n            table: range.value,\n            principal: \"sensitive\",\n            privileges: [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.get_tables(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nthings_grant = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(things.ids)]:\n    things_grant.append(databricks.Grant(f\"things-{range['key']}\",\n        table=range[\"value\"],\n        principal=\"sensitive\",\n        privileges=[\n            \"SELECT\",\n            \"MODIFY\",\n        ]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var things = await Databricks.GetTables.InvokeAsync(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var thingsGrant = new List\u003cDatabricks.Grant\u003e();\n    // Pair each table id with its index so every Grant gets a unique name.\n    foreach (var range in things.Ids.Select((v, k) =\u003e new KeyValuePair\u003cint, string\u003e(k, v)))\n    {\n        thingsGrant.Add(new Databricks.Grant($\"things-{range.Key}\", new()\n        {\n            Table = range.Value,\n            Principal = \"sensitive\",\n            Privileges = new[]\n            {\n                \"SELECT\",\n                \"MODIFY\",\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.GetTables(ctx, \u0026databricks.GetTablesArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t\tSchemaName:  \"things\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar thingsGrant []*databricks.Grant\n\t\tfor key0, val0 := range things.Ids {\n\t\t\t__res, err := databricks.NewGrant(ctx, fmt.Sprintf(\"things-%v\", key0), \u0026databricks.GrantArgs{\n\t\t\t\tTable:     pulumi.String(val0),\n\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tthingsGrant = append(thingsGrant, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTablesArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var things = DatabricksFunctions.getTables(GetTablesArgs.builder()\n            .catalogName(\"sandbox\")\n            .schemaName(\"things\")\n            .build());\n\n        final var thingsGrant = things.applyValue(getTablesResult -\u003e {\n            final var resources = new ArrayList\u003cGrant\u003e();\n            for (var range : 
KeyedValue.of(getTablesResult.ids())) {\n                var resource = new Grant(\"thingsGrant-\" + range.key(), GrantArgs.builder()\n                    .table(range.value())\n                    .principal(\"sensitive\")\n                    .privileges(                    \n                        \"SELECT\",\n                        \"MODIFY\")\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  thingsGrant:\n    type: databricks:Grant\n    name: things\n    properties:\n      table: ${range.value}\n      principal: sensitive\n      privileges:\n        - SELECT\n        - MODIFY\n    options: {}\nvariables:\n  things:\n    fn::invoke:\n      function: databricks:getTables\n      arguments:\n        catalogName: sandbox\n        schemaName: things\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## View grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eView grants for the list of privileges that apply to Views.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customer360 = new databricks.Grant(\"customer360\", {\n    table: \"main.reporting.customer360\",\n    principal: \"Data Analysts\",\n    privileges: [\"SELECT\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomer360 = databricks.Grant(\"customer360\",\n    table=\"main.reporting.customer360\",\n    principal=\"Data Analysts\",\n    privileges=[\"SELECT\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customer360 = new Databricks.Grant(\"customer360\", new()\n    {\n        Table = \"main.reporting.customer360\",\n        Principal = \"Data Analysts\",\n        Privileges = new[]\n        {\n            \"SELECT\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrant(ctx, \"customer360\", \u0026databricks.GrantArgs{\n\t\t\tTable:     pulumi.String(\"main.reporting.customer360\"),\n\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customer360 = new Grant(\"customer360\", GrantArgs.builder()\n            .table(\"main.reporting.customer360\")\n            .principal(\"Data 
Analysts\")\n            .privileges(\"SELECT\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customer360:\n    type: databricks:Grant\n    properties:\n      table: main.reporting.customer360\n      principal: Data Analysts\n      privileges:\n        - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nYou can also apply grants dynamically with\u003cspan pulumi-lang-nodejs=\" databricks.getViews \" pulumi-lang-dotnet=\" databricks.getViews \" pulumi-lang-go=\" getViews \" pulumi-lang-python=\" get_views \" pulumi-lang-yaml=\" databricks.getViews \" pulumi-lang-java=\" databricks.getViews \"\u003e databricks.getViews \u003c/span\u003edata resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const customers = await databricks.getViews({\n        catalogName: \"main\",\n        schemaName: \"customers\",\n    });\n    const customersGrant: databricks.Grant[] = [];\n    for (const range of customers.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        customersGrant.push(new databricks.Grant(`customers-${range.key}`, {\n            table: range.value,\n            principal: \"sensitive\",\n            privileges: [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers = databricks.get_views(catalog_name=\"main\",\n    schema_name=\"customers\")\ncustomers_grant = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(customers.ids)]:\n    customers_grant.append(databricks.Grant(f\"customers-{range['key']}\",\n        table=range[\"value\"],\n        principal=\"sensitive\",\n        privileges=[\n            \"SELECT\",\n            \"MODIFY\",\n        ]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var customers = await Databricks.GetViews.InvokeAsync(new()\n    {\n        CatalogName = \"main\",\n        SchemaName = \"customers\",\n    });\n\n    var customersGrant = new List\u003cDatabricks.Grant\u003e();\n    // Pair each view id with its index so every Grant gets a unique name.\n    foreach (var range in customers.Ids.Select((v, k) =\u003e new KeyValuePair\u003cint, string\u003e(k, v)))\n    {\n        customersGrant.Add(new Databricks.Grant($\"customers-{range.Key}\", new()\n        {\n            Table = range.Value,\n            Principal = \"sensitive\",\n            Privileges = new[]\n            {\n                \"SELECT\",\n                \"MODIFY\",\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcustomers, err := databricks.GetViews(ctx, \u0026databricks.GetViewsArgs{\n\t\t\tCatalogName: \"main\",\n\t\t\tSchemaName:  \"customers\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar customersGrant []*databricks.Grant\n\t\tfor key0, val0 := range customers.Ids {\n\t\t\t__res, err := databricks.NewGrant(ctx, fmt.Sprintf(\"customers-%v\", key0), \u0026databricks.GrantArgs{\n\t\t\t\tTable:     pulumi.String(val0),\n\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\tPrivileges: 
pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcustomersGrant = append(customersGrant, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetViewsArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var customers = DatabricksFunctions.getViews(GetViewsArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(\"customers\")\n            .build());\n\n        final var customersGrant = customers.applyValue(getViewsResult -\u003e {\n            final var resources = new ArrayList\u003cGrant\u003e();\n            for (var range : KeyedValue.of(getViewsResult.ids())) {\n                var resource = new Grant(\"customersGrant-\" + range.key(), GrantArgs.builder()\n                    .table(range.value())\n                    .principal(\"sensitive\")\n                    .privileges(                    \n                        \"SELECT\",\n                        \"MODIFY\")\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  customersGrant:\n    type: databricks:Grant\n    name: customers\n    properties:\n      table: ${range.value}\n      principal: sensitive\n      privileges:\n        - SELECT\n        - MODIFY\n    options: {}\nvariables:\n  customers:\n    fn::invoke:\n      function: databricks:getViews\n      arguments:\n        catalogName: main\n        schemaName: customers\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Volume grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eVolume grants for the list of privileges that apply to Volumes.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Volume(\"this\", {\n    name: \"quickstart_volume\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    volumeType: \"EXTERNAL\",\n    storageLocation: some.url,\n    comment: \"this volume is managed by terraform\",\n});\nconst volume = new databricks.Grant(\"volume\", {\n    volume: _this.id,\n    principal: \"Data Engineers\",\n    privileges: [\"WRITE_VOLUME\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Volume(\"this\",\n    name=\"quickstart_volume\",\n    catalog_name=sandbox[\"name\"],\n    schema_name=things[\"name\"],\n    volume_type=\"EXTERNAL\",\n    storage_location=some[\"url\"],\n    comment=\"this volume is managed by terraform\")\nvolume = 
databricks.Grant(\"volume\",\n    volume=this.id,\n    principal=\"Data Engineers\",\n    privileges=[\"WRITE_VOLUME\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Volume(\"this\", new()\n    {\n        Name = \"quickstart_volume\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        VolumeType = \"EXTERNAL\",\n        StorageLocation = some.Url,\n        Comment = \"this volume is managed by terraform\",\n    });\n\n    var volume = new Databricks.Grant(\"volume\", new()\n    {\n        Volume = @this.Id,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"WRITE_VOLUME\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewVolume(ctx, \"this\", \u0026databricks.VolumeArgs{\n\t\t\tName:            pulumi.String(\"quickstart_volume\"),\n\t\t\tCatalogName:     pulumi.Any(sandbox.Name),\n\t\t\tSchemaName:      pulumi.Any(things.Name),\n\t\t\tVolumeType:      pulumi.String(\"EXTERNAL\"),\n\t\t\tStorageLocation: pulumi.Any(some.Url),\n\t\t\tComment:         pulumi.String(\"this volume is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"volume\", \u0026databricks.GrantArgs{\n\t\t\tVolume:    this.ID(),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"WRITE_VOLUME\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Volume;\nimport com.pulumi.databricks.VolumeArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Volume(\"this\", VolumeArgs.builder()\n            .name(\"quickstart_volume\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .volumeType(\"EXTERNAL\")\n            .storageLocation(some.url())\n            .comment(\"this volume is managed by terraform\")\n            .build());\n\n        var volume = new Grant(\"volume\", GrantArgs.builder()\n            .volume(this_.id())\n            .principal(\"Data Engineers\")\n            .privileges(\"WRITE_VOLUME\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Volume\n    properties:\n      name: quickstart_volume\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      volumeType: EXTERNAL\n      storageLocation: ${some.url}\n      comment: this volume is managed by terraform\n  volume:\n    type: databricks:Grant\n    properties:\n      volume: ${this.id}\n      principal: Data Engineers\n      privileges:\n        - WRITE_VOLUME\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## 
Registered model grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eRegistered model grants for the list of privileges that apply to Registered models.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customersDataEngineers = new databricks.Grant(\"customers_data_engineers\", {\n    model: \"main.reporting.customer_model\",\n    principal: \"Data Engineers\",\n    privileges: [\n        \"APPLY_TAG\",\n        \"EXECUTE\",\n    ],\n});\nconst customersDataAnalysts = new databricks.Grant(\"customers_data_analysts\", {\n    model: \"main.reporting.customer_model\",\n    principal: \"Data Analysts\",\n    privileges: [\"EXECUTE\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers_data_engineers = databricks.Grant(\"customers_data_engineers\",\n    model=\"main.reporting.customer_model\",\n    principal=\"Data Engineers\",\n    privileges=[\n        \"APPLY_TAG\",\n        \"EXECUTE\",\n    ])\ncustomers_data_analysts = databricks.Grant(\"customers_data_analysts\",\n    model=\"main.reporting.customer_model\",\n    principal=\"Data Analysts\",\n    privileges=[\"EXECUTE\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customersDataEngineers = new Databricks.Grant(\"customers_data_engineers\", new()\n    {\n        Model = \"main.reporting.customer_model\",\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"APPLY_TAG\",\n            \"EXECUTE\",\n        },\n    });\n\n    var customersDataAnalysts = new Databricks.Grant(\"customers_data_analysts\", new()\n    {\n        Model = \"main.reporting.customer_model\",\n        Principal = \"Data Analysts\",\n        Privileges = new[]\n        {\n            \"EXECUTE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrant(ctx, \"customers_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tModel:     pulumi.String(\"main.reporting.customer_model\"),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"APPLY_TAG\"),\n\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"customers_data_analysts\", \u0026databricks.GrantArgs{\n\t\t\tModel:     pulumi.String(\"main.reporting.customer_model\"),\n\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport 
java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customersDataEngineers = new Grant(\"customersDataEngineers\", GrantArgs.builder()\n            .model(\"main.reporting.customer_model\")\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"APPLY_TAG\",\n                \"EXECUTE\")\n            .build());\n\n        var customersDataAnalysts = new Grant(\"customersDataAnalysts\", GrantArgs.builder()\n            .model(\"main.reporting.customer_model\")\n            .principal(\"Data Analysts\")\n            .privileges(\"EXECUTE\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customersDataEngineers:\n    type: databricks:Grant\n    name: customers_data_engineers\n    properties:\n      model: main.reporting.customer_model\n      principal: Data Engineers\n      privileges:\n        - APPLY_TAG\n        - EXECUTE\n  customersDataAnalysts:\n    type: databricks:Grant\n    name: customers_data_analysts\n    properties:\n      model: main.reporting.customer_model\n      principal: Data Analysts\n      privileges:\n        - EXECUTE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Function grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eFunction grants for the list of privileges that apply to Functions.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst udfDataEngineers = new databricks.Grant(\"udf_data_engineers\", {\n    \"function\": \"main.reporting.udf\",\n    principal: \"Data Engineers\",\n    privileges: [\"EXECUTE\"],\n});\nconst udfDataAnalysts = new databricks.Grant(\"udf_data_analysts\", {\n    \"function\": \"main.reporting.udf\",\n    principal: \"Data Analysts\",\n    privileges: [\"EXECUTE\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nudf_data_engineers = databricks.Grant(\"udf_data_engineers\",\n    function=\"main.reporting.udf\",\n    principal=\"Data Engineers\",\n    privileges=[\"EXECUTE\"])\nudf_data_analysts = databricks.Grant(\"udf_data_analysts\",\n    function=\"main.reporting.udf\",\n    principal=\"Data Analysts\",\n    privileges=[\"EXECUTE\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var udfDataEngineers = new Databricks.Grant(\"udf_data_engineers\", new()\n    {\n        Function = \"main.reporting.udf\",\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"EXECUTE\",\n        },\n    });\n\n    var udfDataAnalysts = new Databricks.Grant(\"udf_data_analysts\", new()\n    {\n        Function = \"main.reporting.udf\",\n        Principal = \"Data Analysts\",\n        Privileges = new[]\n        {\n            \"EXECUTE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() 
{\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrant(ctx, \"udf_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tFunction:  pulumi.String(\"main.reporting.udf\"),\n\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"udf_data_analysts\", \u0026databricks.GrantArgs{\n\t\t\tFunction:  pulumi.String(\"main.reporting.udf\"),\n\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var udfDataEngineers = new Grant(\"udfDataEngineers\", GrantArgs.builder()\n            .function(\"main.reporting.udf\")\n            .principal(\"Data Engineers\")\n            .privileges(\"EXECUTE\")\n            .build());\n\n        var udfDataAnalysts = new Grant(\"udfDataAnalysts\", GrantArgs.builder()\n            .function(\"main.reporting.udf\")\n            .principal(\"Data Analysts\")\n            .privileges(\"EXECUTE\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  udfDataEngineers:\n    type: databricks:Grant\n    name: udf_data_engineers\n    properties:\n      function: main.reporting.udf\n      principal: Data Engineers\n      privileges:\n        - EXECUTE\n  udfDataAnalysts:\n    type: databricks:Grant\n    name: udf_data_analysts\n    properties:\n      function: main.reporting.udf\n      principal: Data Analysts\n      privileges:\n        - EXECUTE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Service credential grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eService credential grants for the list of privileges that apply to Service credentials.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.Credential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    purpose: \"SERVICE\",\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grant(\"external_creds\", {\n    credential: external.id,\n    principal: \"Data Engineers\",\n    privileges: [\"ACCESS\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.Credential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    purpose=\"SERVICE\",\n    comment=\"Managed by TF\")\nexternal_creds = 
databricks.Grant(\"external_creds\",\n    credential=external.id,\n    principal=\"Data Engineers\",\n    privileges=[\"ACCESS\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.Credential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.CredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Purpose = \"SERVICE\",\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grant(\"external_creds\", new()\n    {\n        Credential = external.Id,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"ACCESS\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewCredential(ctx, \"external\", \u0026databricks.CredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.CredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tPurpose: pulumi.String(\"SERVICE\"),\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"external_creds\", \u0026databricks.GrantArgs{\n\t\t\tCredential: external.ID(),\n\t\t\tPrincipal:  pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"ACCESS\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Credential;\nimport com.pulumi.databricks.CredentialArgs;\nimport com.pulumi.databricks.inputs.CredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new Credential(\"external\", CredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(CredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .purpose(\"SERVICE\")\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grant(\"externalCreds\", GrantArgs.builder()\n            .credential(external.id())\n            .principal(\"Data Engineers\")\n            .privileges(\"ACCESS\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:Credential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      purpose: SERVICE\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grant\n    name: external_creds\n    properties:\n      credential: ${external.id}\n      principal: Data Engineers\n      
privileges:\n        - ACCESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Storage credential grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eStorage credential grants for the list of privileges that apply to Storage credentials.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grant(\"external_creds\", {\n    storageCredential: external.id,\n    principal: \"Data Engineers\",\n    privileges: [\"CREATE_EXTERNAL_TABLE\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.StorageCredential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    comment=\"Managed by TF\")\nexternal_creds = databricks.Grant(\"external_creds\",\n    storage_credential=external.id,\n    principal=\"Data Engineers\",\n    privileges=[\"CREATE_EXTERNAL_TABLE\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.StorageCredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grant(\"external_creds\", new()\n    {\n        StorageCredential = external.Id,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"CREATE_EXTERNAL_TABLE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.StorageCredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"external_creds\", \u0026databricks.GrantArgs{\n\t\t\tStorageCredential: external.ID(),\n\t\t\tPrincipal:         pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAwsIamRoleArgs;\nimport 
com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(StorageCredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grant(\"externalCreds\", GrantArgs.builder()\n            .storageCredential(external.id())\n            .principal(\"Data Engineers\")\n            .privileges(\"CREATE_EXTERNAL_TABLE\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grant\n    name: external_creds\n    properties:\n      storageCredential: ${external.id}\n      principal: Data Engineers\n      privileges:\n        - CREATE_EXTERNAL_TABLE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## External location grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eExternal location grants for the list of privileges that apply to External locations.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst some = new databricks.ExternalLocation(\"some\", {\n    name: \"external\",\n    url: `s3://${externalAwsS3Bucket.id}/some`,\n    credentialName: external.id,\n    comment: \"Managed by TF\",\n});\nconst someDataEngineers = new databricks.Grant(\"some_data_engineers\", {\n    externalLocation: some.id,\n    principal: \"Data Engineers\",\n    privileges: [\n        \"CREATE_EXTERNAL_TABLE\",\n        \"READ_FILES\",\n    ],\n});\nconst someServicePrincipal = new databricks.Grant(\"some_service_principal\", {\n    externalLocation: some.id,\n    principal: mySp.applicationId,\n    privileges: [\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ],\n});\nconst someGroup = new databricks.Grant(\"some_group\", {\n    externalLocation: some.id,\n    principal: myGroup.displayName,\n    privileges: [\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ],\n});\nconst someUser = new databricks.Grant(\"some_user\", {\n    externalLocation: some.id,\n    principal: myUser.userName,\n    privileges: [\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsome = databricks.ExternalLocation(\"some\",\n    name=\"external\",\n    url=f\"s3://{external_aws_s3_bucket['id']}/some\",\n    credential_name=external[\"id\"],\n    comment=\"Managed by TF\")\nsome_data_engineers = databricks.Grant(\"some_data_engineers\",\n    external_location=some.id,\n    principal=\"Data Engineers\",\n    privileges=[\n        
\"CREATE_EXTERNAL_TABLE\",\n        \"READ_FILES\",\n    ])\nsome_service_principal = databricks.Grant(\"some_service_principal\",\n    external_location=some.id,\n    principal=my_sp[\"applicationId\"],\n    privileges=[\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ])\nsome_group = databricks.Grant(\"some_group\",\n    external_location=some.id,\n    principal=my_group[\"displayName\"],\n    privileges=[\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ])\nsome_user = databricks.Grant(\"some_user\",\n    external_location=some.id,\n    principal=my_user[\"userName\"],\n    privileges=[\n        \"USE_SCHEMA\",\n        \"MODIFY\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"external\",\n        Url = $\"s3://{externalAwsS3Bucket.Id}/some\",\n        CredentialName = external.Id,\n        Comment = \"Managed by TF\",\n    });\n\n    var someDataEngineers = new Databricks.Grant(\"some_data_engineers\", new()\n    {\n        ExternalLocation = some.Id,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"CREATE_EXTERNAL_TABLE\",\n            \"READ_FILES\",\n        },\n    });\n\n    var someServicePrincipal = new Databricks.Grant(\"some_service_principal\", new()\n    {\n        ExternalLocation = some.Id,\n        Principal = mySp.ApplicationId,\n        Privileges = new[]\n        {\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        },\n    });\n\n    var someGroup = new Databricks.Grant(\"some_group\", new()\n    {\n        ExternalLocation = some.Id,\n        Principal = myGroup.DisplayName,\n        Privileges = new[]\n        {\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        },\n    });\n\n    var someUser = new Databricks.Grant(\"some_user\", new()\n    {\n        ExternalLocation = some.Id,\n        Principal = myUser.UserName,\n        Privileges = new[]\n        {\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", \u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"external\"),\n\t\t\tUrl:            pulumi.Sprintf(\"s3://%v/some\", externalAwsS3Bucket.Id),\n\t\t\tCredentialName: pulumi.Any(external.Id),\n\t\t\tComment:        pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some_data_engineers\", \u0026databricks.GrantArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tPrincipal:        pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some_service_principal\", \u0026databricks.GrantArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tPrincipal:        pulumi.Any(mySp.ApplicationId),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some_group\", \u0026databricks.GrantArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tPrincipal:        pulumi.Any(myGroup.DisplayName),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some_user\", \u0026databricks.GrantArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tPrincipal:        pulumi.Any(myUser.UserName),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"external\")\n            .url(String.format(\"s3://%s/some\", externalAwsS3Bucket.id()))\n            .credentialName(external.id())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var someDataEngineers = new Grant(\"someDataEngineers\", GrantArgs.builder()\n            .externalLocation(some.id())\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\")\n            .build());\n\n        var someServicePrincipal = new Grant(\"someServicePrincipal\", GrantArgs.builder()\n            .externalLocation(some.id())\n            .principal(mySp.applicationId())\n            .privileges(            \n                \"USE_SCHEMA\",\n                \"MODIFY\")\n            .build());\n\n        var someGroup = new Grant(\"someGroup\", GrantArgs.builder()\n            .externalLocation(some.id())\n            .principal(myGroup.displayName())\n            .privileges(            \n                \"USE_SCHEMA\",\n                \"MODIFY\")\n            .build());\n\n        var someUser = new Grant(\"someUser\", GrantArgs.builder()\n            .externalLocation(some.id())\n            .principal(myUser.userName())\n            .privileges(            \n                \"USE_SCHEMA\",\n                \"MODIFY\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: external\n      url: s3://${externalAwsS3Bucket.id}/some\n      credentialName: ${external.id}\n      comment: Managed by TF\n  someDataEngineers:\n    type: databricks:Grant\n    name: some_data_engineers\n    properties:\n      externalLocation: ${some.id}\n      principal: Data Engineers\n      privileges:\n        - CREATE_EXTERNAL_TABLE\n        - READ_FILES\n  someServicePrincipal:\n    type: databricks:Grant\n    name: some_service_principal\n    properties:\n      externalLocation: ${some.id}\n      principal: ${mySp.applicationId}\n      privileges:\n        - USE_SCHEMA\n    
    - MODIFY\n  someGroup:\n    type: databricks:Grant\n    name: some_group\n    properties:\n      externalLocation: ${some.id}\n      principal: ${myGroup.displayName}\n      privileges:\n        - USE_SCHEMA\n        - MODIFY\n  someUser:\n    type: databricks:Grant\n    name: some_user\n    properties:\n      externalLocation: ${some.id}\n      principal: ${myUser.userName}\n      privileges:\n        - USE_SCHEMA\n        - MODIFY\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Connection grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eConnection grants for the list of privileges that apply to Connections.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst mysql = new databricks.Connection(\"mysql\", {\n    name: \"mysql_connection\",\n    connectionType: \"MYSQL\",\n    comment: \"this is a connection to mysql db\",\n    options: {\n        host: \"test.mysql.database.azure.com\",\n        port: \"3306\",\n        user: \"user\",\n        password: \"password\",\n    },\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst some = new databricks.Grant(\"some\", {\n    foreignConnection: mysql.name,\n    principal: \"Data Engineers\",\n    privileges: [\n        \"CREATE_FOREIGN_CATALOG\",\n        \"USE_CONNECTION\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmysql = databricks.Connection(\"mysql\",\n    name=\"mysql_connection\",\n    connection_type=\"MYSQL\",\n    comment=\"this is a connection to mysql db\",\n    options={\n        \"host\": \"test.mysql.database.azure.com\",\n        \"port\": \"3306\",\n        \"user\": \"user\",\n        \"password\": \"password\",\n    },\n    properties={\n        \"purpose\": \"testing\",\n    })\nsome = databricks.Grant(\"some\",\n    foreign_connection=mysql.name,\n    principal=\"Data Engineers\",\n    privileges=[\n        \"CREATE_FOREIGN_CATALOG\",\n        \"USE_CONNECTION\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var mysql = new Databricks.Connection(\"mysql\", new()\n    {\n        Name = \"mysql_connection\",\n        ConnectionType = \"MYSQL\",\n        Comment = \"this is a connection to mysql db\",\n        Options = \n        {\n            { \"host\", \"test.mysql.database.azure.com\" },\n            { \"port\", \"3306\" },\n            { \"user\", \"user\" },\n            { \"password\", \"password\" },\n        },\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var some = new Databricks.Grant(\"some\", new()\n    {\n        ForeignConnection = mysql.Name,\n        Principal = \"Data Engineers\",\n        Privileges = new[]\n        {\n            \"CREATE_FOREIGN_CATALOG\",\n            \"USE_CONNECTION\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmysql, err := databricks.NewConnection(ctx, 
\"mysql\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"mysql_connection\"),\n\t\t\tConnectionType: pulumi.String(\"MYSQL\"),\n\t\t\tComment:        pulumi.String(\"this is a connection to mysql db\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"host\":     pulumi.String(\"test.mysql.database.azure.com\"),\n\t\t\t\t\"port\":     pulumi.String(\"3306\"),\n\t\t\t\t\"user\":     pulumi.String(\"user\"),\n\t\t\t\t\"password\": pulumi.String(\"password\"),\n\t\t\t},\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some\", \u0026databricks.GrantArgs{\n\t\t\tForeignConnection: mysql.Name,\n\t\t\tPrincipal:         pulumi.String(\"Data Engineers\"),\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"CREATE_FOREIGN_CATALOG\"),\n\t\t\t\tpulumi.String(\"USE_CONNECTION\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var mysql = new Connection(\"mysql\", ConnectionArgs.builder()\n            .name(\"mysql_connection\")\n            .connectionType(\"MYSQL\")\n            .comment(\"this is a connection to mysql db\")\n            .options(Map.ofEntries(\n                Map.entry(\"host\", \"test.mysql.database.azure.com\"),\n                Map.entry(\"port\", \"3306\"),\n                Map.entry(\"user\", \"user\"),\n                Map.entry(\"password\", \"password\")\n            ))\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var some = new Grant(\"some\", GrantArgs.builder()\n            .foreignConnection(mysql.name())\n            .principal(\"Data Engineers\")\n            .privileges(            \n                \"CREATE_FOREIGN_CATALOG\",\n                \"USE_CONNECTION\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  mysql:\n    type: databricks:Connection\n    properties:\n      name: mysql_connection\n      connectionType: MYSQL\n      comment: this is a connection to mysql db\n      options:\n        host: test.mysql.database.azure.com\n        port: '3306'\n        user: user\n        password: password\n      properties:\n        purpose: testing\n  some:\n    type: databricks:Grant\n    properties:\n      foreignConnection: ${mysql.name}\n      principal: Data Engineers\n      privileges:\n        - CREATE_FOREIGN_CATALOG\n        - USE_CONNECTION\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Delta Sharing share grants\n\nSee\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eDelta Sharing share grants for the list of privileges that apply to Delta 
Sharing shares.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst some = new databricks.Share(\"some\", {name: \"my_share\"});\nconst someRecipient = new databricks.Recipient(\"some\", {name: \"my_recipient\"});\nconst someGrant = new databricks.Grant(\"some\", {\n    share: some.name,\n    principal: someRecipient.name,\n    privileges: [\"SELECT\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsome = databricks.Share(\"some\", name=\"my_share\")\nsome_recipient = databricks.Recipient(\"some\", name=\"my_recipient\")\nsome_grant = databricks.Grant(\"some\",\n    share=some.name,\n    principal=some_recipient.name,\n    privileges=[\"SELECT\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var some = new Databricks.Share(\"some\", new()\n    {\n        Name = \"my_share\",\n    });\n\n    var someRecipient = new Databricks.Recipient(\"some\", new()\n    {\n        Name = \"my_recipient\",\n    });\n\n    var someGrant = new Databricks.Grant(\"some\", new()\n    {\n        Share = some.Name,\n        Principal = someRecipient.Name,\n        Privileges = new[]\n        {\n            \"SELECT\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsome, err := databricks.NewShare(ctx, \"some\", \u0026databricks.ShareArgs{\n\t\t\tName: pulumi.String(\"my_share\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsomeRecipient, err := databricks.NewRecipient(ctx, \"some\", \u0026databricks.RecipientArgs{\n\t\t\tName: pulumi.String(\"my_recipient\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrant(ctx, \"some\", \u0026databricks.GrantArgs{\n\t\t\tShare:     some.Name,\n\t\t\tPrincipal: someRecipient.Name,\n\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Share;\nimport com.pulumi.databricks.ShareArgs;\nimport com.pulumi.databricks.Recipient;\nimport com.pulumi.databricks.RecipientArgs;\nimport com.pulumi.databricks.Grant;\nimport com.pulumi.databricks.GrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var some = new Share(\"some\", ShareArgs.builder()\n            .name(\"my_share\")\n            .build());\n\n        var someRecipient = new Recipient(\"someRecipient\", RecipientArgs.builder()\n            .name(\"my_recipient\")\n            .build());\n\n        var someGrant = new Grant(\"someGrant\", GrantArgs.builder()\n            .share(some.name())\n            .principal(someRecipient.name())\n            .privileges(\"SELECT\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  some:\n    type: 
databricks:Share\n    properties:\n      name: my_share\n  someRecipient:\n    type: databricks:Recipient\n    name: some\n    properties:\n      name: my_recipient\n  someGrant:\n    type: databricks:Grant\n    name: some\n    properties:\n      share: ${some.name}\n      principal: ${someRecipient.name}\n      privileges:\n        - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Other access control\n\nYou can control Databricks General Permissions through\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eresource.\n\n","properties":{"catalog":{"type":"string"},"credential":{"type":"string"},"externalLocation":{"type":"string"},"foreignConnection":{"type":"string"},"function":{"type":"string"},"metastore":{"type":"string"},"model":{"type":"string"},"pipeline":{"type":"string"},"principal":{"type":"string"},"privileges":{"type":"array","items":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/GrantProviderConfig:GrantProviderConfig"},"recipient":{"type":"string"},"schema":{"type":"string"},"share":{"type":"string"},"storageCredential":{"type":"string"},"table":{"type":"string"},"volume":{"type":"string"}},"required":["principal","privileges"],"inputProperties":{"catalog":{"type":"string","willReplaceOnChanges":true},"credential":{"type":"string","willReplaceOnChanges":true},"externalLocation":{"type":"string","willReplaceOnChanges":true},"foreignConnection":{"type":"string","willReplaceOnChanges":true},"function":{"type":"string","willReplaceOnChanges":true},"metastore":{"type":"string","willReplaceOnChanges":true},"model":{"type":"string","willReplaceOnChanges":true},"pipeline":{"type":"string","willReplaceOnChanges":true},"principal":{"type":"string","willReplaceOnChanges":true},"privileges":{"type":"array","items":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/GrantProviderConfig:GrantProviderConfig"},"recipient":{"type":"string","willReplaceOnChanges":true},"schema":{"type":"string","willReplaceOnChanges":true},"share":{"type":"string","willReplaceOnChanges":true},"storageCredential":{"type":"string","willReplaceOnChanges":true},"table":{"type":"string","willReplaceOnChanges":true},"volume":{"type":"string","willReplaceOnChanges":true}},"requiredInputs":["principal","privileges"],"stateInputs":{"description":"Input properties used for looking up and filtering Grant 
resources.\n","properties":{"catalog":{"type":"string","willReplaceOnChanges":true},"credential":{"type":"string","willReplaceOnChanges":true},"externalLocation":{"type":"string","willReplaceOnChanges":true},"foreignConnection":{"type":"string","willReplaceOnChanges":true},"function":{"type":"string","willReplaceOnChanges":true},"metastore":{"type":"string","willReplaceOnChanges":true},"model":{"type":"string","willReplaceOnChanges":true},"pipeline":{"type":"string","willReplaceOnChanges":true},"principal":{"type":"string","willReplaceOnChanges":true},"privileges":{"type":"array","items":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/GrantProviderConfig:GrantProviderConfig"},"recipient":{"type":"string","willReplaceOnChanges":true},"schema":{"type":"string","willReplaceOnChanges":true},"share":{"type":"string","willReplaceOnChanges":true},"storageCredential":{"type":"string","willReplaceOnChanges":true},"table":{"type":"string","willReplaceOnChanges":true},"volume":{"type":"string","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/grants:Grants":{"description":"\u003e This article refers to the privileges and inheritance model in Privilege Model version 1.0. If you created your metastore during the public preview (before August 25, 2022), you can upgrade to Privilege Model version 1.0 following [Upgrade to privilege inheritance](https://docs.databricks.com/data-governance/unity-catalog/hive-metastore.html)\n\n\u003e Most of Unity Catalog APIs are only accessible via **workspace-level APIs**. This design may change in the future. Account-level principal grants can be assigned with any valid workspace as the Unity Catalog is decoupled from specific workspaces. More information in [the official documentation](https://docs.databricks.com/data-governance/unity-catalog/index.html).\n\n\u003e This resource is _authoritative_ for grants on securables. Configuring this resource for a securable will **OVERWRITE** any existing grants and changes made outside of Pulumi will be reset. Use\u003cspan pulumi-lang-nodejs=\" databricks.Grant \" pulumi-lang-dotnet=\" databricks.Grant \" pulumi-lang-go=\" Grant \" pulumi-lang-python=\" Grant \" pulumi-lang-yaml=\" databricks.Grant \" pulumi-lang-java=\" databricks.Grant \"\u003e databricks.Grant \u003c/span\u003efor more granular grant management.\n\nIn Unity Catalog all users initially have no access to data. Only Metastore Admins can create objects and can grant/revoke access on individual objects to users and groups. Every securable object in Unity Catalog has an owner. The owner can be any account-level user or group, called principals in general. The principal that creates an object becomes its owner. Owners receive `ALL_PRIVILEGES` on the securable object (e.g., `SELECT` and `MODIFY` on a table), as well as the permission to grant privileges to other principals.\n\nSecurable objects are hierarchical and privileges are inherited downward. The highest level object that privileges are inherited from is the catalog. This means that granting a privilege on a catalog or schema automatically grants the privilege to all current and future objects within the catalog or schema. 
Privileges that are granted on a metastore are not inherited.\n\nEvery \u003cspan pulumi-lang-nodejs=\"`databricks.Grants`\" pulumi-lang-dotnet=\"`databricks.Grants`\" pulumi-lang-go=\"`Grants`\" pulumi-lang-python=\"`Grants`\" pulumi-lang-yaml=\"`databricks.Grants`\" pulumi-lang-java=\"`databricks.Grants`\"\u003e`databricks.Grants`\u003c/span\u003e resource must have exactly one securable identifier and one or more \u003cspan pulumi-lang-nodejs=\"`grant`\" pulumi-lang-dotnet=\"`Grant`\" pulumi-lang-go=\"`grant`\" pulumi-lang-python=\"`grant`\" pulumi-lang-yaml=\"`grant`\" pulumi-lang-java=\"`grant`\"\u003e`grant`\u003c/span\u003e blocks with the following arguments:\n\n- \u003cspan pulumi-lang-nodejs=\"`principal`\" pulumi-lang-dotnet=\"`Principal`\" pulumi-lang-go=\"`principal`\" pulumi-lang-python=\"`principal`\" pulumi-lang-yaml=\"`principal`\" pulumi-lang-java=\"`principal`\"\u003e`principal`\u003c/span\u003e - User name, group name, or service principal application ID.\n- \u003cspan pulumi-lang-nodejs=\"`privileges`\" pulumi-lang-dotnet=\"`Privileges`\" pulumi-lang-go=\"`privileges`\" pulumi-lang-python=\"`privileges`\" pulumi-lang-yaml=\"`privileges`\" pulumi-lang-java=\"`privileges`\"\u003e`privileges`\u003c/span\u003e - One or more privileges that are specific to a securable type.\n- \u003cspan pulumi-lang-nodejs=\"`providerConfig`\" pulumi-lang-dotnet=\"`ProviderConfig`\" pulumi-lang-go=\"`providerConfig`\" pulumi-lang-python=\"`provider_config`\" pulumi-lang-yaml=\"`providerConfig`\" pulumi-lang-java=\"`providerConfig`\"\u003e`provider_config`\u003c/span\u003e - (Optional) Configures the provider for management through the account-level provider (see the sketch below). This block consists of the following fields:\n  - \u003cspan pulumi-lang-nodejs=\"`workspaceId`\" pulumi-lang-dotnet=\"`WorkspaceId`\" pulumi-lang-go=\"`workspaceId`\" pulumi-lang-python=\"`workspace_id`\" pulumi-lang-yaml=\"`workspaceId`\" pulumi-lang-java=\"`workspaceId`\"\u003e`workspace_id`\u003c/span\u003e - (Required) Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n\nFor the latest list of privilege types that apply to each securable object in Unity Catalog, please refer to the [official documentation](https://docs.databricks.com/en/data-governance/unity-catalog/manage-privileges/privileges.html#privilege-types-by-securable-object-in-unity-catalog).\n\nWhen applying grants using an identity with the [`MANAGE` permission](https://docs.databricks.com/aws/en/data-governance/unity-catalog/manage-privileges/ownership#ownership-versus-the-manage-privilege), that identity's `MANAGE` permission must also be defined; otherwise, Pulumi will remove its permissions, leading to errors.\n\nUnlike the [SQL specification](https://docs.databricks.com/sql/language-manual/sql-ref-privileges.html#privilege-types), all privileges must be written with an underscore instead of a space, e.g. `CREATE_TABLE` and not `CREATE TABLE`. 
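\n\nAs an illustration only (not one of the generated per-language examples), the following minimal sketch combines a securable identifier with a grant block and the optional provider configuration. The catalog name, principal, and workspace ID are placeholders, and it is assumed here that the provider configuration block sits at the resource level and takes the workspace ID as a string:\n\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Hypothetical sketch: exactly one securable (a catalog) plus one grant block.\nconst example = new databricks.Grants(\"example\", {\n    catalog: \"sandbox\",\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"USE_CATALOG\",\n            \"USE_SCHEMA\",\n        ],\n    }],\n    // Optional: manage the grants through an account-level provider by\n    // pinning them to a workspace in that account (placeholder ID shown).\n    providerConfig: {\n        workspaceId: \"1234567890\",\n    },\n});\n```\n\n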
Below summarizes which privilege types apply to each securable object in the catalog:\n\n## Metastore grants\n\nYou can grant `CREATE_CATALOG`, `CREATE_CLEAN_ROOM`, `CREATE_CONNECTION`, `CREATE_EXTERNAL_LOCATION`, `CREATE_PROVIDER`, `CREATE_RECIPIENT`, `CREATE_SHARE`, `CREATE_SERVICE_CREDENTIAL`, `CREATE_STORAGE_CREDENTIAL`, `SET_SHARE_PERMISSION`, `USE_MARKETPLACE_ASSETS`, `USE_PROVIDER`, `USE_RECIPIENT`, and `USE_SHARE` privileges to\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eassigned to the workspace.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Grants(\"sandbox\", {\n    metastore: \"metastore_id\",\n    grants: [\n        {\n            principal: \"Data Engineers\",\n            privileges: [\n                \"CREATE_CATALOG\",\n                \"CREATE_EXTERNAL_LOCATION\",\n            ],\n        },\n        {\n            principal: \"Data Sharer\",\n            privileges: [\n                \"CREATE_RECIPIENT\",\n                \"CREATE_SHARE\",\n            ],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Grants(\"sandbox\",\n    metastore=\"metastore_id\",\n    grants=[\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\n                \"CREATE_CATALOG\",\n                \"CREATE_EXTERNAL_LOCATION\",\n            ],\n        },\n        {\n            \"principal\": \"Data Sharer\",\n            \"privileges\": [\n                \"CREATE_RECIPIENT\",\n                \"CREATE_SHARE\",\n            ],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Grants(\"sandbox\", new()\n    {\n        Metastore = \"metastore_id\",\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_CATALOG\",\n                    \"CREATE_EXTERNAL_LOCATION\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Sharer\",\n                Privileges = new[]\n                {\n                    \"CREATE_RECIPIENT\",\n                    \"CREATE_SHARE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrants(ctx, \"sandbox\", \u0026databricks.GrantsArgs{\n\t\t\tMetastore: pulumi.String(\"metastore_id\"),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: 
pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_CATALOG\"),\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_LOCATION\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Sharer\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_RECIPIENT\"),\n\t\t\t\t\t\tpulumi.String(\"CREATE_SHARE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Grants(\"sandbox\", GrantsArgs.builder()\n            .metastore(\"metastore_id\")\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data Engineers\")\n                    .privileges(                    \n                        \"CREATE_CATALOG\",\n                        \"CREATE_EXTERNAL_LOCATION\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Sharer\")\n                    .privileges(                    \n                        \"CREATE_RECIPIENT\",\n                        \"CREATE_SHARE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Grants\n    properties:\n      metastore: metastore_id\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_CATALOG\n            - CREATE_EXTERNAL_LOCATION\n        - principal: Data Sharer\n          privileges:\n            - CREATE_RECIPIENT\n            - CREATE_SHARE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Catalog grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `CREATE_CONNECTION`, `CREATE_SCHEMA`, `MANAGE`, and `USE_CATALOG` privileges to\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003especified in the \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e attribute. 
You can also grant `CREATE_FUNCTION`, `CREATE_TABLE`, `CREATE_VOLUME`, `EXECUTE`, `MODIFY`, `REFRESH`, `SELECT`, `READ_VOLUME`, `WRITE_VOLUME` and `USE_SCHEMA` at the catalog level to apply them to the pertinent current and future securable objects within the catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst sandboxGrants = new databricks.Grants(\"sandbox\", {\n    catalog: sandbox.name,\n    grants: [\n        {\n            principal: \"Data Scientists\",\n            privileges: [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"SELECT\",\n            ],\n        },\n        {\n            principal: \"Data Engineers\",\n            privileges: [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"MODIFY\",\n            ],\n        },\n        {\n            principal: \"Data Analyst\",\n            privileges: [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"SELECT\",\n            ],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nsandbox_grants = databricks.Grants(\"sandbox\",\n    catalog=sandbox.name,\n    grants=[\n        {\n            \"principal\": \"Data Scientists\",\n            \"privileges\": [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"SELECT\",\n            ],\n        },\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"CREATE_SCHEMA\",\n                \"CREATE_TABLE\",\n                \"MODIFY\",\n            ],\n        },\n        {\n            \"principal\": \"Data Analyst\",\n            \"privileges\": [\n                \"USE_CATALOG\",\n                \"USE_SCHEMA\",\n                \"SELECT\",\n            ],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var sandboxGrants = new Databricks.Grants(\"sandbox\", new()\n    {\n        Catalog = sandbox.Name,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Scientists\",\n                Privileges = new[]\n                {\n                    \"USE_CATALOG\",\n                    \"USE_SCHEMA\",\n                    \"CREATE_TABLE\",\n                    \"SELECT\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n  
          {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"USE_CATALOG\",\n                    \"USE_SCHEMA\",\n                    \"CREATE_SCHEMA\",\n                    \"CREATE_TABLE\",\n                    \"MODIFY\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Analyst\",\n                Privileges = new[]\n                {\n                    \"USE_CATALOG\",\n                    \"USE_SCHEMA\",\n                    \"SELECT\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"sandbox\", \u0026databricks.GrantsArgs{\n\t\t\tCatalog: sandbox.Name,\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Scientists\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\t\t\tpulumi.String(\"CREATE_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\t\t\tpulumi.String(\"CREATE_SCHEMA\"),\n\t\t\t\t\t\tpulumi.String(\"CREATE_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Analyst\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var sandboxGrants = new Grants(\"sandboxGrants\", GrantsArgs.builder()\n            
.catalog(sandbox.name())\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data Scientists\")\n                    .privileges(                    \n                        \"USE_CATALOG\",\n                        \"USE_SCHEMA\",\n                        \"CREATE_TABLE\",\n                        \"SELECT\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Engineers\")\n                    .privileges(                    \n                        \"USE_CATALOG\",\n                        \"USE_SCHEMA\",\n                        \"CREATE_SCHEMA\",\n                        \"CREATE_TABLE\",\n                        \"MODIFY\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Analyst\")\n                    .privileges(                    \n                        \"USE_CATALOG\",\n                        \"USE_SCHEMA\",\n                        \"SELECT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  sandboxGrants:\n    type: databricks:Grants\n    name: sandbox\n    properties:\n      catalog: ${sandbox.name}\n      grants:\n        - principal: Data Scientists\n          privileges:\n            - USE_CATALOG\n            - USE_SCHEMA\n            - CREATE_TABLE\n            - SELECT\n        - principal: Data Engineers\n          privileges:\n            - USE_CATALOG\n            - USE_SCHEMA\n            - CREATE_SCHEMA\n            - CREATE_TABLE\n            - MODIFY\n        - principal: Data Analyst\n          privileges:\n            - USE_CATALOG\n            - USE_SCHEMA\n            - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Schema grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `CREATE_FUNCTION`, `CREATE_TABLE`, `CREATE_VOLUME`, `MANAGE` and `USE_SCHEMA` privileges to _`catalog.schema`_ specified in the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e attribute. 
You can also grant `EXECUTE`, `MODIFY`, `REFRESH`, `SELECT`, `READ_VOLUME`, `WRITE_VOLUME` at the schema level to apply them to the pertinent current and future securable objects within the schema:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this schema is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst thingsGrants = new databricks.Grants(\"things\", {\n    schema: things.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox[\"id\"],\n    name=\"things\",\n    comment=\"this schema is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nthings_grants = databricks.Grants(\"things\",\n    schema=things.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\n            \"USE_SCHEMA\",\n            \"MODIFY\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this schema is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var thingsGrants = new Databricks.Grants(\"things\", new()\n    {\n        Schema = things.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"USE_SCHEMA\",\n                    \"MODIFY\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: pulumi.Any(sandbox.Id),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this schema is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"things\", \u0026databricks.GrantsArgs{\n\t\t\tSchema: things.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"USE_SCHEMA\"),\n\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Schema;\nimport 
com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this schema is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var thingsGrants = new Grants(\"thingsGrants\", GrantsArgs.builder()\n            .schema(things.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(                \n                    \"USE_SCHEMA\",\n                    \"MODIFY\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this schema is managed by terraform\n      properties:\n        kind: various\n  thingsGrants:\n    type: databricks:Grants\n    name: things\n    properties:\n      schema: ${things.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - USE_SCHEMA\n            - MODIFY\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Table grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `MANAGE`, `SELECT` and `MODIFY` privileges to _`catalog.schema.table`_ specified in the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customers = new databricks.Grants(\"customers\", {\n    table: \"main.reporting.customers\",\n    grants: [\n        {\n            principal: \"Data Engineers\",\n            privileges: [\n                \"MODIFY\",\n                \"SELECT\",\n            ],\n        },\n        {\n            principal: \"Data Analysts\",\n            privileges: [\"SELECT\"],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers = databricks.Grants(\"customers\",\n    table=\"main.reporting.customers\",\n    grants=[\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\n                \"MODIFY\",\n                \"SELECT\",\n            ],\n        },\n        {\n            \"principal\": \"Data Analysts\",\n            \"privileges\": [\"SELECT\"],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customers = new Databricks.Grants(\"customers\", new()\n    {\n        Table = \"main.reporting.customers\",\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = 
new[]\n                {\n                    \"MODIFY\",\n                    \"SELECT\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Analysts\",\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrants(ctx, \"customers\", \u0026databricks.GrantsArgs{\n\t\t\tTable: pulumi.String(\"main.reporting.customers\"),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customers = new Grants(\"customers\", GrantsArgs.builder()\n            .table(\"main.reporting.customers\")\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data Engineers\")\n                    .privileges(                    \n                        \"MODIFY\",\n                        \"SELECT\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Analysts\")\n                    .privileges(\"SELECT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customers:\n    type: databricks:Grants\n    properties:\n      table: main.reporting.customers\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - MODIFY\n            - SELECT\n        - principal: Data Analysts\n          privileges:\n            - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nYou can also apply grants dynamically with\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003edata resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const things = await databricks.getTables({\n        catalogName: \"sandbox\",\n        
schemaName: \"things\",\n    });\n    const thingsGrants: databricks.Grants[] = [];\n    for (const range of things.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        thingsGrants.push(new databricks.Grants(`things-${range.key}`, {\n            table: range.value,\n            grants: [{\n                principal: \"sensitive\",\n                privileges: [\n                    \"SELECT\",\n                    \"MODIFY\",\n                ],\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.get_tables(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nthings_grants = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(things.ids)]:\n    things_grants.append(databricks.Grants(f\"things-{range['key']}\",\n        table=range[\"value\"],\n        grants=[{\n            \"principal\": \"sensitive\",\n            \"privileges\": [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        }]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var things = await Databricks.GetTables.InvokeAsync(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var thingsGrants = new List\u003cDatabricks.Grants\u003e();\n    foreach (var range in )\n    {\n        thingsGrants.Add(new Databricks.Grants($\"things-{range.Key}\", new()\n        {\n            Table = range.Value,\n            GrantDetails = new[]\n            {\n                new Databricks.Inputs.GrantsGrantArgs\n                {\n                    Principal = \"sensitive\",\n                    Privileges = new[]\n                    {\n                        \"SELECT\",\n                        \"MODIFY\",\n                    },\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.GetTables(ctx, \u0026databricks.GetTablesArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t\tSchemaName:  \"things\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar thingsGrants []*databricks.Grants\n\t\tfor key0, val0 := range things.Ids {\n\t\t\t__res, err := databricks.NewGrants(ctx, fmt.Sprintf(\"things-%v\", key0), \u0026databricks.GrantsArgs{\n\t\t\t\tTable: pulumi.String(val0),\n\t\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tthingsGrants = append(thingsGrants, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTablesArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport 
com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var things = DatabricksFunctions.getTables(GetTablesArgs.builder()\n            .catalogName(\"sandbox\")\n            .schemaName(\"things\")\n            .build());\n\n        final var thingsGrants = things.applyValue(getTablesResult -\u003e {\n            final var resources = new ArrayList\u003cGrants\u003e();\n            for (var range : KeyedValue.of(getTablesResult.ids())) {\n                var resource = new Grants(\"thingsGrants-\" + range.key(), GrantsArgs.builder()\n                    .table(range.value())\n                    .grants(GrantsGrantArgs.builder()\n                        .principal(\"sensitive\")\n                        .privileges(                        \n                            \"SELECT\",\n                            \"MODIFY\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  thingsGrants:\n    type: databricks:Grants\n    name: things\n    properties:\n      table: ${range.value}\n      grants:\n        - principal: sensitive\n          privileges:\n            - SELECT\n            - MODIFY\n    options: {}\nvariables:\n  things:\n    fn::invoke:\n      function: databricks:getTables\n      arguments:\n        catalogName: sandbox\n        schemaName: things\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## View grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `MANAGE` and `SELECT` privileges to _`catalog.schema.view`_ specified in \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customer360 = new databricks.Grants(\"customer360\", {\n    table: \"main.reporting.customer360\",\n    grants: [{\n        principal: \"Data Analysts\",\n        privileges: [\"SELECT\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomer360 = databricks.Grants(\"customer360\",\n    table=\"main.reporting.customer360\",\n    grants=[{\n        \"principal\": \"Data Analysts\",\n        \"privileges\": [\"SELECT\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customer360 = new Databricks.Grants(\"customer360\", new()\n    {\n        Table = \"main.reporting.customer360\",\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Analysts\",\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrants(ctx, \"customer360\", \u0026databricks.GrantsArgs{\n\t\t\tTable: pulumi.String(\"main.reporting.customer360\"),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customer360 = new Grants(\"customer360\", GrantsArgs.builder()\n            .table(\"main.reporting.customer360\")\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Analysts\")\n                .privileges(\"SELECT\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customer360:\n    type: databricks:Grants\n    properties:\n      table: main.reporting.customer360\n      grants:\n        - principal: Data Analysts\n          privileges:\n            - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nYou can also apply grants dynamically with\u003cspan pulumi-lang-nodejs=\" databricks.getViews \" pulumi-lang-dotnet=\" databricks.getViews \" pulumi-lang-go=\" getViews \" pulumi-lang-python=\" get_views \" pulumi-lang-yaml=\" databricks.getViews \" pulumi-lang-java=\" databricks.getViews \"\u003e databricks.getViews \u003c/span\u003edata resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const customers = await databricks.getViews({\n        catalogName: \"main\",\n        schemaName: \"customers\",\n    });\n    const customersGrants: databricks.Grants[] = [];\n    for (const range of customers.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        customersGrants.push(new databricks.Grants(`customers-${range.key}`, {\n            table: range.value,\n            grants: [{\n                principal: \"sensitive\",\n                privileges: [\n                    \"SELECT\",\n                    \"MODIFY\",\n                ],\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers = databricks.get_views(catalog_name=\"main\",\n    schema_name=\"customers\")\ncustomers_grants = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(customers.ids)]:\n    customers_grants.append(databricks.Grants(f\"customers-{range['key']}\",\n        table=range[\"value\"],\n        grants=[{\n            \"principal\": \"sensitive\",\n            \"privileges\": [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        
}]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var customers = await Databricks.GetViews.InvokeAsync(new()\n    {\n        CatalogName = \"main\",\n        SchemaName = \"customers\",\n    });\n\n    var customersGrants = new List\u003cDatabricks.Grants\u003e();\n    foreach (var range in )\n    {\n        customersGrants.Add(new Databricks.Grants($\"customers-{range.Key}\", new()\n        {\n            Table = range.Value,\n            GrantDetails = new[]\n            {\n                new Databricks.Inputs.GrantsGrantArgs\n                {\n                    Principal = \"sensitive\",\n                    Privileges = new[]\n                    {\n                        \"SELECT\",\n                        \"MODIFY\",\n                    },\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcustomers, err := databricks.GetViews(ctx, \u0026databricks.GetViewsArgs{\n\t\t\tCatalogName: \"main\",\n\t\t\tSchemaName:  \"customers\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar customersGrants []*databricks.Grants\n\t\tfor key0, val0 := range customers.Ids {\n\t\t\t__res, err := databricks.NewGrants(ctx, fmt.Sprintf(\"customers-%v\", key0), \u0026databricks.GrantsArgs{\n\t\t\t\tTable: pulumi.String(val0),\n\t\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcustomersGrants = append(customersGrants, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetViewsArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var customers = DatabricksFunctions.getViews(GetViewsArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(\"customers\")\n            .build());\n\n        final var customersGrants = customers.applyValue(getViewsResult -\u003e {\n            final var resources = new ArrayList\u003cGrants\u003e();\n            for (var range : KeyedValue.of(getViewsResult.ids())) {\n                var resource = new Grants(\"customersGrants-\" + range.key(), GrantsArgs.builder()\n                    .table(range.value())\n                    .grants(GrantsGrantArgs.builder()\n                        .principal(\"sensitive\")\n                        
.privileges(                        \n                            \"SELECT\",\n                            \"MODIFY\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  customersGrants:\n    type: databricks:Grants\n    name: customers\n    properties:\n      table: ${range.value}\n      grants:\n        - principal: sensitive\n          privileges:\n            - SELECT\n            - MODIFY\n    options: {}\nvariables:\n  customers:\n    fn::invoke:\n      function: databricks:getViews\n      arguments:\n        catalogName: main\n        schemaName: customers\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Volume grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `MANAGE`, `READ_VOLUME` and `WRITE_VOLUME` privileges to _`catalog.schema.volume`_ specified in the \u003cspan pulumi-lang-nodejs=\"`volume`\" pulumi-lang-dotnet=\"`Volume`\" pulumi-lang-go=\"`volume`\" pulumi-lang-python=\"`volume`\" pulumi-lang-yaml=\"`volume`\" pulumi-lang-java=\"`volume`\"\u003e`volume`\u003c/span\u003e attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Volume(\"this\", {\n    name: \"quickstart_volume\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    volumeType: \"EXTERNAL\",\n    storageLocation: some.url,\n    comment: \"this volume is managed by terraform\",\n});\nconst volume = new databricks.Grants(\"volume\", {\n    volume: _this.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"WRITE_VOLUME\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Volume(\"this\",\n    name=\"quickstart_volume\",\n    catalog_name=sandbox[\"name\"],\n    schema_name=things[\"name\"],\n    volume_type=\"EXTERNAL\",\n    storage_location=some[\"url\"],\n    comment=\"this volume is managed by terraform\")\nvolume = databricks.Grants(\"volume\",\n    volume=this.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"WRITE_VOLUME\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Volume(\"this\", new()\n    {\n        Name = \"quickstart_volume\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        VolumeType = \"EXTERNAL\",\n        StorageLocation = some.Url,\n        Comment = \"this volume is managed by terraform\",\n    });\n\n    var volume = new Databricks.Grants(\"volume\", new()\n    {\n        Volume = @this.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"WRITE_VOLUME\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewVolume(ctx, \"this\", \u0026databricks.VolumeArgs{\n\t\t\tName:            
pulumi.String(\"quickstart_volume\"),\n\t\t\tCatalogName:     pulumi.Any(sandbox.Name),\n\t\t\tSchemaName:      pulumi.Any(things.Name),\n\t\t\tVolumeType:      pulumi.String(\"EXTERNAL\"),\n\t\t\tStorageLocation: pulumi.Any(some.Url),\n\t\t\tComment:         pulumi.String(\"this volume is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"volume\", \u0026databricks.GrantsArgs{\n\t\t\tVolume: this.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"WRITE_VOLUME\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Volume;\nimport com.pulumi.databricks.VolumeArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Volume(\"this\", VolumeArgs.builder()\n            .name(\"quickstart_volume\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .volumeType(\"EXTERNAL\")\n            .storageLocation(some.url())\n            .comment(\"this volume is managed by terraform\")\n            .build());\n\n        var volume = new Grants(\"volume\", GrantsArgs.builder()\n            .volume(this_.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"WRITE_VOLUME\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Volume\n    properties:\n      name: quickstart_volume\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      volumeType: EXTERNAL\n      storageLocation: ${some.url}\n      comment: this volume is managed by terraform\n  volume:\n    type: databricks:Grants\n    properties:\n      volume: ${this.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - WRITE_VOLUME\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Registered model grants\n\nYou can grant `ALL_PRIVILEGES`, `APPLY_TAG`, `EXECUTE`, and `MANAGE` privileges to _`catalog.schema.model`_ specified in the \u003cspan pulumi-lang-nodejs=\"`model`\" pulumi-lang-dotnet=\"`Model`\" pulumi-lang-go=\"`model`\" pulumi-lang-python=\"`model`\" pulumi-lang-yaml=\"`model`\" pulumi-lang-java=\"`model`\"\u003e`model`\u003c/span\u003e attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customers = new databricks.Grants(\"customers\", {\n    model: \"main.reporting.customer_model\",\n    grants: [\n        {\n            principal: \"Data Engineers\",\n            privileges: [\n                \"APPLY_TAG\",\n                \"EXECUTE\",\n            ],\n        },\n        {\n            principal: \"Data 
Analysts\",\n            privileges: [\"EXECUTE\"],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomers = databricks.Grants(\"customers\",\n    model=\"main.reporting.customer_model\",\n    grants=[\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\n                \"APPLY_TAG\",\n                \"EXECUTE\",\n            ],\n        },\n        {\n            \"principal\": \"Data Analysts\",\n            \"privileges\": [\"EXECUTE\"],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customers = new Databricks.Grants(\"customers\", new()\n    {\n        Model = \"main.reporting.customer_model\",\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"APPLY_TAG\",\n                    \"EXECUTE\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Analysts\",\n                Privileges = new[]\n                {\n                    \"EXECUTE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrants(ctx, \"customers\", \u0026databricks.GrantsArgs{\n\t\t\tModel: pulumi.String(\"main.reporting.customer_model\"),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"APPLY_TAG\"),\n\t\t\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customers = new Grants(\"customers\", GrantsArgs.builder()\n            .model(\"main.reporting.customer_model\")\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data Engineers\")\n                    .privileges(                    \n                        \"APPLY_TAG\",\n                        \"EXECUTE\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Analysts\")\n                    .privileges(\"EXECUTE\")\n                  
  .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customers:\n    type: databricks:Grants\n    properties:\n      model: main.reporting.customer_model\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - APPLY_TAG\n            - EXECUTE\n        - principal: Data Analysts\n          privileges:\n            - EXECUTE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Function grants\n\nYou can grant `ALL_PRIVILEGES`, `EXECUTE`, and `MANAGE` privileges to _`catalog.schema.function`_ specified in the \u003cspan pulumi-lang-nodejs=\"`function`\" pulumi-lang-dotnet=\"`Function`\" pulumi-lang-go=\"`function`\" pulumi-lang-python=\"`function`\" pulumi-lang-yaml=\"`function`\" pulumi-lang-java=\"`function`\"\u003e`function`\u003c/span\u003e attribute.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst udf = new databricks.Grants(\"udf\", {\n    \"function\": \"main.reporting.udf\",\n    grants: [\n        {\n            principal: \"Data Engineers\",\n            privileges: [\"EXECUTE\"],\n        },\n        {\n            principal: \"Data Analysts\",\n            privileges: [\"EXECUTE\"],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nudf = databricks.Grants(\"udf\",\n    function=\"main.reporting.udf\",\n    grants=[\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\"EXECUTE\"],\n        },\n        {\n            \"principal\": \"Data Analysts\",\n            \"privileges\": [\"EXECUTE\"],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var udf = new Databricks.Grants(\"udf\", new()\n    {\n        Function = \"main.reporting.udf\",\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"EXECUTE\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Analysts\",\n                Privileges = new[]\n                {\n                    \"EXECUTE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGrants(ctx, \"udf\", \u0026databricks.GrantsArgs{\n\t\t\tFunction: pulumi.String(\"main.reporting.udf\"),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Analysts\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"EXECUTE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport 
com.pulumi.core.Output;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var udf = new Grants(\"udf\", GrantsArgs.builder()\n            .function(\"main.reporting.udf\")\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data Engineers\")\n                    .privileges(\"EXECUTE\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(\"Data Analysts\")\n                    .privileges(\"EXECUTE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  udf:\n    type: databricks:Grants\n    properties:\n      function: main.reporting.udf\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - EXECUTE\n        - principal: Data Analysts\n          privileges:\n            - EXECUTE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Service credential grants\n\nYou can grant `ALL_PRIVILEGES`, `ACCESS`, `CREATE_CONNECTION`, and `MANAGE` privileges to\u003cspan pulumi-lang-nodejs=\" databricks.Credential \" pulumi-lang-dotnet=\" databricks.Credential \" pulumi-lang-go=\" Credential \" pulumi-lang-python=\" Credential \" pulumi-lang-yaml=\" databricks.Credential \" pulumi-lang-java=\" databricks.Credential \"\u003e databricks.Credential \u003c/span\u003eid specified in \u003cspan pulumi-lang-nodejs=\"`credential`\" pulumi-lang-dotnet=\"`Credential`\" pulumi-lang-go=\"`credential`\" pulumi-lang-python=\"`credential`\" pulumi-lang-yaml=\"`credential`\" pulumi-lang-java=\"`credential`\"\u003e`credential`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.Credential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    purpose: \"SERVICE\",\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    credential: external.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"CREATE_CONNECTION\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.Credential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    purpose=\"SERVICE\",\n    comment=\"Managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    credential=external.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"CREATE_CONNECTION\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.Credential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.CredentialAwsIamRoleArgs\n        {\n            RoleArn = 
externalDataAccess.Arn,\n        },\n        Purpose = \"SERVICE\",\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        Credential = external.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_CONNECTION\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewCredential(ctx, \"external\", \u0026databricks.CredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.CredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tPurpose: pulumi.String(\"SERVICE\"),\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tCredential: external.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_CONNECTION\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Credential;\nimport com.pulumi.databricks.CredentialArgs;\nimport com.pulumi.databricks.inputs.CredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new Credential(\"external\", CredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(CredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .purpose(\"SERVICE\")\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .credential(external.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"CREATE_CONNECTION\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:Credential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      purpose: SERVICE\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      credential: ${external.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - 
CREATE_CONNECTION\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Storage credential grants\n\nYou can grant `ALL_PRIVILEGES`, `CREATE_EXTERNAL_LOCATION`, `CREATE_EXTERNAL_TABLE`, `MANAGE`, `READ_FILES` and `WRITE_FILES` privileges to\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eid specified in \u003cspan pulumi-lang-nodejs=\"`storageCredential`\" pulumi-lang-dotnet=\"`StorageCredential`\" pulumi-lang-go=\"`storageCredential`\" pulumi-lang-python=\"`storage_credential`\" pulumi-lang-yaml=\"`storageCredential`\" pulumi-lang-java=\"`storageCredential`\"\u003e`storage_credential`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    storageCredential: external.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"CREATE_EXTERNAL_TABLE\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.StorageCredential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    comment=\"Managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    storage_credential=external.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"CREATE_EXTERNAL_TABLE\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.StorageCredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        StorageCredential = external.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.StorageCredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil 
{\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tStorageCredential: external.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(StorageCredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .storageCredential(external.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"CREATE_EXTERNAL_TABLE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      storageCredential: ${external.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## External location grants\n\nYou can grant `ALL_PRIVILEGES`, `CREATE_EXTERNAL_TABLE`, `CREATE_MANAGED_STORAGE`, `CREATE_EXTERNAL_VOLUME`, `MANAGE`, `READ_FILES` and `WRITE_FILES` privileges to\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eid specified in \u003cspan pulumi-lang-nodejs=\"`externalLocation`\" pulumi-lang-dotnet=\"`ExternalLocation`\" pulumi-lang-go=\"`externalLocation`\" pulumi-lang-python=\"`external_location`\" pulumi-lang-yaml=\"`externalLocation`\" pulumi-lang-java=\"`externalLocation`\"\u003e`external_location`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst some = new 
databricks.ExternalLocation(\"some\", {\n    name: \"external\",\n    url: `s3://${externalAwsS3Bucket.id}/some`,\n    credentialName: external.id,\n    comment: \"Managed by TF\",\n});\nconst someGrants = new databricks.Grants(\"some\", {\n    externalLocation: some.id,\n    grants: [\n        {\n            principal: \"Data Engineers\",\n            privileges: [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            principal: mySp.applicationId,\n            privileges: [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            principal: myGroup.displayName,\n            privileges: [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            principal: myUser.userName,\n            privileges: [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsome = databricks.ExternalLocation(\"some\",\n    name=\"external\",\n    url=f\"s3://{external_aws_s3_bucket['id']}/some\",\n    credential_name=external[\"id\"],\n    comment=\"Managed by TF\")\nsome_grants = databricks.Grants(\"some\",\n    external_location=some.id,\n    grants=[\n        {\n            \"principal\": \"Data Engineers\",\n            \"privileges\": [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            \"principal\": my_sp[\"applicationId\"],\n            \"privileges\": [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            \"principal\": my_group[\"displayName\"],\n            \"privileges\": [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n        {\n            \"principal\": my_user[\"userName\"],\n            \"privileges\": [\n                \"CREATE_EXTERNAL_TABLE\",\n                \"READ_FILES\",\n            ],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"external\",\n        Url = $\"s3://{externalAwsS3Bucket.Id}/some\",\n        CredentialName = external.Id,\n        Comment = \"Managed by TF\",\n    });\n\n    var someGrants = new Databricks.Grants(\"some\", new()\n    {\n        ExternalLocation = some.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = mySp.ApplicationId,\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = myGroup.DisplayName,\n                Privileges = new[]\n                {\n                    
\"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = myUser.UserName,\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                    \"READ_FILES\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", \u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"external\"),\n\t\t\tUrl:            pulumi.Sprintf(\"s3://%v/some\", externalAwsS3Bucket.Id),\n\t\t\tCredentialName: pulumi.Any(external.Id),\n\t\t\tComment:        pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tExternalLocation: some.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.Any(mySp.ApplicationId),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.Any(myGroup.DisplayName),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.Any(myUser.UserName),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t\tpulumi.String(\"READ_FILES\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"external\")\n            .url(String.format(\"s3://%s/some\", externalAwsS3Bucket.id()))\n            .credentialName(external.id())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var someGrants = new Grants(\"someGrants\", GrantsArgs.builder()\n            .externalLocation(some.id())\n            .grants(            \n                GrantsGrantArgs.builder()\n                    .principal(\"Data 
Engineers\")\n                    .privileges(                    \n                        \"CREATE_EXTERNAL_TABLE\",\n                        \"READ_FILES\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(mySp.applicationId())\n                    .privileges(                    \n                        \"CREATE_EXTERNAL_TABLE\",\n                        \"READ_FILES\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(myGroup.displayName())\n                    .privileges(                    \n                        \"CREATE_EXTERNAL_TABLE\",\n                        \"READ_FILES\")\n                    .build(),\n                GrantsGrantArgs.builder()\n                    .principal(myUser.userName())\n                    .privileges(                    \n                        \"CREATE_EXTERNAL_TABLE\",\n                        \"READ_FILES\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: external\n      url: s3://${externalAwsS3Bucket.id}/some\n      credentialName: ${external.id}\n      comment: Managed by TF\n  someGrants:\n    type: databricks:Grants\n    name: some\n    properties:\n      externalLocation: ${some.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n        - principal: ${mySp.applicationId}\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n        - principal: ${myGroup.displayName}\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n        - principal: ${myUser.userName}\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n            - READ_FILES\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Connection grants\n\nYou can grant `ALL_PRIVILEGES`, `MANAGE`, `USE_CONNECTION` and `CREATE_FOREIGN_CATALOG` to\u003cspan pulumi-lang-nodejs=\" databricks.Connection \" pulumi-lang-dotnet=\" databricks.Connection \" pulumi-lang-go=\" Connection \" pulumi-lang-python=\" Connection \" pulumi-lang-yaml=\" databricks.Connection \" pulumi-lang-java=\" databricks.Connection \"\u003e databricks.Connection \u003c/span\u003especified in \u003cspan pulumi-lang-nodejs=\"`foreignConnection`\" pulumi-lang-dotnet=\"`ForeignConnection`\" pulumi-lang-go=\"`foreignConnection`\" pulumi-lang-python=\"`foreign_connection`\" pulumi-lang-yaml=\"`foreignConnection`\" pulumi-lang-java=\"`foreignConnection`\"\u003e`foreign_connection`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst mysql = new databricks.Connection(\"mysql\", {\n    name: \"mysql_connection\",\n    connectionType: \"MYSQL\",\n    comment: \"this is a connection to mysql db\",\n    options: {\n        host: \"test.mysql.database.azure.com\",\n        port: \"3306\",\n        user: \"user\",\n        password: \"password\",\n    },\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst some = new databricks.Grants(\"some\", {\n    foreignConnection: mysql.name,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\n            \"CREATE_FOREIGN_CATALOG\",\n            \"USE_CONNECTION\",\n        ],\n    
}],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmysql = databricks.Connection(\"mysql\",\n    name=\"mysql_connection\",\n    connection_type=\"MYSQL\",\n    comment=\"this is a connection to mysql db\",\n    options={\n        \"host\": \"test.mysql.database.azure.com\",\n        \"port\": \"3306\",\n        \"user\": \"user\",\n        \"password\": \"password\",\n    },\n    properties={\n        \"purpose\": \"testing\",\n    })\nsome = databricks.Grants(\"some\",\n    foreign_connection=mysql.name,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\n            \"CREATE_FOREIGN_CATALOG\",\n            \"USE_CONNECTION\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var mysql = new Databricks.Connection(\"mysql\", new()\n    {\n        Name = \"mysql_connection\",\n        ConnectionType = \"MYSQL\",\n        Comment = \"this is a connection to mysql db\",\n        Options = \n        {\n            { \"host\", \"test.mysql.database.azure.com\" },\n            { \"port\", \"3306\" },\n            { \"user\", \"user\" },\n            { \"password\", \"password\" },\n        },\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var some = new Databricks.Grants(\"some\", new()\n    {\n        ForeignConnection = mysql.Name,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_FOREIGN_CATALOG\",\n                    \"USE_CONNECTION\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmysql, err := databricks.NewConnection(ctx, \"mysql\", \u0026databricks.ConnectionArgs{\n\t\t\tName:           pulumi.String(\"mysql_connection\"),\n\t\t\tConnectionType: pulumi.String(\"MYSQL\"),\n\t\t\tComment:        pulumi.String(\"this is a connection to mysql db\"),\n\t\t\tOptions: pulumi.StringMap{\n\t\t\t\t\"host\":     pulumi.String(\"test.mysql.database.azure.com\"),\n\t\t\t\t\"port\":     pulumi.String(\"3306\"),\n\t\t\t\t\"user\":     pulumi.String(\"user\"),\n\t\t\t\t\"password\": pulumi.String(\"password\"),\n\t\t\t},\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tForeignConnection: mysql.Name,\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_FOREIGN_CATALOG\"),\n\t\t\t\t\t\tpulumi.String(\"USE_CONNECTION\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Connection;\nimport com.pulumi.databricks.ConnectionArgs;\nimport 
com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var mysql = new Connection(\"mysql\", ConnectionArgs.builder()\n            .name(\"mysql_connection\")\n            .connectionType(\"MYSQL\")\n            .comment(\"this is a connection to mysql db\")\n            .options(Map.ofEntries(\n                Map.entry(\"host\", \"test.mysql.database.azure.com\"),\n                Map.entry(\"port\", \"3306\"),\n                Map.entry(\"user\", \"user\"),\n                Map.entry(\"password\", \"password\")\n            ))\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var some = new Grants(\"some\", GrantsArgs.builder()\n            .foreignConnection(mysql.name())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(                \n                    \"CREATE_FOREIGN_CATALOG\",\n                    \"USE_CONNECTION\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  mysql:\n    type: databricks:Connection\n    properties:\n      name: mysql_connection\n      connectionType: MYSQL\n      comment: this is a connection to mysql db\n      options:\n        host: test.mysql.database.azure.com\n        port: '3306'\n        user: user\n        password: password\n      properties:\n        purpose: testing\n  some:\n    type: databricks:Grants\n    properties:\n      foreignConnection: ${mysql.name}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_FOREIGN_CATALOG\n            - USE_CONNECTION\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Delta Sharing share grants\n\nYou can grant `SELECT` to\u003cspan pulumi-lang-nodejs=\" databricks.Recipient \" pulumi-lang-dotnet=\" databricks.Recipient \" pulumi-lang-go=\" Recipient \" pulumi-lang-python=\" Recipient \" pulumi-lang-yaml=\" databricks.Recipient \" pulumi-lang-java=\" databricks.Recipient \"\u003e databricks.Recipient \u003c/span\u003eon\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003ename specified in \u003cspan pulumi-lang-nodejs=\"`share`\" pulumi-lang-dotnet=\"`Share`\" pulumi-lang-go=\"`share`\" pulumi-lang-python=\"`share`\" pulumi-lang-yaml=\"`share`\" pulumi-lang-java=\"`share`\"\u003e`share`\u003c/span\u003e attribute:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst some = new databricks.Share(\"some\", {name: \"my_share\"});\nconst someRecipient = new databricks.Recipient(\"some\", {name: \"my_recipient\"});\nconst someGrants = new databricks.Grants(\"some\", {\n    share: some.name,\n    grants: [{\n        principal: someRecipient.name,\n        privileges: [\"SELECT\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsome = 
databricks.Share(\"some\", name=\"my_share\")\nsome_recipient = databricks.Recipient(\"some\", name=\"my_recipient\")\nsome_grants = databricks.Grants(\"some\",\n    share=some.name,\n    grants=[{\n        \"principal\": some_recipient.name,\n        \"privileges\": [\"SELECT\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var some = new Databricks.Share(\"some\", new()\n    {\n        Name = \"my_share\",\n    });\n\n    var someRecipient = new Databricks.Recipient(\"some\", new()\n    {\n        Name = \"my_recipient\",\n    });\n\n    var someGrants = new Databricks.Grants(\"some\", new()\n    {\n        Share = some.Name,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = someRecipient.Name,\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsome, err := databricks.NewShare(ctx, \"some\", \u0026databricks.ShareArgs{\n\t\t\tName: pulumi.String(\"my_share\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsomeRecipient, err := databricks.NewRecipient(ctx, \"some\", \u0026databricks.RecipientArgs{\n\t\t\tName: pulumi.String(\"my_recipient\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"some\", \u0026databricks.GrantsArgs{\n\t\t\tShare: some.Name,\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: someRecipient.Name,\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Share;\nimport com.pulumi.databricks.ShareArgs;\nimport com.pulumi.databricks.Recipient;\nimport com.pulumi.databricks.RecipientArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var some = new Share(\"some\", ShareArgs.builder()\n            .name(\"my_share\")\n            .build());\n\n        var someRecipient = new Recipient(\"someRecipient\", RecipientArgs.builder()\n            .name(\"my_recipient\")\n            .build());\n\n        var someGrants = new Grants(\"someGrants\", GrantsArgs.builder()\n            .share(some.name())\n            .grants(GrantsGrantArgs.builder()\n                .principal(someRecipient.name())\n                .privileges(\"SELECT\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  some:\n    type: databricks:Share\n    properties:\n      name: my_share\n  someRecipient:\n    
type: databricks:Recipient\n    name: some\n    properties:\n      name: my_recipient\n  someGrants:\n    type: databricks:Grants\n    name: some\n    properties:\n      share: ${some.name}\n      grants:\n        - principal: ${someRecipient.name}\n          privileges:\n            - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Other access control\n\nYou can control Databricks General Permissions through\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eresource.\n\n","properties":{"catalog":{"type":"string"},"credential":{"type":"string"},"externalLocation":{"type":"string"},"foreignConnection":{"type":"string"},"function":{"type":"string"},"grants":{"type":"array","items":{"$ref":"#/types/databricks:index/GrantsGrant:GrantsGrant"},"language":{"csharp":{"name":"GrantDetails"}}},"metastore":{"type":"string"},"model":{"type":"string"},"pipeline":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/GrantsProviderConfig:GrantsProviderConfig"},"recipient":{"type":"string"},"schema":{"type":"string"},"share":{"type":"string"},"storageCredential":{"type":"string"},"table":{"type":"string"},"volume":{"type":"string"}},"required":["grants"],"inputProperties":{"catalog":{"type":"string","willReplaceOnChanges":true},"credential":{"type":"string","willReplaceOnChanges":true},"externalLocation":{"type":"string","willReplaceOnChanges":true},"foreignConnection":{"type":"string","willReplaceOnChanges":true},"function":{"type":"string","willReplaceOnChanges":true},"grants":{"type":"array","items":{"$ref":"#/types/databricks:index/GrantsGrant:GrantsGrant"},"language":{"csharp":{"name":"GrantDetails"}}},"metastore":{"type":"string","willReplaceOnChanges":true},"model":{"type":"string","willReplaceOnChanges":true},"pipeline":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/GrantsProviderConfig:GrantsProviderConfig"},"recipient":{"type":"string","willReplaceOnChanges":true},"schema":{"type":"string","willReplaceOnChanges":true},"share":{"type":"string","willReplaceOnChanges":true},"storageCredential":{"type":"string","willReplaceOnChanges":true},"table":{"type":"string","willReplaceOnChanges":true},"volume":{"type":"string","willReplaceOnChanges":true}},"requiredInputs":["grants"],"stateInputs":{"description":"Input properties used for looking up and filtering Grants 
resources.\n","properties":{"catalog":{"type":"string","willReplaceOnChanges":true},"credential":{"type":"string","willReplaceOnChanges":true},"externalLocation":{"type":"string","willReplaceOnChanges":true},"foreignConnection":{"type":"string","willReplaceOnChanges":true},"function":{"type":"string","willReplaceOnChanges":true},"grants":{"type":"array","items":{"$ref":"#/types/databricks:index/GrantsGrant:GrantsGrant"},"language":{"csharp":{"name":"GrantDetails"}}},"metastore":{"type":"string","willReplaceOnChanges":true},"model":{"type":"string","willReplaceOnChanges":true},"pipeline":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/GrantsProviderConfig:GrantsProviderConfig"},"recipient":{"type":"string","willReplaceOnChanges":true},"schema":{"type":"string","willReplaceOnChanges":true},"share":{"type":"string","willReplaceOnChanges":true},"storageCredential":{"type":"string","willReplaceOnChanges":true},"table":{"type":"string","willReplaceOnChanges":true},"volume":{"type":"string","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/group:Group":{"description":"This resource allows you to manage both [account groups and workspace-local groups](https://docs.databricks.com/administration-guide/users-groups/groups.html). You can use the\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource to assign Databricks users, service principals as well as other groups as members of the group. This is useful if you are using an application to sync users \u0026 groups with SCIM API.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n\u003e To assign an account level group to a workspace use databricks_mws_permission_assignment.\n\n\u003e Entitlements, like, \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`allowInstancePoolCreate`\" pulumi-lang-dotnet=\"`AllowInstancePoolCreate`\" pulumi-lang-go=\"`allowInstancePoolCreate`\" pulumi-lang-python=\"`allow_instance_pool_create`\" pulumi-lang-yaml=\"`allowInstancePoolCreate`\" pulumi-lang-java=\"`allowInstancePoolCreate`\"\u003e`allow_instance_pool_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e applicable only for workspace-level groups.  
Use\u003cspan pulumi-lang-nodejs=\" databricks.Entitlements \" pulumi-lang-dotnet=\" databricks.Entitlements \" pulumi-lang-go=\" Entitlements \" pulumi-lang-python=\" Entitlements \" pulumi-lang-yaml=\" databricks.Entitlements \" pulumi-lang-java=\" databricks.Entitlements \"\u003e databricks.Entitlements \u003c/span\u003eresource to assign entitlements inside a workspace to account-level groups.\n\nTo create account groups in the Databricks account, the provider must be configured accordingly. On AWS deployment with `host = \"https://accounts.cloud.databricks.com\"` and \u003cspan pulumi-lang-nodejs=\"`accountId \" pulumi-lang-dotnet=\"`AccountId \" pulumi-lang-go=\"`accountId \" pulumi-lang-python=\"`account_id \" pulumi-lang-yaml=\"`accountId \" pulumi-lang-java=\"`accountId \"\u003e`account_id \u003c/span\u003e= \"00000000-0000-0000-0000-000000000000\"`. On Azure deployments `host = \"https://accounts.azuredatabricks.net\"`, \u003cspan pulumi-lang-nodejs=\"`accountId \" pulumi-lang-dotnet=\"`AccountId \" pulumi-lang-go=\"`accountId \" pulumi-lang-python=\"`account_id \" pulumi-lang-yaml=\"`accountId \" pulumi-lang-java=\"`accountId \"\u003e`account_id \u003c/span\u003e= \"00000000-0000-0000-0000-000000000000\"` and using AAD tokens as authentication.\n\nRecommended to use along with Identity Provider SCIM provisioning to populate users into those groups:\n\n* [Azure Active Directory](https://docs.microsoft.com/en-us/azure/databricks/administration-guide/users-groups/scim/aad)\n* [Okta](https://docs.databricks.com/administration-guide/users-groups/scim/okta.html)\n* [OneLogin](https://docs.databricks.com/administration-guide/users-groups/scim/onelogin.html)\n\n## Example Usage\n\nCreating some group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Group(\"this\", {\n    displayName: \"Some Group\",\n    allowClusterCreate: true,\n    allowInstancePoolCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Group(\"this\",\n    display_name=\"Some Group\",\n    allow_cluster_create=True,\n    allow_instance_pool_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Group(\"this\", new()\n    {\n        DisplayName = \"Some Group\",\n        AllowClusterCreate = true,\n        AllowInstancePoolCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGroup(ctx, \"this\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName:             pulumi.String(\"Some Group\"),\n\t\t\tAllowClusterCreate:      pulumi.Bool(true),\n\t\t\tAllowInstancePoolCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n 
   public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Group(\"this\", GroupArgs.builder()\n            .displayName(\"Some Group\")\n            .allowClusterCreate(true)\n            .allowInstancePoolCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Group\n    properties:\n      displayName: Some Group\n      allowClusterCreate: true\n      allowInstancePoolCreate: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nAdding\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eas\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eof some group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Group(\"this\", {\n    displayName: \"Some Group\",\n    allowClusterCreate: true,\n    allowInstancePoolCreate: true,\n});\nconst thisUser = new databricks.User(\"this\", {userName: \"someone@example.com\"});\nconst vipMember = new databricks.GroupMember(\"vip_member\", {\n    groupId: _this.id,\n    memberId: thisUser.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Group(\"this\",\n    display_name=\"Some Group\",\n    allow_cluster_create=True,\n    allow_instance_pool_create=True)\nthis_user = databricks.User(\"this\", user_name=\"someone@example.com\")\nvip_member = databricks.GroupMember(\"vip_member\",\n    group_id=this.id,\n    member_id=this_user.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Group(\"this\", new()\n    {\n        DisplayName = \"Some Group\",\n        AllowClusterCreate = true,\n        AllowInstancePoolCreate = true,\n    });\n\n    var thisUser = new Databricks.User(\"this\", new()\n    {\n        UserName = \"someone@example.com\",\n    });\n\n    var vipMember = new Databricks.GroupMember(\"vip_member\", new()\n    {\n        GroupId = @this.Id,\n        MemberId = thisUser.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewGroup(ctx, \"this\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName:             pulumi.String(\"Some Group\"),\n\t\t\tAllowClusterCreate:      pulumi.Bool(true),\n\t\t\tAllowInstancePoolCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisUser, err := databricks.NewUser(ctx, \"this\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"someone@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"vip_member\", 
\u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  this.ID(),\n\t\t\tMemberId: thisUser.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Group(\"this\", GroupArgs.builder()\n            .displayName(\"Some Group\")\n            .allowClusterCreate(true)\n            .allowInstancePoolCreate(true)\n            .build());\n\n        var thisUser = new User(\"thisUser\", UserArgs.builder()\n            .userName(\"someone@example.com\")\n            .build());\n\n        var vipMember = new GroupMember(\"vipMember\", GroupMemberArgs.builder()\n            .groupId(this_.id())\n            .memberId(thisUser.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Group\n    properties:\n      displayName: Some Group\n      allowClusterCreate: true\n      allowInstancePoolCreate: true\n  thisUser:\n    type: databricks:User\n    name: this\n    properties:\n      userName: someone@example.com\n  vipMember:\n    type: databricks:GroupMember\n    name: vip_member\n    properties:\n      groupId: ${this.id}\n      memberId: ${thisUser.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating group in AWS Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Group(\"this\", {displayName: \"Some Group\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Group(\"this\", display_name=\"Some Group\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Group(\"this\", new()\n    {\n        DisplayName = \"Some Group\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGroup(ctx, \"this\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Some Group\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var 
this_ = new Group(\"this\", GroupArgs.builder()\n            .displayName(\"Some Group\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Group\n    properties:\n      displayName: Some Group\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating group in Azure Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Group(\"this\", {displayName: \"Some Group\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Group(\"this\", display_name=\"Some Group\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Group(\"this\", new()\n    {\n        DisplayName = \"Some Group\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewGroup(ctx, \"this\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Some Group\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Group(\"this\", GroupArgs.builder()\n            .displayName(\"Some Group\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Group\n    properties:\n      displayName: Some Group\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `groups/Some Group`.\n"},"allowClusterCreate":{"type":"boolean","description":"This is a field to allow the group to have cluster create privileges. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eargument. 
Everyone without the \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use a Cluster Policy, can still create clusters, though only within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"This is a field to allow the group to have instance pool create privileges. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"displayName":{"type":"string","description":"This is the display name for the given group.\n"},"externalId":{"type":"string","description":"ID of the group in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create group: Group with name X already exists.` errors and implicitly import the specific group into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"url":{"type":"string"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the group to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the group to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Cannot be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"required":["aclPrincipalId","displayName","url"],"inputProperties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. 
`groups/Some Group`.\n"},"allowClusterCreate":{"type":"boolean","description":"This is a field to allow the group to have cluster create privileges. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eargument. Everyone without the \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use a Cluster Policy, can still create clusters, though only within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"This is a field to allow the group to have instance pool create privileges. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"displayName":{"type":"string","description":"This is the display name for the given group.\n"},"externalId":{"type":"string","description":"ID of the group in an external identity provider.\n","willReplaceOnChanges":true},"force":{"type":"boolean","description":"Ignore `cannot create group: Group with name X already exists.` errors and implicitly import the specific group into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"url":{"type":"string"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the group to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the group to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  
Cannot be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering Group resources.\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `groups/Some Group`.\n"},"allowClusterCreate":{"type":"boolean","description":"This is a field to allow the group to have cluster create privileges. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eargument. Everyone without the \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use a Cluster Policy, can still create clusters, though only within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"This is a field to allow the group to have instance pool create privileges. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"displayName":{"type":"string","description":"This is the display name for the given group.\n"},"externalId":{"type":"string","description":"ID of the group in an external identity provider.\n","willReplaceOnChanges":true},"force":{"type":"boolean","description":"Ignore `cannot create group: Group with name X already exists.` errors and implicitly import the specific group into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"url":{"type":"string"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the group to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the group to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  
Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/groupInstanceProfile:GroupInstanceProfile":{"description":"\u003e **Deprecated** Please migrate to databricks_group_role.\n\nThis resource allows you to attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instanceProfile = new databricks.InstanceProfile(\"instance_profile\", {instanceProfileArn: \"my_instance_profile_arn\"});\nconst myGroup = new databricks.Group(\"my_group\", {displayName: \"my_group_name\"});\nconst myGroupInstanceProfile = new databricks.GroupInstanceProfile(\"my_group_instance_profile\", {\n    groupId: myGroup.id,\n    instanceProfileId: instanceProfile.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance_profile = databricks.InstanceProfile(\"instance_profile\", instance_profile_arn=\"my_instance_profile_arn\")\nmy_group = databricks.Group(\"my_group\", display_name=\"my_group_name\")\nmy_group_instance_profile = databricks.GroupInstanceProfile(\"my_group_instance_profile\",\n    group_id=my_group.id,\n    instance_profile_id=instance_profile.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instanceProfile = new Databricks.InstanceProfile(\"instance_profile\", new()\n    {\n        InstanceProfileArn = \"my_instance_profile_arn\",\n    });\n\n    var myGroup = new Databricks.Group(\"my_group\", new()\n    {\n        DisplayName = \"my_group_name\",\n    });\n\n    var myGroupInstanceProfile = new Databricks.GroupInstanceProfile(\"my_group_instance_profile\", new()\n    {\n        GroupId = myGroup.Id,\n        InstanceProfileId = instanceProfile.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstanceProfile, err := databricks.NewInstanceProfile(ctx, \"instance_profile\", \u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.String(\"my_instance_profile_arn\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyGroup, err := databricks.NewGroup(ctx, \"my_group\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"my_group_name\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err 
= databricks.NewGroupInstanceProfile(ctx, \"my_group_instance_profile\", \u0026databricks.GroupInstanceProfileArgs{\n\t\t\tGroupId:           myGroup.ID(),\n\t\t\tInstanceProfileId: instanceProfile.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.GroupInstanceProfile;\nimport com.pulumi.databricks.GroupInstanceProfileArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instanceProfile = new InstanceProfile(\"instanceProfile\", InstanceProfileArgs.builder()\n            .instanceProfileArn(\"my_instance_profile_arn\")\n            .build());\n\n        var myGroup = new Group(\"myGroup\", GroupArgs.builder()\n            .displayName(\"my_group_name\")\n            .build());\n\n        var myGroupInstanceProfile = new GroupInstanceProfile(\"myGroupInstanceProfile\", GroupInstanceProfileArgs.builder()\n            .groupId(myGroup.id())\n            .instanceProfileId(instanceProfile.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instanceProfile:\n    type: databricks:InstanceProfile\n    name: instance_profile\n    properties:\n      instanceProfileArn: my_instance_profile_arn\n  myGroup:\n    type: databricks:Group\n    name: my_group\n    properties:\n      displayName: my_group_name\n  myGroupInstanceProfile:\n    type: databricks:GroupInstanceProfile\n    name: my_group_instance_profile\n    properties:\n      groupId: ${myGroup.id}\n      instanceProfileId: ${instanceProfile.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsBucketPolicy \" pulumi-lang-dotnet=\" databricks.getAwsBucketPolicy \" pulumi-lang-go=\" getAwsBucketPolicy \" pulumi-lang-python=\" get_aws_bucket_policy \" pulumi-lang-yaml=\" databricks.getAwsBucketPolicy \" pulumi-lang-java=\" databricks.getAwsBucketPolicy \"\u003e databricks.getAwsBucketPolicy \u003c/span\u003edata to configure a simple access policy for AWS S3 buckets, so that Databricks can access data in it.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" 
Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"groupId":{"type":"string","description":"This is the id of the group 
resource.\n"},"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n"}},"required":["groupId","instanceProfileId"],"inputProperties":{"groupId":{"type":"string","description":"This is the id of the group resource.\n","willReplaceOnChanges":true},"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n","willReplaceOnChanges":true}},"requiredInputs":["groupId","instanceProfileId"],"stateInputs":{"description":"Input properties used for looking up and filtering GroupInstanceProfile resources.\n","properties":{"groupId":{"type":"string","description":"This is the id of the group resource.\n","willReplaceOnChanges":true},"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/groupMember:GroupMember":{"description":"This resource allows you to attach users, service_principal, and groups as group members.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\nTo attach members to groups in the Databricks account, the provider must be configured with `host = \"https://accounts.cloud.databricks.com\"` on AWS deployments or `host = \"https://accounts.azuredatabricks.net\"` and authenticate using AAD tokens on Azure deployments\n\n## Example Usage\n\nAfter the following example, Bradley would have direct membership in group B and transitive membership in group A.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst a = new databricks.Group(\"a\", {displayName: \"A\"});\nconst b = new databricks.Group(\"b\", {displayName: \"B\"});\nconst ab = new databricks.GroupMember(\"ab\", {\n    groupId: a.id,\n    memberId: b.id,\n});\nconst bradley = new databricks.User(\"bradley\", {userName: \"bradley@example.com\"});\nconst bb = new databricks.GroupMember(\"bb\", {\n    groupId: b.id,\n    memberId: bradley.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\na = databricks.Group(\"a\", display_name=\"A\")\nb = databricks.Group(\"b\", display_name=\"B\")\nab = databricks.GroupMember(\"ab\",\n    group_id=a.id,\n    member_id=b.id)\nbradley = databricks.User(\"bradley\", user_name=\"bradley@example.com\")\nbb = databricks.GroupMember(\"bb\",\n    group_id=b.id,\n    member_id=bradley.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var a = new Databricks.Group(\"a\", new()\n    {\n        DisplayName = \"A\",\n    });\n\n    var b = new Databricks.Group(\"b\", new()\n    {\n        DisplayName = \"B\",\n    });\n\n    var ab = new Databricks.GroupMember(\"ab\", new()\n    {\n        GroupId = a.Id,\n        MemberId = b.Id,\n    });\n\n    var bradley = new Databricks.User(\"bradley\", new()\n    {\n        UserName = \"bradley@example.com\",\n    });\n\n    var bb = new Databricks.GroupMember(\"bb\", new()\n    {\n        GroupId = b.Id,\n        MemberId = bradley.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ta, err := databricks.NewGroup(ctx, \"a\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: 
pulumi.String(\"A\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tb, err := databricks.NewGroup(ctx, \"b\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"B\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"ab\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  a.ID(),\n\t\t\tMemberId: b.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tbradley, err := databricks.NewUser(ctx, \"bradley\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"bradley@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"bb\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  b.ID(),\n\t\t\tMemberId: bradley.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var a = new Group(\"a\", GroupArgs.builder()\n            .displayName(\"A\")\n            .build());\n\n        var b = new Group(\"b\", GroupArgs.builder()\n            .displayName(\"B\")\n            .build());\n\n        var ab = new GroupMember(\"ab\", GroupMemberArgs.builder()\n            .groupId(a.id())\n            .memberId(b.id())\n            .build());\n\n        var bradley = new User(\"bradley\", UserArgs.builder()\n            .userName(\"bradley@example.com\")\n            .build());\n\n        var bb = new GroupMember(\"bb\", GroupMemberArgs.builder()\n            .groupId(b.id())\n            .memberId(bradley.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  a:\n    type: databricks:Group\n    properties:\n      displayName: A\n  b:\n    type: databricks:Group\n    properties:\n      displayName: B\n  ab:\n    type: databricks:GroupMember\n    properties:\n      groupId: ${a.id}\n      memberId: ${b.id}\n  bradley:\n    type: databricks:User\n    properties:\n      userName: bradley@example.com\n  bb:\n    type: databricks:GroupMember\n    properties:\n      groupId: ${b.id}\n      memberId: ${bradley.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" 
databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003eto allow access from [predefined IP ranges](https://docs.databricks.com/security/network/ip-access-list.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto grant access to a workspace to an automation tool or application.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003edata to retrieve information about databricks_user.\n*\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" 
databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n\n","properties":{"groupId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group resource.\n"},"memberId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group, service principal, or user.\n"}},"required":["groupId","memberId"],"inputProperties":{"groupId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group resource.\n","willReplaceOnChanges":true},"memberId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group, service principal, or user.\n","willReplaceOnChanges":true}},"requiredInputs":["groupId","memberId"],"stateInputs":{"description":"Input properties used for looking up and filtering GroupMember resources.\n","properties":{"groupId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group resource.\n","willReplaceOnChanges":true},"memberId":{"type":"string","description":"This is the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e attribute (SCIM ID) of the group, service principal, or user.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/groupRole:GroupRole":{"description":"This resource allows you to attach a role to databricks_group. 
This role could be a pre-defined role such as account admin, or an instance profile ARN.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAttach an instance profile to a group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instanceProfile = new databricks.InstanceProfile(\"instance_profile\", {instanceProfileArn: \"my_instance_profile_arn\"});\nconst myGroup = new databricks.Group(\"my_group\", {displayName: \"my_group_name\"});\nconst myGroupInstanceProfile = new databricks.GroupRole(\"my_group_instance_profile\", {\n    groupId: myGroup.id,\n    role: instanceProfile.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance_profile = databricks.InstanceProfile(\"instance_profile\", instance_profile_arn=\"my_instance_profile_arn\")\nmy_group = databricks.Group(\"my_group\", display_name=\"my_group_name\")\nmy_group_instance_profile = databricks.GroupRole(\"my_group_instance_profile\",\n    group_id=my_group.id,\n    role=instance_profile.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instanceProfile = new Databricks.InstanceProfile(\"instance_profile\", new()\n    {\n        InstanceProfileArn = \"my_instance_profile_arn\",\n    });\n\n    var myGroup = new Databricks.Group(\"my_group\", new()\n    {\n        DisplayName = \"my_group_name\",\n    });\n\n    var myGroupInstanceProfile = new Databricks.GroupRole(\"my_group_instance_profile\", new()\n    {\n        GroupId = myGroup.Id,\n        Role = instanceProfile.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstanceProfile, err := databricks.NewInstanceProfile(ctx, \"instance_profile\", \u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.String(\"my_instance_profile_arn\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyGroup, err := databricks.NewGroup(ctx, \"my_group\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"my_group_name\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupRole(ctx, \"my_group_instance_profile\", \u0026databricks.GroupRoleArgs{\n\t\t\tGroupId: myGroup.ID(),\n\t\t\tRole:    instanceProfile.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.GroupRole;\nimport com.pulumi.databricks.GroupRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instanceProfile = new InstanceProfile(\"instanceProfile\", InstanceProfileArgs.builder()\n            
.instanceProfileArn(\"my_instance_profile_arn\")\n            .build());\n\n        var myGroup = new Group(\"myGroup\", GroupArgs.builder()\n            .displayName(\"my_group_name\")\n            .build());\n\n        var myGroupInstanceProfile = new GroupRole(\"myGroupInstanceProfile\", GroupRoleArgs.builder()\n            .groupId(myGroup.id())\n            .role(instanceProfile.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instanceProfile:\n    type: databricks:InstanceProfile\n    name: instance_profile\n    properties:\n      instanceProfileArn: my_instance_profile_arn\n  myGroup:\n    type: databricks:Group\n    name: my_group\n    properties:\n      displayName: my_group_name\n  myGroupInstanceProfile:\n    type: databricks:GroupRole\n    name: my_group_instance_profile\n    properties:\n      groupId: ${myGroup.id}\n      role: ${instanceProfile.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nAttach account admin role to an account-level group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst myGroup = new databricks.Group(\"my_group\", {displayName: \"my_group_name\"});\nconst myGroupAccountAdmin = new databricks.GroupRole(\"my_group_account_admin\", {\n    groupId: myGroup.id,\n    role: \"account_admin\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmy_group = databricks.Group(\"my_group\", display_name=\"my_group_name\")\nmy_group_account_admin = databricks.GroupRole(\"my_group_account_admin\",\n    group_id=my_group.id,\n    role=\"account_admin\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var myGroup = new Databricks.Group(\"my_group\", new()\n    {\n        DisplayName = \"my_group_name\",\n    });\n\n    var myGroupAccountAdmin = new Databricks.GroupRole(\"my_group_account_admin\", new()\n    {\n        GroupId = myGroup.Id,\n        Role = \"account_admin\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmyGroup, err := databricks.NewGroup(ctx, \"my_group\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"my_group_name\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupRole(ctx, \"my_group_account_admin\", \u0026databricks.GroupRoleArgs{\n\t\t\tGroupId: myGroup.ID(),\n\t\t\tRole:    pulumi.String(\"account_admin\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.GroupRole;\nimport com.pulumi.databricks.GroupRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var myGroup = new Group(\"myGroup\", GroupArgs.builder()\n            .displayName(\"my_group_name\")\n            
.build());\n\n        var myGroupAccountAdmin = new GroupRole(\"myGroupAccountAdmin\", GroupRoleArgs.builder()\n            .groupId(myGroup.id())\n            .role(\"account_admin\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myGroup:\n    type: databricks:Group\n    name: my_group\n    properties:\n      displayName: my_group_name\n  myGroupAccountAdmin:\n    type: databricks:GroupRole\n    name: my_group_account_admin\n    properties:\n      groupId: ${myGroup.id}\n      role: account_admin\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsBucketPolicy \" pulumi-lang-dotnet=\" databricks.getAwsBucketPolicy \" pulumi-lang-go=\" getAwsBucketPolicy \" pulumi-lang-python=\" get_aws_bucket_policy \" pulumi-lang-yaml=\" databricks.getAwsBucketPolicy \" pulumi-lang-java=\" databricks.getAwsBucketPolicy \"\u003e databricks.getAwsBucketPolicy \u003c/span\u003edata to configure a simple access policy for AWS S3 buckets, so that Databricks can access data in it.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" 
databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"groupId":{"type":"string","description":"This is the id of the group resource.\n"},"role":{"type":"string","description":"Either a role name or the ARN/ID of the instance profile resource.\n"}},"required":["groupId","role"],"inputProperties":{"groupId":{"type":"string","description":"This is the id of the group resource.\n","willReplaceOnChanges":true},"role":{"type":"string","description":"Either a role name or the ARN/ID of the instance profile resource.\n","willReplaceOnChanges":true}},"requiredInputs":["groupId","role"],"stateInputs":{"description":"Input properties used for looking up and filtering GroupRole resources.\n","properties":{"groupId":{"type":"string","description":"This is the id of the group resource.\n","willReplaceOnChanges":true},"role":{"type":"string","description":"Either a role name or the ARN/ID of the instance profile resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/instancePool:InstancePool":{"description":"This resource allows you to manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances. An instance pool reduces cluster start and auto-scaling times by maintaining a set of idle, ready-to-use cloud instances. When a cluster attached to a pool needs an instance, it first attempts to allocate one of the pool's idle instances. If the pool has no idle instances, it expands by allocating a new instance from the instance provider in order to accommodate the cluster's request. 
When a cluster releases an instance, it returns to the pool and is free for another cluster to use. Only clusters attached to a pool can use that pool's idle instances.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e It is important to know that different cloud service providers have different \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`diskSpecs`\" pulumi-lang-dotnet=\"`DiskSpecs`\" pulumi-lang-go=\"`diskSpecs`\" pulumi-lang-python=\"`disk_specs`\" pulumi-lang-yaml=\"`diskSpecs`\" pulumi-lang-java=\"`diskSpecs`\"\u003e`disk_specs`\u003c/span\u003e and potentially other configurations.\n\n\u003e \"auto\" \u003cspan pulumi-lang-nodejs=\"`zoneId`\" pulumi-lang-dotnet=\"`ZoneId`\" pulumi-lang-go=\"`zoneId`\" pulumi-lang-python=\"`zone_id`\" pulumi-lang-yaml=\"`zoneId`\" pulumi-lang-java=\"`zoneId`\"\u003e`zone_id`\u003c/span\u003e is only supported for fleet node types.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst smallest = databricks.getNodeType({});\nconst smallestNodes = new databricks.InstancePool(\"smallest_nodes\", {\n    instancePoolName: \"Smallest Nodes\",\n    minIdleInstances: 0,\n    maxCapacity: 300,\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    awsAttributes: {\n        availability: \"ON_DEMAND\",\n        zoneId: \"us-east-1a\",\n        spotBidPricePercent: 100,\n    },\n    idleInstanceAutoterminationMinutes: 10,\n    diskSpec: {\n        diskType: {\n            ebsVolumeType: \"GENERAL_PURPOSE_SSD\",\n        },\n        diskSize: 80,\n        diskCount: 1,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsmallest = databricks.get_node_type()\nsmallest_nodes = databricks.InstancePool(\"smallest_nodes\",\n    instance_pool_name=\"Smallest Nodes\",\n    min_idle_instances=0,\n    max_capacity=300,\n    node_type_id=smallest.id,\n    aws_attributes={\n        \"availability\": \"ON_DEMAND\",\n        \"zone_id\": \"us-east-1a\",\n        \"spot_bid_price_percent\": 100,\n    },\n    idle_instance_autotermination_minutes=10,\n    disk_spec={\n        \"disk_type\": {\n            \"ebs_volume_type\": \"GENERAL_PURPOSE_SSD\",\n        },\n        \"disk_size\": 80,\n        \"disk_count\": 1,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var smallest = Databricks.GetNodeType.Invoke();\n\n    var smallestNodes = new Databricks.InstancePool(\"smallest_nodes\", new()\n    {\n        InstancePoolName = \"Smallest Nodes\",\n        MinIdleInstances = 0,\n        MaxCapacity = 300,\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AwsAttributes = new Databricks.Inputs.InstancePoolAwsAttributesArgs\n        {\n            Availability = \"ON_DEMAND\",\n            ZoneId = \"us-east-1a\",\n            SpotBidPricePercent = 100,\n        },\n        IdleInstanceAutoterminationMinutes = 10,\n        DiskSpec = new Databricks.Inputs.InstancePoolDiskSpecArgs\n        {\n            DiskType = new 
Databricks.Inputs.InstancePoolDiskSpecDiskTypeArgs\n            {\n                EbsVolumeType = \"GENERAL_PURPOSE_SSD\",\n            },\n            DiskSize = 80,\n            DiskCount = 1,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewInstancePool(ctx, \"smallest_nodes\", \u0026databricks.InstancePoolArgs{\n\t\t\tInstancePoolName: pulumi.String(\"Smallest Nodes\"),\n\t\t\tMinIdleInstances: pulumi.Int(0),\n\t\t\tMaxCapacity:      pulumi.Int(300),\n\t\t\tNodeTypeId:       pulumi.String(smallest.Id),\n\t\t\tAwsAttributes: \u0026databricks.InstancePoolAwsAttributesArgs{\n\t\t\t\tAvailability:        pulumi.String(\"ON_DEMAND\"),\n\t\t\t\tZoneId:              pulumi.String(\"us-east-1a\"),\n\t\t\t\tSpotBidPricePercent: pulumi.Int(100),\n\t\t\t},\n\t\t\tIdleInstanceAutoterminationMinutes: pulumi.Int(10),\n\t\t\tDiskSpec: \u0026databricks.InstancePoolDiskSpecArgs{\n\t\t\t\tDiskType: \u0026databricks.InstancePoolDiskSpecDiskTypeArgs{\n\t\t\t\t\tEbsVolumeType: pulumi.String(\"GENERAL_PURPOSE_SSD\"),\n\t\t\t\t},\n\t\t\t\tDiskSize:  pulumi.Int(80),\n\t\t\t\tDiskCount: pulumi.Int(1),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.InstancePool;\nimport com.pulumi.databricks.InstancePoolArgs;\nimport com.pulumi.databricks.inputs.InstancePoolAwsAttributesArgs;\nimport com.pulumi.databricks.inputs.InstancePoolDiskSpecArgs;\nimport com.pulumi.databricks.inputs.InstancePoolDiskSpecDiskTypeArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .build());\n\n        var smallestNodes = new InstancePool(\"smallestNodes\", InstancePoolArgs.builder()\n            .instancePoolName(\"Smallest Nodes\")\n            .minIdleInstances(0)\n            .maxCapacity(300)\n            .nodeTypeId(smallest.id())\n            .awsAttributes(InstancePoolAwsAttributesArgs.builder()\n                .availability(\"ON_DEMAND\")\n                .zoneId(\"us-east-1a\")\n                .spotBidPricePercent(100)\n                .build())\n            .idleInstanceAutoterminationMinutes(10)\n            .diskSpec(InstancePoolDiskSpecArgs.builder()\n                .diskType(InstancePoolDiskSpecDiskTypeArgs.builder()\n                    .ebsVolumeType(\"GENERAL_PURPOSE_SSD\")\n                    .build())\n                .diskSize(80)\n                .diskCount(1)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  smallestNodes:\n    type: databricks:InstancePool\n    name: smallest_nodes\n    properties:\n      instancePoolName: Smallest Nodes\n      
minIdleInstances: 0\n      maxCapacity: 300\n      nodeTypeId: ${smallest.id}\n      awsAttributes:\n        availability: ON_DEMAND\n        zoneId: us-east-1a\n        spotBidPricePercent: '100'\n      idleInstanceAutoterminationMinutes: 10\n      diskSpec:\n        diskType:\n          ebsVolumeType: GENERAL_PURPOSE_SSD\n        diskSize: 80\n        diskCount: 1\nvariables:\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003ecan control which groups or individual users can create instance pools.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage* or *Attach to* individual instance pools.\n\n","properties":{"awsAttributes":{"$ref":"#/types/databricks:index/InstancePoolAwsAttributes:InstancePoolAwsAttributes"},"azureAttributes":{"$ref":"#/types/databricks:index/InstancePoolAzureAttributes:InstancePoolAzureAttributes"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS \u0026 Azure instances and Disk volumes). The tags of the instance pool will propagate to the clusters using the pool (see the [official documentation](https://docs.databricks.com/administration-guide/account-settings/usage-detail-tags-aws.html#tag-propagation)). Attempting to set the same tags in both cluster and instance pool will raise an error. *Databricks allows at most 43 custom tags.*\n"},"diskSpec":{"$ref":"#/types/databricks:index/InstancePoolDiskSpec:InstancePoolDiskSpec"},"enableElasticDisk":{"type":"boolean","description":"(Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.\n"},"gcpAttributes":{"$ref":"#/types/databricks:index/InstancePoolGcpAttributes:InstancePoolGcpAttributes"},"idleInstanceAutoterminationMinutes":{"type":"integer","description":"(Integer) The number of minutes that idle instances in excess of the\u003cspan pulumi-lang-nodejs=\" minIdleInstances \" pulumi-lang-dotnet=\" MinIdleInstances \" pulumi-lang-go=\" minIdleInstances \" pulumi-lang-python=\" min_idle_instances \" pulumi-lang-yaml=\" minIdleInstances \" pulumi-lang-java=\" minIdleInstances \"\u003e min_idle_instances \u003c/span\u003eare maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. 
If you specify 0, excess idle instances are removed as soon as possible.\n"},"instancePoolFleetAttributes":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributes:InstancePoolInstancePoolFleetAttributes"},"instancePoolId":{"type":"string"},"instancePoolName":{"type":"string","description":"(String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.\n"},"maxCapacity":{"type":"integer","description":"(Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling. There is no default limit, but as a [best practice](https://docs.databricks.com/clusters/instance-pools/pool-best-practices.html#configure-pools-to-control-cost), this should be set based on anticipated usage.\n"},"minIdleInstances":{"type":"integer","description":"(Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.\n"},"nodeTypeFlexibility":{"$ref":"#/types/databricks:index/InstancePoolNodeTypeFlexibility:InstancePoolNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n"},"nodeTypeId":{"type":"string","description":"(String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool's idle instances are allocated based on this type. You can retrieve a list of available node types by using the [List Node Types API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistnodetypes) call.\n"},"preloadedDockerImages":{"type":"array","items":{"$ref":"#/types/databricks:index/InstancePoolPreloadedDockerImage:InstancePoolPreloadedDockerImage"}},"preloadedSparkVersions":{"type":"array","items":{"type":"string"},"description":"(List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata source or via  [Runtime Versions API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistsparkversions) call.\n"},"providerConfig":{"$ref":"#/types/databricks:index/InstancePoolProviderConfig:InstancePoolProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"}},"required":["idleInstanceAutoterminationMinutes","instancePoolId","instancePoolName"],"inputProperties":{"awsAttributes":{"$ref":"#/types/databricks:index/InstancePoolAwsAttributes:InstancePoolAwsAttributes","willReplaceOnChanges":true},"azureAttributes":{"$ref":"#/types/databricks:index/InstancePoolAzureAttributes:InstancePoolAzureAttributes","willReplaceOnChanges":true},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS \u0026 Azure instances and Disk volumes). The tags of the instance pool will propagate to the clusters using the pool (see the [official documentation](https://docs.databricks.com/administration-guide/account-settings/usage-detail-tags-aws.html#tag-propagation)). Attempting to set the same tags in both cluster and instance pool will raise an error. *Databricks allows at most 43 custom tags.*\n"},"diskSpec":{"$ref":"#/types/databricks:index/InstancePoolDiskSpec:InstancePoolDiskSpec","willReplaceOnChanges":true},"enableElasticDisk":{"type":"boolean","description":"(Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.\n","willReplaceOnChanges":true},"gcpAttributes":{"$ref":"#/types/databricks:index/InstancePoolGcpAttributes:InstancePoolGcpAttributes","willReplaceOnChanges":true},"idleInstanceAutoterminationMinutes":{"type":"integer","description":"(Integer) The number of minutes that idle instances in excess of the\u003cspan pulumi-lang-nodejs=\" minIdleInstances \" pulumi-lang-dotnet=\" MinIdleInstances \" pulumi-lang-go=\" minIdleInstances \" pulumi-lang-python=\" min_idle_instances \" pulumi-lang-yaml=\" minIdleInstances \" pulumi-lang-java=\" minIdleInstances \"\u003e min_idle_instances \u003c/span\u003eare maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.\n"},"instancePoolFleetAttributes":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributes:InstancePoolInstancePoolFleetAttributes","willReplaceOnChanges":true},"instancePoolId":{"type":"string"},"instancePoolName":{"type":"string","description":"(String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.\n"},"maxCapacity":{"type":"integer","description":"(Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling. There is no default limit, but as a [best practice](https://docs.databricks.com/clusters/instance-pools/pool-best-practices.html#configure-pools-to-control-cost), this should be set based on anticipated usage.\n"},"minIdleInstances":{"type":"integer","description":"(Integer) The minimum number of idle instances maintained by the pool. 
This is in addition to any instances in use by active clusters.\n"},"nodeTypeFlexibility":{"$ref":"#/types/databricks:index/InstancePoolNodeTypeFlexibility:InstancePoolNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n","willReplaceOnChanges":true},"nodeTypeId":{"type":"string","description":"(String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool's idle instances are allocated based on this type. You can retrieve a list of available node types by using the [List Node Types API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistnodetypes) call.\n","willReplaceOnChanges":true},"preloadedDockerImages":{"type":"array","items":{"$ref":"#/types/databricks:index/InstancePoolPreloadedDockerImage:InstancePoolPreloadedDockerImage"},"willReplaceOnChanges":true},"preloadedSparkVersions":{"type":"array","items":{"type":"string"},"description":"(List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata source or via  [Runtime Versions API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistsparkversions) call.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/InstancePoolProviderConfig:InstancePoolProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"requiredInputs":["idleInstanceAutoterminationMinutes","instancePoolName"],"stateInputs":{"description":"Input properties used for looking up and filtering InstancePool resources.\n","properties":{"awsAttributes":{"$ref":"#/types/databricks:index/InstancePoolAwsAttributes:InstancePoolAwsAttributes","willReplaceOnChanges":true},"azureAttributes":{"$ref":"#/types/databricks:index/InstancePoolAzureAttributes:InstancePoolAzureAttributes","willReplaceOnChanges":true},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"(Map) Additional tags for instance pool resources. Databricks tags all pool resources (e.g. AWS \u0026 Azure instances and Disk volumes). The tags of the instance pool will propagate to the clusters using the pool (see the [official documentation](https://docs.databricks.com/administration-guide/account-settings/usage-detail-tags-aws.html#tag-propagation)). Attempting to set the same tags in both cluster and instance pool will raise an error. 
*Databricks allows at most 43 custom tags.*\n"},"diskSpec":{"$ref":"#/types/databricks:index/InstancePoolDiskSpec:InstancePoolDiskSpec","willReplaceOnChanges":true},"enableElasticDisk":{"type":"boolean","description":"(Bool) Autoscaling Local Storage: when enabled, the instances in the pool dynamically acquire additional disk space when they are running low on disk space.\n","willReplaceOnChanges":true},"gcpAttributes":{"$ref":"#/types/databricks:index/InstancePoolGcpAttributes:InstancePoolGcpAttributes","willReplaceOnChanges":true},"idleInstanceAutoterminationMinutes":{"type":"integer","description":"(Integer) The number of minutes that idle instances in excess of the\u003cspan pulumi-lang-nodejs=\" minIdleInstances \" pulumi-lang-dotnet=\" MinIdleInstances \" pulumi-lang-go=\" minIdleInstances \" pulumi-lang-python=\" min_idle_instances \" pulumi-lang-yaml=\" minIdleInstances \" pulumi-lang-java=\" minIdleInstances \"\u003e min_idle_instances \u003c/span\u003eare maintained by the pool before being terminated. If not specified, excess idle instances are terminated automatically after a default timeout period. If specified, the time must be between 0 and 10000 minutes. If you specify 0, excess idle instances are removed as soon as possible.\n"},"instancePoolFleetAttributes":{"$ref":"#/types/databricks:index/InstancePoolInstancePoolFleetAttributes:InstancePoolInstancePoolFleetAttributes","willReplaceOnChanges":true},"instancePoolId":{"type":"string"},"instancePoolName":{"type":"string","description":"(String) The name of the instance pool. This is required for create and edit operations. It must be unique, non-empty, and less than 100 characters.\n"},"maxCapacity":{"type":"integer","description":"(Integer) The maximum number of instances the pool can contain, including both idle instances and ones in use by clusters. Once the maximum capacity is reached, you cannot create new clusters from the pool and existing clusters cannot autoscale up until some instances are made idle in the pool via cluster termination or down-scaling. There is no default limit, but as a [best practice](https://docs.databricks.com/clusters/instance-pools/pool-best-practices.html#configure-pools-to-control-cost), this should be set based on anticipated usage.\n"},"minIdleInstances":{"type":"integer","description":"(Integer) The minimum number of idle instances maintained by the pool. This is in addition to any instances in use by active clusters.\n"},"nodeTypeFlexibility":{"$ref":"#/types/databricks:index/InstancePoolNodeTypeFlexibility:InstancePoolNodeTypeFlexibility","description":"a block describing the alternative driver node types if \u003cspan pulumi-lang-nodejs=\"`nodeTypeId`\" pulumi-lang-dotnet=\"`NodeTypeId`\" pulumi-lang-go=\"`nodeTypeId`\" pulumi-lang-python=\"`node_type_id`\" pulumi-lang-yaml=\"`nodeTypeId`\" pulumi-lang-java=\"`nodeTypeId`\"\u003e`node_type_id`\u003c/span\u003e isn't available.\n","willReplaceOnChanges":true},"nodeTypeId":{"type":"string","description":"(String) The node type for the instances in the pool. All clusters attached to the pool inherit this node type and the pool's idle instances are allocated based on this type. 
You can retrieve a list of available node types by using the [List Node Types API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistnodetypes) call.\n","willReplaceOnChanges":true},"preloadedDockerImages":{"type":"array","items":{"$ref":"#/types/databricks:index/InstancePoolPreloadedDockerImage:InstancePoolPreloadedDockerImage"},"willReplaceOnChanges":true},"preloadedSparkVersions":{"type":"array","items":{"type":"string"},"description":"(List) A list with at most one runtime version the pool installs on each instance. Pool clusters that use a preloaded runtime version start faster as they do not have to wait for the image to download. You can retrieve them via\u003cspan pulumi-lang-nodejs=\" databricks.getSparkVersion \" pulumi-lang-dotnet=\" databricks.getSparkVersion \" pulumi-lang-go=\" getSparkVersion \" pulumi-lang-python=\" get_spark_version \" pulumi-lang-yaml=\" databricks.getSparkVersion \" pulumi-lang-java=\" databricks.getSparkVersion \"\u003e databricks.getSparkVersion \u003c/span\u003edata source or via  [Runtime Versions API](https://docs.databricks.com/dev-tools/api/latest/clusters.html#clusterclusterservicelistsparkversions) call.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/InstancePoolProviderConfig:InstancePoolProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/instanceProfile:InstanceProfile":{"description":"This resource allows you to manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount. The following example demonstrates how to create an instance profile and create a cluster with it. When creating a new \u003cspan pulumi-lang-nodejs=\"`databricks.InstanceProfile`\" pulumi-lang-dotnet=\"`databricks.InstanceProfile`\" pulumi-lang-go=\"`InstanceProfile`\" pulumi-lang-python=\"`InstanceProfile`\" pulumi-lang-yaml=\"`databricks.InstanceProfile`\" pulumi-lang-java=\"`databricks.InstanceProfile`\"\u003e`databricks.InstanceProfile`\u003c/span\u003e, Databricks validates that it has sufficient permissions to launch instances with the instance profile. 
This validation uses AWS dry-run mode for the [AWS EC2 RunInstances API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003ewith Unity Catalog to manage storage credentials, which provides a better and faster way for managing credential security.\n\n## Usage with Cluster Policies\n\nIt is advised to keep all common configurations in Cluster Policies to maintain control of the environments launched, so \u003cspan pulumi-lang-nodejs=\"`databricks.Cluster`\" pulumi-lang-dotnet=\"`databricks.Cluster`\" pulumi-lang-go=\"`Cluster`\" pulumi-lang-python=\"`Cluster`\" pulumi-lang-yaml=\"`databricks.Cluster`\" pulumi-lang-java=\"`databricks.Cluster`\"\u003e`databricks.Cluster`\u003c/span\u003e above could be replaced with \u003cspan pulumi-lang-nodejs=\"`databricks.ClusterPolicy`\" pulumi-lang-dotnet=\"`databricks.ClusterPolicy`\" pulumi-lang-go=\"`ClusterPolicy`\" pulumi-lang-python=\"`ClusterPolicy`\" pulumi-lang-yaml=\"`databricks.ClusterPolicy`\" pulumi-lang-java=\"`databricks.ClusterPolicy`\"\u003e`databricks.ClusterPolicy`\u003c/span\u003e:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ClusterPolicy(\"this\", {\n    name: \"Policy with predefined instance profile\",\n    definition: JSON.stringify({\n        \"aws_attributes.instance_profile_arn\": {\n            type: \"fixed\",\n            value: shared.id,\n        },\n    }),\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\nthis = databricks.ClusterPolicy(\"this\",\n    name=\"Policy with predefined instance profile\",\n    definition=json.dumps({\n        \"aws_attributes.instance_profile_arn\": {\n            \"type\": \"fixed\",\n            \"value\": shared[\"id\"],\n        },\n    }))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ClusterPolicy(\"this\", new()\n    {\n        Name = \"Policy with predefined instance profile\",\n        Definition = JsonSerializer.Serialize(new Dictionary\u003cstring, object?\u003e\n        {\n            [\"aws_attributes.instance_profile_arn\"] = new Dictionary\u003cstring, object?\u003e\n            {\n                [\"type\"] = \"fixed\",\n                [\"value\"] = shared.Id,\n            },\n        }),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttmpJSON0, err := json.Marshal(map[string]interface{}{\n\t\t\t\"aws_attributes.instance_profile_arn\": map[string]interface{}{\n\t\t\t\t\"type\":  \"fixed\",\n\t\t\t\t\"value\": shared.Id,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\t_, err 
= databricks.NewClusterPolicy(ctx, \"this\", \u0026databricks.ClusterPolicyArgs{\n\t\t\tName:       pulumi.String(\"Policy with predefined instance profile\"),\n\t\t\tDefinition: pulumi.String(json0),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ClusterPolicy;\nimport com.pulumi.databricks.ClusterPolicyArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ClusterPolicy(\"this\", ClusterPolicyArgs.builder()\n            .name(\"Policy with predefined instance profile\")\n            .definition(serializeJson(\n                jsonObject(\n                    jsonProperty(\"aws_attributes.instance_profile_arn\", jsonObject(\n                        jsonProperty(\"type\", \"fixed\"),\n                        jsonProperty(\"value\", shared.id())\n                    ))\n                )))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ClusterPolicy\n    properties:\n      name: Policy with predefined instance profile\n      definition:\n        fn::toJSON:\n          aws_attributes.instance_profile_arn:\n            type: fixed\n            value: ${shared.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Granting access to all users\n\nYou can make instance profile available to all users by associating it with the special group called \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e through\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata source.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.InstanceProfile(\"this\", {instanceProfileArn: shared.id});\nconst users = databricks.getGroup({\n    displayName: \"users\",\n});\nconst all = new databricks.GroupInstanceProfile(\"all\", {\n    groupId: users.then(users =\u003e users.id),\n    instanceProfileId: _this.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.InstanceProfile(\"this\", instance_profile_arn=shared[\"id\"])\nusers = databricks.get_group(display_name=\"users\")\nall = databricks.GroupInstanceProfile(\"all\",\n    group_id=users.id,\n    instance_profile_id=this.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.InstanceProfile(\"this\", new()\n    {\n        InstanceProfileArn = shared.Id,\n    });\n\n    var users = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"users\",\n    });\n\n    var 
all = new Databricks.GroupInstanceProfile(\"all\", new()\n    {\n        GroupId = users.Apply(getGroupResult =\u003e getGroupResult.Id),\n        InstanceProfileId = @this.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewInstanceProfile(ctx, \"this\", \u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.Any(shared.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tusers, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"users\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupInstanceProfile(ctx, \"all\", \u0026databricks.GroupInstanceProfileArgs{\n\t\t\tGroupId:           pulumi.String(users.Id),\n\t\t\tInstanceProfileId: this.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.GroupInstanceProfile;\nimport com.pulumi.databricks.GroupInstanceProfileArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new InstanceProfile(\"this\", InstanceProfileArgs.builder()\n            .instanceProfileArn(shared.id())\n            .build());\n\n        final var users = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"users\")\n            .build());\n\n        var all = new GroupInstanceProfile(\"all\", GroupInstanceProfileArgs.builder()\n            .groupId(users.id())\n            .instanceProfileId(this_.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:InstanceProfile\n    properties:\n      instanceProfileArn: ${shared.id}\n  all:\n    type: databricks:GroupInstanceProfile\n    properties:\n      groupId: ${users.id}\n      instanceProfileId: ${this.id}\nvariables:\n  users:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: users\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"iamRoleArn":{"type":"string","description":"The AWS IAM role ARN of the role associated with the instance profile. It must have the form `arn:aws:iam::\u003caccount-id\u003e:role/\u003cname\u003e`. 
This field is required if your role name and instance profile name do not match and you want to use the instance profile with Databricks SQL Serverless.\n"},"instanceProfileArn":{"type":"string","description":"`ARN` attribute of \u003cspan pulumi-lang-nodejs=\"`awsIamInstanceProfile`\" pulumi-lang-dotnet=\"`AwsIamInstanceProfile`\" pulumi-lang-go=\"`awsIamInstanceProfile`\" pulumi-lang-python=\"`aws_iam_instance_profile`\" pulumi-lang-yaml=\"`awsIamInstanceProfile`\" pulumi-lang-java=\"`awsIamInstanceProfile`\"\u003e`aws_iam_instance_profile`\u003c/span\u003e output, the EC2 instance profile association to AWS IAM role. This ARN would be validated upon resource creation.\n"},"isMetaInstanceProfile":{"type":"boolean","description":"Whether the instance profile is a meta instance profile. Used only in [IAM credential passthrough](https://docs.databricks.com/security/credential-passthrough/iam-passthrough.html).\n"},"providerConfig":{"$ref":"#/types/databricks:index/InstanceProfileProviderConfig:InstanceProfileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"skipValidation":{"type":"boolean","description":"**For advanced usage only.** If validation fails with an error message that does not indicate an IAM related permission issue, (e.g. \"Your requested instance type is not supported in your requested availability zone\"), you can pass this flag to skip the validation and forcibly add the instance profile.\n"}},"required":["instanceProfileArn","skipValidation"],"inputProperties":{"iamRoleArn":{"type":"string","description":"The AWS IAM role ARN of the role associated with the instance profile. It must have the form `arn:aws:iam::\u003caccount-id\u003e:role/\u003cname\u003e`. This field is required if your role name and instance profile name do not match and you want to use the instance profile with Databricks SQL Serverless.\n"},"instanceProfileArn":{"type":"string","description":"`ARN` attribute of \u003cspan pulumi-lang-nodejs=\"`awsIamInstanceProfile`\" pulumi-lang-dotnet=\"`AwsIamInstanceProfile`\" pulumi-lang-go=\"`awsIamInstanceProfile`\" pulumi-lang-python=\"`aws_iam_instance_profile`\" pulumi-lang-yaml=\"`awsIamInstanceProfile`\" pulumi-lang-java=\"`awsIamInstanceProfile`\"\u003e`aws_iam_instance_profile`\u003c/span\u003e output, the EC2 instance profile association to AWS IAM role. This ARN would be validated upon resource creation.\n"},"isMetaInstanceProfile":{"type":"boolean","description":"Whether the instance profile is a meta instance profile. Used only in [IAM credential passthrough](https://docs.databricks.com/security/credential-passthrough/iam-passthrough.html).\n"},"providerConfig":{"$ref":"#/types/databricks:index/InstanceProfileProviderConfig:InstanceProfileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"skipValidation":{"type":"boolean","description":"**For advanced usage only.** If validation fails with an error message that does not indicate an IAM related permission issue, (e.g. 
\"Your requested instance type is not supported in your requested availability zone\"), you can pass this flag to skip the validation and forcibly add the instance profile.\n"}},"requiredInputs":["instanceProfileArn"],"stateInputs":{"description":"Input properties used for looking up and filtering InstanceProfile resources.\n","properties":{"iamRoleArn":{"type":"string","description":"The AWS IAM role ARN of the role associated with the instance profile. It must have the form `arn:aws:iam::\u003caccount-id\u003e:role/\u003cname\u003e`. This field is required if your role name and instance profile name do not match and you want to use the instance profile with Databricks SQL Serverless.\n"},"instanceProfileArn":{"type":"string","description":"`ARN` attribute of \u003cspan pulumi-lang-nodejs=\"`awsIamInstanceProfile`\" pulumi-lang-dotnet=\"`AwsIamInstanceProfile`\" pulumi-lang-go=\"`awsIamInstanceProfile`\" pulumi-lang-python=\"`aws_iam_instance_profile`\" pulumi-lang-yaml=\"`awsIamInstanceProfile`\" pulumi-lang-java=\"`awsIamInstanceProfile`\"\u003e`aws_iam_instance_profile`\u003c/span\u003e output, the EC2 instance profile association to AWS IAM role. This ARN would be validated upon resource creation.\n"},"isMetaInstanceProfile":{"type":"boolean","description":"Whether the instance profile is a meta instance profile. Used only in [IAM credential passthrough](https://docs.databricks.com/security/credential-passthrough/iam-passthrough.html).\n"},"providerConfig":{"$ref":"#/types/databricks:index/InstanceProfileProviderConfig:InstanceProfileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"skipValidation":{"type":"boolean","description":"**For advanced usage only.** If validation fails with an error message that does not indicate an IAM related permission issue, (e.g. \"Your requested instance type is not supported in your requested availability zone\"), you can pass this flag to skip the validation and forcibly add the instance profile.\n"}},"type":"object"}},"databricks:index/ipAccessList:IpAccessList":{"description":"Security-conscious enterprises that use cloud SaaS applications need to restrict access to their own employees. Authentication helps to prove user identity, but that does not enforce network location of the users. Accessing a cloud service from an unsecured network can pose security risks to an enterprise, especially when the user may have authorized access to sensitive or personal data. Enterprise network perimeters apply security policies and limit access to external services (for example, firewalls, proxies, DLP, and logging), so access beyond these controls are assumed to be untrusted. Please see [IP Access List](https://docs.databricks.com/security/network/ip-access-list.html) for full feature documentation.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e The total number of IP addresses and CIDR scopes provided across all ACL Lists in a workspace can not exceed 1000.  
Refer to the docs above for specifics.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.WorkspaceConf(\"this\", {customConfig: {\n    enableIpAccessLists: \"true\",\n}});\nconst allowed_list = new databricks.IpAccessList(\"allowed-list\", {\n    label: \"allow_in\",\n    listType: \"ALLOW\",\n    ipAddresses: [\n        \"1.1.1.1\",\n        \"1.2.3.0/24\",\n        \"1.2.5.0/24\",\n    ],\n}, {\n    dependsOn: [_this],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.WorkspaceConf(\"this\", custom_config={\n    \"enableIpAccessLists\": \"true\",\n})\nallowed_list = databricks.IpAccessList(\"allowed-list\",\n    label=\"allow_in\",\n    list_type=\"ALLOW\",\n    ip_addresses=[\n        \"1.1.1.1\",\n        \"1.2.3.0/24\",\n        \"1.2.5.0/24\",\n    ],\n    opts = pulumi.ResourceOptions(depends_on=[this]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.WorkspaceConf(\"this\", new()\n    {\n        CustomConfig = \n        {\n            { \"enableIpAccessLists\", \"true\" },\n        },\n    });\n\n    var allowed_list = new Databricks.IpAccessList(\"allowed-list\", new()\n    {\n        Label = \"allow_in\",\n        ListType = \"ALLOW\",\n        IpAddresses = new[]\n        {\n            \"1.1.1.1\",\n            \"1.2.3.0/24\",\n            \"1.2.5.0/24\",\n        },\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            @this,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewWorkspaceConf(ctx, \"this\", \u0026databricks.WorkspaceConfArgs{\n\t\t\tCustomConfig: pulumi.StringMap{\n\t\t\t\t\"enableIpAccessLists\": pulumi.String(\"true\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewIpAccessList(ctx, \"allowed-list\", \u0026databricks.IpAccessListArgs{\n\t\t\tLabel:    pulumi.String(\"allow_in\"),\n\t\t\tListType: pulumi.String(\"ALLOW\"),\n\t\t\tIpAddresses: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"1.1.1.1\"),\n\t\t\t\tpulumi.String(\"1.2.3.0/24\"),\n\t\t\t\tpulumi.String(\"1.2.5.0/24\"),\n\t\t\t},\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthis,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.WorkspaceConf;\nimport com.pulumi.databricks.WorkspaceConfArgs;\nimport com.pulumi.databricks.IpAccessList;\nimport com.pulumi.databricks.IpAccessListArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new WorkspaceConf(\"this\", WorkspaceConfArgs.builder()\n            
.customConfig(Map.of(\"enableIpAccessLists\", \"true\"))\n            .build());\n\n        var allowed_list = new IpAccessList(\"allowed-list\", IpAccessListArgs.builder()\n            .label(\"allow_in\")\n            .listType(\"ALLOW\")\n            .ipAddresses(            \n                \"1.1.1.1\",\n                \"1.2.3.0/24\",\n                \"1.2.5.0/24\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(this_)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:WorkspaceConf\n    properties:\n      customConfig:\n        enableIpAccessLists: true\n  allowed-list:\n    type: databricks:IpAccessList\n    properties:\n      label: allow_in\n      listType: ALLOW\n      ipAddresses:\n        - 1.1.1.1\n        - 1.2.3.0/24\n        - 1.2.5.0/24\n    options:\n      dependsOn:\n        - ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eto create a [Private Access Setting](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#step-5-create-a-private-access-settings-configuration-using-the-databricks-account-api) that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data 
access in Unity Catalog.\n\n","properties":{"enabled":{"type":"boolean","description":"Boolean \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e indicating whether this list should be active.  Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e\n"},"ipAddresses":{"type":"array","items":{"type":"string"},"description":"A string list of IP addresses and CIDR ranges.\n"},"label":{"type":"string","description":"This is the display name for the given IP ACL List.\n"},"listType":{"type":"string","description":"Can only be \"ALLOW\" or \"BLOCK\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/IpAccessListProviderConfig:IpAccessListProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"required":["ipAddresses","label","listType"],"inputProperties":{"enabled":{"type":"boolean","description":"Boolean \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e indicating whether this list should be active.  Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e\n"},"ipAddresses":{"type":"array","items":{"type":"string"},"description":"A string list of IP addresses and CIDR ranges.\n"},"label":{"type":"string","description":"This is the display name for the given IP ACL List.\n"},"listType":{"type":"string","description":"Can only be \"ALLOW\" or \"BLOCK\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/IpAccessListProviderConfig:IpAccessListProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"requiredInputs":["ipAddresses","label","listType"],"stateInputs":{"description":"Input properties used for looking up and filtering IpAccessList resources.\n","properties":{"enabled":{"type":"boolean","description":"Boolean \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e indicating whether this list should be active.  
Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e\n"},"ipAddresses":{"type":"array","items":{"type":"string"},"description":"A string list of IP addresses and CIDR ranges.\n"},"label":{"type":"string","description":"This is the display name for the given IP ACL List.\n"},"listType":{"type":"string","description":"Can only be \"ALLOW\" or \"BLOCK\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/IpAccessListProviderConfig:IpAccessListProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/job:Job":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.Job`\" pulumi-lang-dotnet=\"`databricks.Job`\" pulumi-lang-go=\"`Job`\" pulumi-lang-python=\"`Job`\" pulumi-lang-yaml=\"`databricks.Job`\" pulumi-lang-java=\"`databricks.Job`\"\u003e`databricks.Job`\u003c/span\u003e resource allows you to manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003e In Pulumi configuration, it is recommended to define tasks in alphabetical order of their \u003cspan pulumi-lang-nodejs=\"`taskKey`\" pulumi-lang-dotnet=\"`TaskKey`\" pulumi-lang-go=\"`taskKey`\" pulumi-lang-python=\"`task_key`\" pulumi-lang-yaml=\"`taskKey`\" pulumi-lang-java=\"`taskKey`\"\u003e`task_key`\u003c/span\u003e arguments, so that you get a consistent and readable diff. Whenever tasks are added or removed, or \u003cspan pulumi-lang-nodejs=\"`taskKey`\" pulumi-lang-dotnet=\"`TaskKey`\" pulumi-lang-go=\"`taskKey`\" pulumi-lang-python=\"`task_key`\" pulumi-lang-yaml=\"`taskKey`\" pulumi-lang-java=\"`taskKey`\"\u003e`task_key`\u003c/span\u003e is renamed, you'll observe a change in the majority of tasks. It's related to the fact that the current version of the provider treats \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e blocks as an ordered list. Alternatively, the \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block could have been an unordered set, though end-users would see the entire block replaced upon a change in a single property of the task.\n\nIt is possible to create [a Databricks job](https://docs.databricks.com/aws/en/jobs/) using \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e blocks. 
A single task is defined with the \u003cspan pulumi-lang-nodejs=\"`task`\" pulumi-lang-dotnet=\"`Task`\" pulumi-lang-go=\"`task`\" pulumi-lang-python=\"`task`\" pulumi-lang-yaml=\"`task`\" pulumi-lang-java=\"`task`\"\u003e`task`\u003c/span\u003e block containing one of the `*_task` blocks, \u003cspan pulumi-lang-nodejs=\"`taskKey`\" pulumi-lang-dotnet=\"`TaskKey`\" pulumi-lang-go=\"`taskKey`\" pulumi-lang-python=\"`task_key`\" pulumi-lang-yaml=\"`taskKey`\" pulumi-lang-java=\"`taskKey`\"\u003e`task_key`\u003c/span\u003e, and additional arguments described below.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Job(\"this\", {\n    name: \"Job with multiple tasks\",\n    description: \"This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\",\n    jobClusters: [{\n        jobClusterKey: \"j\",\n        newCluster: {\n            numWorkers: 2,\n            sparkVersion: latest.id,\n            nodeTypeId: smallest.id,\n        },\n    }],\n    tasks: [\n        {\n            taskKey: \"a\",\n            newCluster: {\n                numWorkers: 1,\n                sparkVersion: latest.id,\n                nodeTypeId: smallest.id,\n            },\n            notebookTask: {\n                notebookPath: thisDatabricksNotebook.path,\n            },\n        },\n        {\n            taskKey: \"b\",\n            dependsOns: [{\n                taskKey: \"a\",\n            }],\n            existingClusterId: shared.id,\n            sparkJarTask: {\n                mainClassName: \"com.acme.data.Main\",\n            },\n        },\n        {\n            taskKey: \"c\",\n            jobClusterKey: \"j\",\n            notebookTask: {\n                notebookPath: thisDatabricksNotebook.path,\n            },\n        },\n        {\n            taskKey: \"d\",\n            pipelineTask: {\n                pipelineId: thisDatabricksPipeline.id,\n            },\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Job(\"this\",\n    name=\"Job with multiple tasks\",\n    description=\"This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\",\n    job_clusters=[{\n        \"job_cluster_key\": \"j\",\n        \"new_cluster\": {\n            \"num_workers\": 2,\n            \"spark_version\": latest[\"id\"],\n            \"node_type_id\": smallest[\"id\"],\n        },\n    }],\n    tasks=[\n        {\n            \"task_key\": \"a\",\n            \"new_cluster\": {\n                \"num_workers\": 1,\n                \"spark_version\": latest[\"id\"],\n                \"node_type_id\": smallest[\"id\"],\n            },\n            \"notebook_task\": {\n                \"notebook_path\": this_databricks_notebook[\"path\"],\n            },\n        },\n        {\n            \"task_key\": \"b\",\n            \"depends_ons\": [{\n                \"task_key\": \"a\",\n            }],\n            \"existing_cluster_id\": shared[\"id\"],\n            \"spark_jar_task\": {\n                \"main_class_name\": \"com.acme.data.Main\",\n            },\n        },\n        {\n            \"task_key\": \"c\",\n            \"job_cluster_key\": \"j\",\n            \"notebook_task\": {\n                \"notebook_path\": 
this_databricks_notebook[\"path\"],\n            },\n        },\n        {\n            \"task_key\": \"d\",\n            \"pipeline_task\": {\n                \"pipeline_id\": this_databricks_pipeline[\"id\"],\n            },\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Job(\"this\", new()\n    {\n        Name = \"Job with multiple tasks\",\n        Description = \"This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\",\n        JobClusters = new[]\n        {\n            new Databricks.Inputs.JobJobClusterArgs\n            {\n                JobClusterKey = \"j\",\n                NewCluster = new Databricks.Inputs.JobJobClusterNewClusterArgs\n                {\n                    NumWorkers = 2,\n                    SparkVersion = latest.Id,\n                    NodeTypeId = smallest.Id,\n                },\n            },\n        },\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"a\",\n                NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs\n                {\n                    NumWorkers = 1,\n                    SparkVersion = latest.Id,\n                    NodeTypeId = smallest.Id,\n                },\n                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs\n                {\n                    NotebookPath = thisDatabricksNotebook.Path,\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"b\",\n                DependsOns = new[]\n                {\n                    new Databricks.Inputs.JobTaskDependsOnArgs\n                    {\n                        TaskKey = \"a\",\n                    },\n                },\n                ExistingClusterId = shared.Id,\n                SparkJarTask = new Databricks.Inputs.JobTaskSparkJarTaskArgs\n                {\n                    MainClassName = \"com.acme.data.Main\",\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"c\",\n                JobClusterKey = \"j\",\n                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs\n                {\n                    NotebookPath = thisDatabricksNotebook.Path,\n                },\n            },\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"d\",\n                PipelineTask = new Databricks.Inputs.JobTaskPipelineTaskArgs\n                {\n                    PipelineId = thisDatabricksPipeline.Id,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewJob(ctx, \"this\", \u0026databricks.JobArgs{\n\t\t\tName:        pulumi.String(\"Job with multiple tasks\"),\n\t\t\tDescription: pulumi.String(\"This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\"),\n\t\t\tJobClusters: 
databricks.JobJobClusterArray{\n\t\t\t\t\u0026databricks.JobJobClusterArgs{\n\t\t\t\t\tJobClusterKey: pulumi.String(\"j\"),\n\t\t\t\t\tNewCluster: \u0026databricks.JobJobClusterNewClusterArgs{\n\t\t\t\t\t\tNumWorkers:   pulumi.Int(2),\n\t\t\t\t\t\tSparkVersion: pulumi.Any(latest.Id),\n\t\t\t\t\t\tNodeTypeId:   pulumi.Any(smallest.Id),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"a\"),\n\t\t\t\t\tNewCluster: \u0026databricks.JobTaskNewClusterArgs{\n\t\t\t\t\t\tNumWorkers:   pulumi.Int(1),\n\t\t\t\t\t\tSparkVersion: pulumi.Any(latest.Id),\n\t\t\t\t\t\tNodeTypeId:   pulumi.Any(smallest.Id),\n\t\t\t\t\t},\n\t\t\t\t\tNotebookTask: \u0026databricks.JobTaskNotebookTaskArgs{\n\t\t\t\t\t\tNotebookPath: pulumi.Any(thisDatabricksNotebook.Path),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"b\"),\n\t\t\t\t\tDependsOns: databricks.JobTaskDependsOnArray{\n\t\t\t\t\t\t\u0026databricks.JobTaskDependsOnArgs{\n\t\t\t\t\t\t\tTaskKey: pulumi.String(\"a\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tExistingClusterId: pulumi.Any(shared.Id),\n\t\t\t\t\tSparkJarTask: \u0026databricks.JobTaskSparkJarTaskArgs{\n\t\t\t\t\t\tMainClassName: pulumi.String(\"com.acme.data.Main\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey:       pulumi.String(\"c\"),\n\t\t\t\t\tJobClusterKey: pulumi.String(\"j\"),\n\t\t\t\t\tNotebookTask: \u0026databricks.JobTaskNotebookTaskArgs{\n\t\t\t\t\t\tNotebookPath: pulumi.Any(thisDatabricksNotebook.Path),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"d\"),\n\t\t\t\t\tPipelineTask: \u0026databricks.JobTaskPipelineTaskArgs{\n\t\t\t\t\t\tPipelineId: pulumi.Any(thisDatabricksPipeline.Id),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobJobClusterArgs;\nimport com.pulumi.databricks.inputs.JobJobClusterNewClusterArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskNewClusterArgs;\nimport com.pulumi.databricks.inputs.JobTaskNotebookTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskSparkJarTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskPipelineTaskArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Job(\"this\", JobArgs.builder()\n            .name(\"Job with multiple tasks\")\n            .description(\"This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\")\n            .jobClusters(JobJobClusterArgs.builder()\n                .jobClusterKey(\"j\")\n                .newCluster(JobJobClusterNewClusterArgs.builder()\n                    .numWorkers(2)\n                    .sparkVersion(latest.id())\n                    .nodeTypeId(smallest.id())\n                    .build())\n                .build())\n       
     .tasks(            \n                JobTaskArgs.builder()\n                    .taskKey(\"a\")\n                    .newCluster(JobTaskNewClusterArgs.builder()\n                        .numWorkers(1)\n                        .sparkVersion(latest.id())\n                        .nodeTypeId(smallest.id())\n                        .build())\n                    .notebookTask(JobTaskNotebookTaskArgs.builder()\n                        .notebookPath(thisDatabricksNotebook.path())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"b\")\n                    .dependsOns(JobTaskDependsOnArgs.builder()\n                        .taskKey(\"a\")\n                        .build())\n                    .existingClusterId(shared.id())\n                    .sparkJarTask(JobTaskSparkJarTaskArgs.builder()\n                        .mainClassName(\"com.acme.data.Main\")\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"c\")\n                    .jobClusterKey(\"j\")\n                    .notebookTask(JobTaskNotebookTaskArgs.builder()\n                        .notebookPath(thisDatabricksNotebook.path())\n                        .build())\n                    .build(),\n                JobTaskArgs.builder()\n                    .taskKey(\"d\")\n                    .pipelineTask(JobTaskPipelineTaskArgs.builder()\n                        .pipelineId(thisDatabricksPipeline.id())\n                        .build())\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Job\n    properties:\n      name: Job with multiple tasks\n      description: This job executes multiple tasks on a shared job cluster, which will be provisioned as part of execution, and terminated once all tasks are finished.\n      jobClusters:\n        - jobClusterKey: j\n          newCluster:\n            numWorkers: 2\n            sparkVersion: ${latest.id}\n            nodeTypeId: ${smallest.id}\n      tasks:\n        - taskKey: a\n          newCluster:\n            numWorkers: 1\n            sparkVersion: ${latest.id}\n            nodeTypeId: ${smallest.id}\n          notebookTask:\n            notebookPath: ${thisDatabricksNotebook.path}\n        - taskKey: b\n          dependsOns:\n            - taskKey: a\n          existingClusterId: ${shared.id}\n          sparkJarTask:\n            mainClassName: com.acme.data.Main\n        - taskKey: c\n          jobClusterKey: j\n          notebookTask:\n            notebookPath: ${thisDatabricksNotebook.path}\n        - taskKey: d\n          pipelineTask:\n            pipelineId: ${thisDatabricksPipeline.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\nBy default, all users can create and modify jobs unless an administrator [enables jobs access control](https://docs.databricks.com/administration-guide/access-control/jobs-acl.html). 
With jobs access control, individual permissions determine a user's abilities.\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Can View*, *Can Manage Run*, and *Can Manage*.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003ecan control which kinds of clusters users can create for jobs.\n\n","properties":{"alwaysRunning":{"type":"boolean","description":"(Bool) Whenever the job is always running, like a Spark Streaming application, on every update restart the current active run or start it again, if nothing it is not running. False by default. Any job runs are started with \u003cspan pulumi-lang-nodejs=\"`parameters`\" pulumi-lang-dotnet=\"`Parameters`\" pulumi-lang-go=\"`parameters`\" pulumi-lang-python=\"`parameters`\" pulumi-lang-yaml=\"`parameters`\" pulumi-lang-java=\"`parameters`\"\u003e`parameters`\u003c/span\u003e specified in \u003cspan pulumi-lang-nodejs=\"`sparkJarTask`\" pulumi-lang-dotnet=\"`SparkJarTask`\" pulumi-lang-go=\"`sparkJarTask`\" pulumi-lang-python=\"`spark_jar_task`\" pulumi-lang-yaml=\"`sparkJarTask`\" pulumi-lang-java=\"`sparkJarTask`\"\u003e`spark_jar_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkSubmitTask`\" pulumi-lang-dotnet=\"`SparkSubmitTask`\" pulumi-lang-go=\"`sparkSubmitTask`\" pulumi-lang-python=\"`spark_submit_task`\" pulumi-lang-yaml=\"`sparkSubmitTask`\" pulumi-lang-java=\"`sparkSubmitTask`\"\u003e`spark_submit_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkPythonTask`\" pulumi-lang-dotnet=\"`SparkPythonTask`\" pulumi-lang-go=\"`sparkPythonTask`\" pulumi-lang-python=\"`spark_python_task`\" pulumi-lang-yaml=\"`sparkPythonTask`\" pulumi-lang-java=\"`sparkPythonTask`\"\u003e`spark_python_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`notebookTask`\" pulumi-lang-dotnet=\"`NotebookTask`\" pulumi-lang-go=\"`notebookTask`\" pulumi-lang-python=\"`notebook_task`\" pulumi-lang-yaml=\"`notebookTask`\" pulumi-lang-java=\"`notebookTask`\"\u003e`notebook_task`\u003c/span\u003e blocks.\n","deprecationMessage":"always_running will be replaced by\u003cspan pulumi-lang-nodejs=\" controlRunState \" pulumi-lang-dotnet=\" ControlRunState \" pulumi-lang-go=\" controlRunState \" pulumi-lang-python=\" control_run_state \" pulumi-lang-yaml=\" controlRunState \" pulumi-lang-java=\" controlRunState \"\u003e control_run_state \u003c/span\u003ein the next major release."},"budgetPolicyId":{"type":"string","description":"The ID of the user-specified budget policy to use for this job. If not specified, a default budget policy may be applied when creating or modifying the job.\n"},"continuous":{"$ref":"#/types/databricks:index/JobContinuous:JobContinuous","description":"Configuration block to configure pause status. 
See continuous Configuration Block.\n"},"controlRunState":{"type":"boolean","description":"(Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e by stopping the current active run. This flag cannot be set for non-continuous jobs.\n\nWhen migrating from \u003cspan pulumi-lang-nodejs=\"`alwaysRunning`\" pulumi-lang-dotnet=\"`AlwaysRunning`\" pulumi-lang-go=\"`alwaysRunning`\" pulumi-lang-python=\"`always_running`\" pulumi-lang-yaml=\"`alwaysRunning`\" pulumi-lang-java=\"`alwaysRunning`\"\u003e`always_running`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`controlRunState`\" pulumi-lang-dotnet=\"`ControlRunState`\" pulumi-lang-go=\"`controlRunState`\" pulumi-lang-python=\"`control_run_state`\" pulumi-lang-yaml=\"`controlRunState`\" pulumi-lang-java=\"`controlRunState`\"\u003e`control_run_state`\u003c/span\u003e, set \u003cspan pulumi-lang-nodejs=\"`continuous`\" pulumi-lang-dotnet=\"`Continuous`\" pulumi-lang-go=\"`continuous`\" pulumi-lang-python=\"`continuous`\" pulumi-lang-yaml=\"`continuous`\" pulumi-lang-java=\"`continuous`\"\u003e`continuous`\u003c/span\u003e as follows:\n\n"},"dbtTask":{"$ref":"#/types/databricks:index/JobDbtTask:JobDbtTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"deployment":{"$ref":"#/types/databricks:index/JobDeployment:JobDeployment"},"description":{"type":"string","description":"An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.\n"},"editMode":{"type":"string","description":"If `\"UI_LOCKED\"`, the user interface for the job will be locked. If `\"EDITABLE\"` (the default), the user interface will be editable.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/JobEmailNotifications:JobEmailNotifications","description":"(List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is documented below.\n"},"environments":{"type":"array","items":{"$ref":"#/types/databricks:index/JobEnvironment:JobEnvironment"}},"existingClusterId":{"type":"string"},"format":{"type":"string"},"gitSource":{"$ref":"#/types/databricks:index/JobGitSource:JobGitSource","description":"Specifies a Git repository for task source code. 
See\u003cspan pulumi-lang-nodejs=\" gitSource \" pulumi-lang-dotnet=\" GitSource \" pulumi-lang-go=\" gitSource \" pulumi-lang-python=\" git_source \" pulumi-lang-yaml=\" gitSource \" pulumi-lang-java=\" gitSource \"\u003e git_source \u003c/span\u003eConfiguration Block below.\n"},"health":{"$ref":"#/types/databricks:index/JobHealth:JobHealth","description":"An optional block that specifies the health conditions for the job documented below.\n"},"jobClusters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobJobCluster:JobJobCluster"},"description":"A list of job\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003especifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. *Multi-task syntax*\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobLibrary:JobLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"maxConcurrentRuns":{"type":"integer","description":"(Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to *1*.\n"},"maxRetries":{"type":"integer","deprecationMessage":"should be used inside a task block and not inside a job block"},"minRetryIntervalMillis":{"type":"integer","description":"(Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.\n","deprecationMessage":"should be used inside a task block and not inside a job block"},"name":{"type":"string","description":"An optional name for the job. The default value is Untitled.\n"},"newCluster":{"$ref":"#/types/databricks:index/JobNewCluster:JobNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/JobNotebookTask:JobNotebookTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"notificationSettings":{"$ref":"#/types/databricks:index/JobNotificationSettings:JobNotificationSettings","description":"An optional block controlling the notification settings on the job level documented below.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobParameter:JobParameter"},"description":"Specifies job parameter for the job. See parameter Configuration Block\n"},"performanceTarget":{"type":"string","description":"The performance mode on a serverless job. The performance target determines the level of compute performance or cost-efficiency for the run.  Supported values are:\n* `PERFORMANCE_OPTIMIZED`: (default value) Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance.\n* `STANDARD`: Enables cost-efficient execution of serverless workloads.\n"},"pipelineTask":{"$ref":"#/types/databricks:index/JobPipelineTask:JobPipelineTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"providerConfig":{"$ref":"#/types/databricks:index/JobProviderConfig:JobProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"pythonWheelTask":{"$ref":"#/types/databricks:index/JobPythonWheelTask:JobPythonWheelTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"queue":{"$ref":"#/types/databricks:index/JobQueue:JobQueue","description":"The queue status for the job. See queue Configuration Block below.\n"},"retryOnTimeout":{"type":"boolean","deprecationMessage":"should be used inside a task block and not inside a job block"},"runAs":{"$ref":"#/types/databricks:index/JobRunAs:JobRunAs","description":"The user or the service principal the job runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runJobTask":{"$ref":"#/types/databricks:index/JobRunJobTask:JobRunJobTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"schedule":{"$ref":"#/types/databricks:index/JobSchedule:JobSchedule","description":"An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. See schedule Configuration Block below.\n"},"sparkJarTask":{"$ref":"#/types/databricks:index/JobSparkJarTask:JobSparkJarTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkPythonTask":{"$ref":"#/types/databricks:index/JobSparkPythonTask:JobSparkPythonTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/JobSparkSubmitTask:JobSparkSubmitTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional map of the tags associated with the job. See tags Configuration Map\n"},"tasks":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTask:JobTask"},"description":"A list of task specifications that the job will execute. See task Configuration Block below.\n"},"timeoutSeconds":{"type":"integer","description":"(Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.\n"},"trigger":{"$ref":"#/types/databricks:index/JobTrigger:JobTrigger","description":"The conditions that trigger the job to start. See trigger Configuration Block below.\n"},"url":{"type":"string","description":"URL of the job on the given workspace\n"},"usagePolicyId":{"type":"string"},"webhookNotifications":{"$ref":"#/types/databricks:index/JobWebhookNotifications:JobWebhookNotifications","description":"(List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.\n"}},"required":["format","name","runAs","url"],"inputProperties":{"alwaysRunning":{"type":"boolean","description":"(Bool) Whether the job is always running, like a Spark Streaming application: on every update, restart the current active run or start it again if it is not running. False by default. 
Any job runs are started with \u003cspan pulumi-lang-nodejs=\"`parameters`\" pulumi-lang-dotnet=\"`Parameters`\" pulumi-lang-go=\"`parameters`\" pulumi-lang-python=\"`parameters`\" pulumi-lang-yaml=\"`parameters`\" pulumi-lang-java=\"`parameters`\"\u003e`parameters`\u003c/span\u003e specified in \u003cspan pulumi-lang-nodejs=\"`sparkJarTask`\" pulumi-lang-dotnet=\"`SparkJarTask`\" pulumi-lang-go=\"`sparkJarTask`\" pulumi-lang-python=\"`spark_jar_task`\" pulumi-lang-yaml=\"`sparkJarTask`\" pulumi-lang-java=\"`sparkJarTask`\"\u003e`spark_jar_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkSubmitTask`\" pulumi-lang-dotnet=\"`SparkSubmitTask`\" pulumi-lang-go=\"`sparkSubmitTask`\" pulumi-lang-python=\"`spark_submit_task`\" pulumi-lang-yaml=\"`sparkSubmitTask`\" pulumi-lang-java=\"`sparkSubmitTask`\"\u003e`spark_submit_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkPythonTask`\" pulumi-lang-dotnet=\"`SparkPythonTask`\" pulumi-lang-go=\"`sparkPythonTask`\" pulumi-lang-python=\"`spark_python_task`\" pulumi-lang-yaml=\"`sparkPythonTask`\" pulumi-lang-java=\"`sparkPythonTask`\"\u003e`spark_python_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`notebookTask`\" pulumi-lang-dotnet=\"`NotebookTask`\" pulumi-lang-go=\"`notebookTask`\" pulumi-lang-python=\"`notebook_task`\" pulumi-lang-yaml=\"`notebookTask`\" pulumi-lang-java=\"`notebookTask`\"\u003e`notebook_task`\u003c/span\u003e blocks.\n","deprecationMessage":"always_running will be replaced by\u003cspan pulumi-lang-nodejs=\" controlRunState \" pulumi-lang-dotnet=\" ControlRunState \" pulumi-lang-go=\" controlRunState \" pulumi-lang-python=\" control_run_state \" pulumi-lang-yaml=\" controlRunState \" pulumi-lang-java=\" controlRunState \"\u003e control_run_state \u003c/span\u003ein the next major release."},"budgetPolicyId":{"type":"string","description":"The ID of the user-specified budget policy to use for this job. If not specified, a default budget policy may be applied when creating or modifying the job.\n"},"continuous":{"$ref":"#/types/databricks:index/JobContinuous:JobContinuous","description":"Configuration block to configure pause status. See continuous Configuration Block.\n"},"controlRunState":{"type":"boolean","description":"(Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e by stopping the current active run. 
This flag cannot be set for non-continuous jobs.\n\nWhen migrating from \u003cspan pulumi-lang-nodejs=\"`alwaysRunning`\" pulumi-lang-dotnet=\"`AlwaysRunning`\" pulumi-lang-go=\"`alwaysRunning`\" pulumi-lang-python=\"`always_running`\" pulumi-lang-yaml=\"`alwaysRunning`\" pulumi-lang-java=\"`alwaysRunning`\"\u003e`always_running`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`controlRunState`\" pulumi-lang-dotnet=\"`ControlRunState`\" pulumi-lang-go=\"`controlRunState`\" pulumi-lang-python=\"`control_run_state`\" pulumi-lang-yaml=\"`controlRunState`\" pulumi-lang-java=\"`controlRunState`\"\u003e`control_run_state`\u003c/span\u003e, set \u003cspan pulumi-lang-nodejs=\"`continuous`\" pulumi-lang-dotnet=\"`Continuous`\" pulumi-lang-go=\"`continuous`\" pulumi-lang-python=\"`continuous`\" pulumi-lang-yaml=\"`continuous`\" pulumi-lang-java=\"`continuous`\"\u003e`continuous`\u003c/span\u003e as follows:\n\n"},"dbtTask":{"$ref":"#/types/databricks:index/JobDbtTask:JobDbtTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"deployment":{"$ref":"#/types/databricks:index/JobDeployment:JobDeployment"},"description":{"type":"string","description":"An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.\n"},"editMode":{"type":"string","description":"If `\"UI_LOCKED\"`, the user interface for the job will be locked. If `\"EDITABLE\"` (the default), the user interface will be editable.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/JobEmailNotifications:JobEmailNotifications","description":"(List) An optional set of email addresses notified when runs of this job begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.\n"},"environments":{"type":"array","items":{"$ref":"#/types/databricks:index/JobEnvironment:JobEnvironment"}},"existingClusterId":{"type":"string"},"format":{"type":"string"},"gitSource":{"$ref":"#/types/databricks:index/JobGitSource:JobGitSource","description":"Specifies the a Git repository for task source code. See\u003cspan pulumi-lang-nodejs=\" gitSource \" pulumi-lang-dotnet=\" GitSource \" pulumi-lang-go=\" gitSource \" pulumi-lang-python=\" git_source \" pulumi-lang-yaml=\" gitSource \" pulumi-lang-java=\" gitSource \"\u003e git_source \u003c/span\u003eConfiguration Block below.\n"},"health":{"$ref":"#/types/databricks:index/JobHealth:JobHealth","description":"An optional block that specifies the health conditions for the job documented below.\n"},"jobClusters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobJobCluster:JobJobCluster"},"description":"A list of job\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003especifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. *Multi-task syntax*\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobLibrary:JobLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"maxConcurrentRuns":{"type":"integer","description":"(Integer) An optional maximum allowed number of concurrent runs of the job. 
Defaults to *1*.\n"},"maxRetries":{"type":"integer","deprecationMessage":"should be used inside a task block and not inside a job block"},"minRetryIntervalMillis":{"type":"integer","description":"(Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.\n","deprecationMessage":"should be used inside a task block and not inside a job block"},"name":{"type":"string","description":"An optional name for the job. The default value is Untitled.\n"},"newCluster":{"$ref":"#/types/databricks:index/JobNewCluster:JobNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/JobNotebookTask:JobNotebookTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"notificationSettings":{"$ref":"#/types/databricks:index/JobNotificationSettings:JobNotificationSettings","description":"An optional block controlling the notification settings on the job level documented below.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobParameter:JobParameter"},"description":"Specifies job parameter for the job. See parameter Configuration Block\n"},"performanceTarget":{"type":"string","description":"The performance mode on a serverless job. The performance target determines the level of compute performance or cost-efficiency for the run.  Supported values are:\n* `PERFORMANCE_OPTIMIZED`: (default value) Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance.\n* `STANDARD`: Enables cost-efficient execution of serverless workloads.\n"},"pipelineTask":{"$ref":"#/types/databricks:index/JobPipelineTask:JobPipelineTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"providerConfig":{"$ref":"#/types/databricks:index/JobProviderConfig:JobProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pythonWheelTask":{"$ref":"#/types/databricks:index/JobPythonWheelTask:JobPythonWheelTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"queue":{"$ref":"#/types/databricks:index/JobQueue:JobQueue","description":"The queue status for the job. See queue Configuration Block below.\n"},"retryOnTimeout":{"type":"boolean","deprecationMessage":"should be used inside a task block and not inside a job block"},"runAs":{"$ref":"#/types/databricks:index/JobRunAs:JobRunAs","description":"The user or the service principal the job runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runJobTask":{"$ref":"#/types/databricks:index/JobRunJobTask:JobRunJobTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"schedule":{"$ref":"#/types/databricks:index/JobSchedule:JobSchedule","description":"An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. 
See schedule Configuration Block below.\n"},"sparkJarTask":{"$ref":"#/types/databricks:index/JobSparkJarTask:JobSparkJarTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkPythonTask":{"$ref":"#/types/databricks:index/JobSparkPythonTask:JobSparkPythonTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/JobSparkSubmitTask:JobSparkSubmitTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional map of the tags associated with the job. See tags Configuration Map\n"},"tasks":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTask:JobTask"},"description":"A list of task specification that the job will execute. See task Configuration Block below.\n"},"timeoutSeconds":{"type":"integer","description":"(Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.\n"},"trigger":{"$ref":"#/types/databricks:index/JobTrigger:JobTrigger","description":"The conditions that triggers the job to start. See trigger Configuration Block below.\n"},"usagePolicyId":{"type":"string"},"webhookNotifications":{"$ref":"#/types/databricks:index/JobWebhookNotifications:JobWebhookNotifications","description":"(List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begins, completes or fails. The default behavior is to not send any notifications. This field is a block and is documented below.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering Job resources.\n","properties":{"alwaysRunning":{"type":"boolean","description":"(Bool) Whenever the job is always running, like a Spark Streaming application, on every update restart the current active run or start it again, if nothing it is not running. False by default. 
Any job runs are started with \u003cspan pulumi-lang-nodejs=\"`parameters`\" pulumi-lang-dotnet=\"`Parameters`\" pulumi-lang-go=\"`parameters`\" pulumi-lang-python=\"`parameters`\" pulumi-lang-yaml=\"`parameters`\" pulumi-lang-java=\"`parameters`\"\u003e`parameters`\u003c/span\u003e specified in \u003cspan pulumi-lang-nodejs=\"`sparkJarTask`\" pulumi-lang-dotnet=\"`SparkJarTask`\" pulumi-lang-go=\"`sparkJarTask`\" pulumi-lang-python=\"`spark_jar_task`\" pulumi-lang-yaml=\"`sparkJarTask`\" pulumi-lang-java=\"`sparkJarTask`\"\u003e`spark_jar_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkSubmitTask`\" pulumi-lang-dotnet=\"`SparkSubmitTask`\" pulumi-lang-go=\"`sparkSubmitTask`\" pulumi-lang-python=\"`spark_submit_task`\" pulumi-lang-yaml=\"`sparkSubmitTask`\" pulumi-lang-java=\"`sparkSubmitTask`\"\u003e`spark_submit_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`sparkPythonTask`\" pulumi-lang-dotnet=\"`SparkPythonTask`\" pulumi-lang-go=\"`sparkPythonTask`\" pulumi-lang-python=\"`spark_python_task`\" pulumi-lang-yaml=\"`sparkPythonTask`\" pulumi-lang-java=\"`sparkPythonTask`\"\u003e`spark_python_task`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`notebookTask`\" pulumi-lang-dotnet=\"`NotebookTask`\" pulumi-lang-go=\"`notebookTask`\" pulumi-lang-python=\"`notebook_task`\" pulumi-lang-yaml=\"`notebookTask`\" pulumi-lang-java=\"`notebookTask`\"\u003e`notebook_task`\u003c/span\u003e blocks.\n","deprecationMessage":"always_running will be replaced by\u003cspan pulumi-lang-nodejs=\" controlRunState \" pulumi-lang-dotnet=\" ControlRunState \" pulumi-lang-go=\" controlRunState \" pulumi-lang-python=\" control_run_state \" pulumi-lang-yaml=\" controlRunState \" pulumi-lang-java=\" controlRunState \"\u003e control_run_state \u003c/span\u003ein the next major release."},"budgetPolicyId":{"type":"string","description":"The ID of the user-specified budget policy to use for this job. If not specified, a default budget policy may be applied when creating or modifying the job.\n"},"continuous":{"$ref":"#/types/databricks:index/JobContinuous:JobContinuous","description":"Configuration block to configure pause status. See continuous Configuration Block.\n"},"controlRunState":{"type":"boolean","description":"(Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the \u003cspan pulumi-lang-nodejs=\"`pauseStatus`\" pulumi-lang-dotnet=\"`PauseStatus`\" pulumi-lang-go=\"`pauseStatus`\" pulumi-lang-python=\"`pause_status`\" pulumi-lang-yaml=\"`pauseStatus`\" pulumi-lang-java=\"`pauseStatus`\"\u003e`pause_status`\u003c/span\u003e by stopping the current active run. 
This flag cannot be set for non-continuous jobs.\n\nWhen migrating from \u003cspan pulumi-lang-nodejs=\"`alwaysRunning`\" pulumi-lang-dotnet=\"`AlwaysRunning`\" pulumi-lang-go=\"`alwaysRunning`\" pulumi-lang-python=\"`always_running`\" pulumi-lang-yaml=\"`alwaysRunning`\" pulumi-lang-java=\"`alwaysRunning`\"\u003e`always_running`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`controlRunState`\" pulumi-lang-dotnet=\"`ControlRunState`\" pulumi-lang-go=\"`controlRunState`\" pulumi-lang-python=\"`control_run_state`\" pulumi-lang-yaml=\"`controlRunState`\" pulumi-lang-java=\"`controlRunState`\"\u003e`control_run_state`\u003c/span\u003e, set \u003cspan pulumi-lang-nodejs=\"`continuous`\" pulumi-lang-dotnet=\"`Continuous`\" pulumi-lang-go=\"`continuous`\" pulumi-lang-python=\"`continuous`\" pulumi-lang-yaml=\"`continuous`\" pulumi-lang-java=\"`continuous`\"\u003e`continuous`\u003c/span\u003e as follows:\n\n"},"dbtTask":{"$ref":"#/types/databricks:index/JobDbtTask:JobDbtTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"deployment":{"$ref":"#/types/databricks:index/JobDeployment:JobDeployment"},"description":{"type":"string","description":"An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.\n"},"editMode":{"type":"string","description":"If `\"UI_LOCKED\"`, the user interface for the job will be locked. If `\"EDITABLE\"` (the default), the user interface will be editable.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/JobEmailNotifications:JobEmailNotifications","description":"(List) An optional set of email addresses notified when runs of this job begins, completes or fails. The default behavior is to not send any emails. This field is a block and is documented below.\n"},"environments":{"type":"array","items":{"$ref":"#/types/databricks:index/JobEnvironment:JobEnvironment"}},"existingClusterId":{"type":"string"},"format":{"type":"string"},"gitSource":{"$ref":"#/types/databricks:index/JobGitSource:JobGitSource","description":"Specifies the a Git repository for task source code. See\u003cspan pulumi-lang-nodejs=\" gitSource \" pulumi-lang-dotnet=\" GitSource \" pulumi-lang-go=\" gitSource \" pulumi-lang-python=\" git_source \" pulumi-lang-yaml=\" gitSource \" pulumi-lang-java=\" gitSource \"\u003e git_source \u003c/span\u003eConfiguration Block below.\n"},"health":{"$ref":"#/types/databricks:index/JobHealth:JobHealth","description":"An optional block that specifies the health conditions for the job documented below.\n"},"jobClusters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobJobCluster:JobJobCluster"},"description":"A list of job\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003especifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. *Multi-task syntax*\n"},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/JobLibrary:JobLibrary"},"description":"(List) An optional list of libraries to be installed on the cluster that will execute the job. See library Configuration Block below.\n"},"maxConcurrentRuns":{"type":"integer","description":"(Integer) An optional maximum allowed number of concurrent runs of the job. 
Defaults to *1*.\n"},"maxRetries":{"type":"integer","deprecationMessage":"should be used inside a task block and not inside a job block"},"minRetryIntervalMillis":{"type":"integer","description":"(Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.\n","deprecationMessage":"should be used inside a task block and not inside a job block"},"name":{"type":"string","description":"An optional name for the job. The default value is Untitled.\n"},"newCluster":{"$ref":"#/types/databricks:index/JobNewCluster:JobNewCluster"},"notebookTask":{"$ref":"#/types/databricks:index/JobNotebookTask:JobNotebookTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"notificationSettings":{"$ref":"#/types/databricks:index/JobNotificationSettings:JobNotificationSettings","description":"An optional block controlling the notification settings on the job level documented below.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/JobParameter:JobParameter"},"description":"Specifies job parameter for the job. See parameter Configuration Block\n"},"performanceTarget":{"type":"string","description":"The performance mode on a serverless job. The performance target determines the level of compute performance or cost-efficiency for the run.  Supported values are:\n* `PERFORMANCE_OPTIMIZED`: (default value) Prioritizes fast startup and execution times through rapid scaling and optimized cluster performance.\n* `STANDARD`: Enables cost-efficient execution of serverless workloads.\n"},"pipelineTask":{"$ref":"#/types/databricks:index/JobPipelineTask:JobPipelineTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"providerConfig":{"$ref":"#/types/databricks:index/JobProviderConfig:JobProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"pythonWheelTask":{"$ref":"#/types/databricks:index/JobPythonWheelTask:JobPythonWheelTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"queue":{"$ref":"#/types/databricks:index/JobQueue:JobQueue","description":"The queue status for the job. See queue Configuration Block below.\n"},"retryOnTimeout":{"type":"boolean","deprecationMessage":"should be used inside a task block and not inside a job block"},"runAs":{"$ref":"#/types/databricks:index/JobRunAs:JobRunAs","description":"The user or the service principal the job runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runJobTask":{"$ref":"#/types/databricks:index/JobRunJobTask:JobRunJobTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"schedule":{"$ref":"#/types/databricks:index/JobSchedule:JobSchedule","description":"An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to runNow. 
See schedule Configuration Block below.\n"},"sparkJarTask":{"$ref":"#/types/databricks:index/JobSparkJarTask:JobSparkJarTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkPythonTask":{"$ref":"#/types/databricks:index/JobSparkPythonTask:JobSparkPythonTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"sparkSubmitTask":{"$ref":"#/types/databricks:index/JobSparkSubmitTask:JobSparkSubmitTask","deprecationMessage":"should be used inside a task block and not inside a job block"},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional map of the tags associated with the job. See tags Configuration Map\n"},"tasks":{"type":"array","items":{"$ref":"#/types/databricks:index/JobTask:JobTask"},"description":"A list of task specification that the job will execute. See task Configuration Block below.\n"},"timeoutSeconds":{"type":"integer","description":"(Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.\n"},"trigger":{"$ref":"#/types/databricks:index/JobTrigger:JobTrigger","description":"The conditions that triggers the job to start. See trigger Configuration Block below.\n"},"url":{"type":"string","description":"URL of the job on the given workspace\n"},"usagePolicyId":{"type":"string"},"webhookNotifications":{"$ref":"#/types/databricks:index/JobWebhookNotifications:JobWebhookNotifications","description":"(List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begins, completes or fails. The default behavior is to not send any notifications. This field is a block and is documented below.\n"}},"type":"object"}},"databricks:index/lakehouseMonitor:LakehouseMonitor":{"description":"!\u003e This resource has been deprecated and will be removed soon. Please use the\u003cspan pulumi-lang-nodejs=\" databricks.QualityMonitor \" pulumi-lang-dotnet=\" databricks.QualityMonitor \" pulumi-lang-go=\" QualityMonitor \" pulumi-lang-python=\" QualityMonitor \" pulumi-lang-yaml=\" databricks.QualityMonitor \" pulumi-lang-java=\" databricks.QualityMonitor \"\u003e databricks.QualityMonitor \u003c/span\u003eresource instead.\n\nThis resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks. \n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.LakehouseMonitor`\" pulumi-lang-dotnet=\"`databricks.LakehouseMonitor`\" pulumi-lang-go=\"`LakehouseMonitor`\" pulumi-lang-python=\"`LakehouseMonitor`\" pulumi-lang-yaml=\"`databricks.LakehouseMonitor`\" pulumi-lang-java=\"`databricks.LakehouseMonitor`\"\u003e`databricks.LakehouseMonitor`\u003c/span\u003e is attached to a\u003cspan pulumi-lang-nodejs=\" databricks.SqlTable \" pulumi-lang-dotnet=\" databricks.SqlTable \" pulumi-lang-go=\" SqlTable \" pulumi-lang-python=\" SqlTable \" pulumi-lang-yaml=\" databricks.SqlTable \" pulumi-lang-java=\" databricks.SqlTable \"\u003e databricks.SqlTable \u003c/span\u003eand can be of type timeseries, snapshot or inference. 
\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst myTestTable = new databricks.SqlTable(\"myTestTable\", {\n    catalogName: \"main\",\n    schemaName: things.name,\n    name: \"bar\",\n    tableType: \"MANAGED\",\n    dataSourceFormat: \"DELTA\",\n    columns: [{\n        name: \"timestamp\",\n        type: \"int\",\n    }],\n});\nconst testTimeseriesMonitor = new databricks.LakehouseMonitor(\"testTimeseriesMonitor\", {\n    tableName: pulumi.interpolate`${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: pulumi.interpolate`/Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}`,\n    outputSchemaName: pulumi.interpolate`${sandbox.name}.${things.name}`,\n    timeSeries: {\n        granularities: [\"1 hour\"],\n        timestampCol: \"timestamp\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.id,\n    name=\"things\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nmy_test_table = databricks.SqlTable(\"myTestTable\",\n    catalog_name=\"main\",\n    schema_name=things.name,\n    name=\"bar\",\n    table_type=\"MANAGED\",\n    data_source_format=\"DELTA\",\n    columns=[{\n        \"name\": \"timestamp\",\n        \"type\": \"int\",\n    }])\ntest_timeseries_monitor = databricks.LakehouseMonitor(\"testTimeseriesMonitor\",\n    table_name=pulumi.Output.all(\n        sandboxName=sandbox.name,\n        thingsName=things.name,\n        myTestTableName=my_test_table.name\n).apply(lambda resolved_outputs: f\"{resolved_outputs['sandboxName']}.{resolved_outputs['thingsName']}.{resolved_outputs['myTestTableName']}\")\n,\n    assets_dir=my_test_table.name.apply(lambda name: f\"/Shared/provider-test/databricks_lakehouse_monitoring/{name}\"),\n    output_schema_name=pulumi.Output.all(\n        sandboxName=sandbox.name,\n        thingsName=things.name\n).apply(lambda resolved_outputs: f\"{resolved_outputs['sandboxName']}.{resolved_outputs['thingsName']}\")\n,\n    time_series={\n        \"granularities\": [\"1 hour\"],\n        \"timestamp_col\": \"timestamp\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this database is managed by 
terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var myTestTable = new Databricks.SqlTable(\"myTestTable\", new()\n    {\n        CatalogName = \"main\",\n        SchemaName = things.Name,\n        Name = \"bar\",\n        TableType = \"MANAGED\",\n        DataSourceFormat = \"DELTA\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"timestamp\",\n                Type = \"int\",\n            },\n        },\n    });\n\n    var testTimeseriesMonitor = new Databricks.LakehouseMonitor(\"testTimeseriesMonitor\", new()\n    {\n        TableName = Output.Tuple(sandbox.Name, things.Name, myTestTable.Name).Apply(values =\u003e\n        {\n            var sandboxName = values.Item1;\n            var thingsName = values.Item2;\n            var myTestTableName = values.Item3;\n            return $\"{sandboxName}.{thingsName}.{myTestTableName}\";\n        }),\n        AssetsDir = myTestTable.Name.Apply(name =\u003e $\"/Shared/provider-test/databricks_lakehouse_monitoring/{name}\"),\n        OutputSchemaName = Output.Tuple(sandbox.Name, things.Name).Apply(values =\u003e\n        {\n            var sandboxName = values.Item1;\n            var thingsName = values.Item2;\n            return $\"{sandboxName}.{thingsName}\";\n        }),\n        TimeSeries = new Databricks.Inputs.LakehouseMonitorTimeSeriesArgs\n        {\n            Granularities = new[]\n            {\n                \"1 hour\",\n            },\n            TimestampCol = \"timestamp\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyTestTable, err := databricks.NewSqlTable(ctx, \"myTestTable\", \u0026databricks.SqlTableArgs{\n\t\t\tCatalogName:      pulumi.String(\"main\"),\n\t\t\tSchemaName:       things.Name,\n\t\t\tName:             pulumi.String(\"bar\"),\n\t\t\tTableType:        pulumi.String(\"MANAGED\"),\n\t\t\tDataSourceFormat: pulumi.String(\"DELTA\"),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName: pulumi.String(\"timestamp\"),\n\t\t\t\t\tType: pulumi.String(\"int\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewLakehouseMonitor(ctx, \"testTimeseriesMonitor\", \u0026databricks.LakehouseMonitorArgs{\n\t\t\tTableName: pulumi.All(sandbox.Name, things.Name, myTestTable.Name).ApplyT(func(_args []interface{}) (string, error) {\n\t\t\t\tsandboxName := _args[0].(string)\n\t\t\t\tthingsName := 
_args[1].(string)\n\t\t\t\tmyTestTableName := _args[2].(string)\n\t\t\t\treturn fmt.Sprintf(\"%v.%v.%v\", sandboxName, thingsName, myTestTableName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tAssetsDir: myTestTable.Name.ApplyT(func(name string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"/Shared/provider-test/databricks_lakehouse_monitoring/%v\", name), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tOutputSchemaName: pulumi.All(sandbox.Name, things.Name).ApplyT(func(_args []interface{}) (string, error) {\n\t\t\t\tsandboxName := _args[0].(string)\n\t\t\t\tthingsName := _args[1].(string)\n\t\t\t\treturn fmt.Sprintf(\"%v.%v\", sandboxName, thingsName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tTimeSeries: \u0026databricks.LakehouseMonitorTimeSeriesArgs{\n\t\t\t\tGranularities: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"1 hour\"),\n\t\t\t\t},\n\t\t\t\tTimestampCol: pulumi.String(\"timestamp\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport com.pulumi.databricks.LakehouseMonitor;\nimport com.pulumi.databricks.LakehouseMonitorArgs;\nimport com.pulumi.databricks.inputs.LakehouseMonitorTimeSeriesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var myTestTable = new SqlTable(\"myTestTable\", SqlTableArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(things.name())\n            .name(\"bar\")\n            .tableType(\"MANAGED\")\n            .dataSourceFormat(\"DELTA\")\n            .columns(SqlTableColumnArgs.builder()\n                .name(\"timestamp\")\n                .type(\"int\")\n                .build())\n            .build());\n\n        var testTimeseriesMonitor = new LakehouseMonitor(\"testTimeseriesMonitor\", LakehouseMonitorArgs.builder()\n            .tableName(Output.tuple(sandbox.name(), things.name(), myTestTable.name()).applyValue(values -\u003e {\n                var sandboxName = values.t1;\n                var thingsName = values.t2;\n                var myTestTableName = values.t3;\n                return String.format(\"%s.%s.%s\", sandboxName,thingsName,myTestTableName);\n            }))\n            .assetsDir(myTestTable.name().applyValue(_name -\u003e String.format(\"/Shared/provider-test/databricks_lakehouse_monitoring/%s\", _name)))\n            
.outputSchemaName(Output.tuple(sandbox.name(), things.name()).applyValue(values -\u003e {\n                var sandboxName = values.t1;\n                var thingsName = values.t2;\n                return String.format(\"%s.%s\", sandboxName,thingsName);\n            }))\n            .timeSeries(LakehouseMonitorTimeSeriesArgs.builder()\n                .granularities(\"1 hour\")\n                .timestampCol(\"timestamp\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n  myTestTable:\n    type: databricks:SqlTable\n    properties:\n      catalogName: main\n      schemaName: ${things.name}\n      name: bar\n      tableType: MANAGED\n      dataSourceFormat: DELTA\n      columns:\n        - name: timestamp\n          type: int\n  testTimeseriesMonitor:\n    type: databricks:LakehouseMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n      assetsDir: /Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      timeSeries:\n        granularities:\n          - 1 hour\n        timestampCol: timestamp\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Inference Monitor\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst testMonitorInference = new databricks.LakehouseMonitor(\"testMonitorInference\", {\n    tableName: `${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: `/Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}`,\n    outputSchemaName: `${sandbox.name}.${things.name}`,\n    inferenceLog: {\n        granularities: [\"1 hour\"],\n        timestampCol: \"timestamp\",\n        predictionCol: \"prediction\",\n        modelIdCol: \"model_id\",\n        problemType: \"PROBLEM_TYPE_REGRESSION\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest_monitor_inference = databricks.LakehouseMonitor(\"testMonitorInference\",\n    table_name=f\"{sandbox['name']}.{things['name']}.{my_test_table['name']}\",\n    assets_dir=f\"/Shared/provider-test/databricks_lakehouse_monitoring/{my_test_table['name']}\",\n    output_schema_name=f\"{sandbox['name']}.{things['name']}\",\n    inference_log={\n        \"granularities\": [\"1 hour\"],\n        \"timestamp_col\": \"timestamp\",\n        \"prediction_col\": \"prediction\",\n        \"model_id_col\": \"model_id\",\n        \"problem_type\": \"PROBLEM_TYPE_REGRESSION\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var testMonitorInference = new Databricks.LakehouseMonitor(\"testMonitorInference\", new()\n    {\n        TableName = $\"{sandbox.Name}.{things.Name}.{myTestTable.Name}\",\n        AssetsDir = $\"/Shared/provider-test/databricks_lakehouse_monitoring/{myTestTable.Name}\",\n        OutputSchemaName = $\"{sandbox.Name}.{things.Name}\",\n        InferenceLog = new 
Databricks.Inputs.LakehouseMonitorInferenceLogArgs\n        {\n            Granularities = new[]\n            {\n                \"1 hour\",\n            },\n            TimestampCol = \"timestamp\",\n            PredictionCol = \"prediction\",\n            ModelIdCol = \"model_id\",\n            ProblemType = \"PROBLEM_TYPE_REGRESSION\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLakehouseMonitor(ctx, \"testMonitorInference\", \u0026databricks.LakehouseMonitorArgs{\n\t\t\tTableName:        pulumi.Sprintf(\"%v.%v.%v\", sandbox.Name, things.Name, myTestTable.Name),\n\t\t\tAssetsDir:        pulumi.Sprintf(\"/Shared/provider-test/databricks_lakehouse_monitoring/%v\", myTestTable.Name),\n\t\t\tOutputSchemaName: pulumi.Sprintf(\"%v.%v\", sandbox.Name, things.Name),\n\t\t\tInferenceLog: \u0026databricks.LakehouseMonitorInferenceLogArgs{\n\t\t\t\tGranularities: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"1 hour\"),\n\t\t\t\t},\n\t\t\t\tTimestampCol:  pulumi.String(\"timestamp\"),\n\t\t\t\tPredictionCol: pulumi.String(\"prediction\"),\n\t\t\t\tModelIdCol:    pulumi.String(\"model_id\"),\n\t\t\t\tProblemType:   pulumi.String(\"PROBLEM_TYPE_REGRESSION\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.LakehouseMonitor;\nimport com.pulumi.databricks.LakehouseMonitorArgs;\nimport com.pulumi.databricks.inputs.LakehouseMonitorInferenceLogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var testMonitorInference = new LakehouseMonitor(\"testMonitorInference\", LakehouseMonitorArgs.builder()\n            .tableName(String.format(\"%s.%s.%s\", sandbox.name(),things.name(),myTestTable.name()))\n            .assetsDir(String.format(\"/Shared/provider-test/databricks_lakehouse_monitoring/%s\", myTestTable.name()))\n            .outputSchemaName(String.format(\"%s.%s\", sandbox.name(),things.name()))\n            .inferenceLog(LakehouseMonitorInferenceLogArgs.builder()\n                .granularities(\"1 hour\")\n                .timestampCol(\"timestamp\")\n                .predictionCol(\"prediction\")\n                .modelIdCol(\"model_id\")\n                .problemType(\"PROBLEM_TYPE_REGRESSION\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  testMonitorInference:\n    type: databricks:LakehouseMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n      assetsDir: /Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      inferenceLog:\n        granularities:\n          - 1 hour\n        timestampCol: timestamp\n        predictionCol: prediction\n        modelIdCol: model_id\n        problemType: PROBLEM_TYPE_REGRESSION\n```\n\u003c!--End PulumiCodeChooser --\u003e\n### Snapshot Monitor\n\u003c!--Start PulumiCodeChooser 
--\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst testMonitorInference = new databricks.LakehouseMonitor(\"testMonitorInference\", {\n    tableName: `${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: `/Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}`,\n    outputSchemaName: `${sandbox.name}.${things.name}`,\n    snapshot: {},\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest_monitor_inference = databricks.LakehouseMonitor(\"testMonitorInference\",\n    table_name=f\"{sandbox['name']}.{things['name']}.{my_test_table['name']}\",\n    assets_dir=f\"/Shared/provider-test/databricks_lakehouse_monitoring/{my_test_table['name']}\",\n    output_schema_name=f\"{sandbox['name']}.{things['name']}\",\n    snapshot={})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var testMonitorInference = new Databricks.LakehouseMonitor(\"testMonitorInference\", new()\n    {\n        TableName = $\"{sandbox.Name}.{things.Name}.{myTestTable.Name}\",\n        AssetsDir = $\"/Shared/provider-test/databricks_lakehouse_monitoring/{myTestTable.Name}\",\n        OutputSchemaName = $\"{sandbox.Name}.{things.Name}\",\n        Snapshot = null,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLakehouseMonitor(ctx, \"testMonitorInference\", \u0026databricks.LakehouseMonitorArgs{\n\t\t\tTableName:        pulumi.Sprintf(\"%v.%v.%v\", sandbox.Name, things.Name, myTestTable.Name),\n\t\t\tAssetsDir:        pulumi.Sprintf(\"/Shared/provider-test/databricks_lakehouse_monitoring/%v\", myTestTable.Name),\n\t\t\tOutputSchemaName: pulumi.Sprintf(\"%v.%v\", sandbox.Name, things.Name),\n\t\t\tSnapshot:         \u0026databricks.LakehouseMonitorSnapshotArgs{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.LakehouseMonitor;\nimport com.pulumi.databricks.LakehouseMonitorArgs;\nimport com.pulumi.databricks.inputs.LakehouseMonitorSnapshotArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var testMonitorInference = new LakehouseMonitor(\"testMonitorInference\", LakehouseMonitorArgs.builder()\n            .tableName(String.format(\"%s.%s.%s\", sandbox.name(),things.name(),myTestTable.name()))\n            .assetsDir(String.format(\"/Shared/provider-test/databricks_lakehouse_monitoring/%s\", myTestTable.name()))\n            .outputSchemaName(String.format(\"%s.%s\", sandbox.name(),things.name()))\n            .snapshot(LakehouseMonitorSnapshotArgs.builder()\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  testMonitorInference:\n    type: databricks:LakehouseMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n 
     assetsDir: /Shared/provider-test/databricks_lakehouse_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      snapshot: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog\n\" pulumi-lang-dotnet=\" databricks.Catalog\n\" pulumi-lang-go=\" Catalog\n\" pulumi-lang-python=\" Catalog\n\" pulumi-lang-yaml=\" databricks.Catalog\n\" pulumi-lang-java=\" databricks.Catalog\n\"\u003e databricks.Catalog\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.Schema\n\" pulumi-lang-dotnet=\" databricks.Schema\n\" pulumi-lang-go=\" Schema\n\" pulumi-lang-python=\" Schema\n\" pulumi-lang-yaml=\" databricks.Schema\n\" pulumi-lang-java=\" databricks.Schema\n\"\u003e databricks.Schema\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.SqlTable\n\" pulumi-lang-dotnet=\" databricks.SqlTable\n\" pulumi-lang-go=\" SqlTable\n\" pulumi-lang-python=\" SqlTable\n\" pulumi-lang-yaml=\" databricks.SqlTable\n\" pulumi-lang-java=\" databricks.SqlTable\n\"\u003e databricks.SqlTable\n\u003c/span\u003e\n","properties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (Eg. Dashboard and Metric Tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed from.Columns in the monitored table should also be present in the baseline\ntable.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/LakehouseMonitorCustomMetric:LakehouseMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dashboardId":{"type":"string","description":"The ID of the generated dashboard.\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorDataClassificationConfig:LakehouseMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"driftMetricsTableName":{"type":"string","description":"The full name of the drift metrics table. Format: __catalog_name__.__schema_name__.__table_name__.\n"},"inferenceLog":{"$ref":"#/types/databricks:index/LakehouseMonitorInferenceLog:LakehouseMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"monitorVersion":{"type":"integer","description":"The version of the monitor config (e.g. 1,2,3). If negative, the monitor may be corrupted\n"},"notifications":{"$ref":"#/types/databricks:index/LakehouseMonitorNotifications:LakehouseMonitorNotifications","description":"The notification settings for the monitor.  The following optional blocks are supported, each consisting of the single string array field with name \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"profileMetricsTableName":{"type":"string","description":"The full name of the profile metrics table. 
Format: __catalog_name__.__schema_name__.__table_name__.\n"},"providerConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorProviderConfig:LakehouseMonitorProviderConfig"},"schedule":{"$ref":"#/types/databricks:index/LakehouseMonitorSchedule:LakehouseMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  This block consists of the following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/LakehouseMonitorSnapshot:LakehouseMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"status":{"type":"string","description":"Status of the Monitor\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor to. It is of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/LakehouseMonitorTimeSeries:LakehouseMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.\n"}},"required":["assetsDir","dashboardId","driftMetricsTableName","monitorVersion","outputSchemaName","profileMetricsTableName","status","tableName"],"inputProperties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (e.g., Dashboard and Metric Tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed. Columns in the monitored table should also be present in the baseline table.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/LakehouseMonitorCustomMetric:LakehouseMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorDataClassificationConfig:LakehouseMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"inferenceLog":{"$ref":"#/types/databricks:index/LakehouseMonitorInferenceLog:LakehouseMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"notifications":{"$ref":"#/types/databricks:index/LakehouseMonitorNotifications:LakehouseMonitorNotifications","description":"The notification settings for the monitor.  
The following optional blocks are supported, each consisting of the single string array field with name \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"providerConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorProviderConfig:LakehouseMonitorProviderConfig"},"schedule":{"$ref":"#/types/databricks:index/LakehouseMonitorSchedule:LakehouseMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  This block consists of following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/LakehouseMonitorSnapshot:LakehouseMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor too. Its of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/LakehouseMonitorTimeSeries:LakehouseMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.\n"}},"requiredInputs":["assetsDir","outputSchemaName","tableName"],"stateInputs":{"description":"Input properties used for looking up and filtering LakehouseMonitor resources.\n","properties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (Eg. Dashboard and Metric Tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed from.Columns in the monitored table should also be present in the baseline\ntable.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/LakehouseMonitorCustomMetric:LakehouseMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dashboardId":{"type":"string","description":"The ID of the generated dashboard.\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorDataClassificationConfig:LakehouseMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"driftMetricsTableName":{"type":"string","description":"The full name of the drift metrics table. 
Format: __catalog_name__.__schema_name__.__table_name__.\n"},"inferenceLog":{"$ref":"#/types/databricks:index/LakehouseMonitorInferenceLog:LakehouseMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"monitorVersion":{"type":"integer","description":"The version of the monitor config (e.g. 1,2,3). If negative, the monitor may be corrupted\n"},"notifications":{"$ref":"#/types/databricks:index/LakehouseMonitorNotifications:LakehouseMonitorNotifications","description":"The notification settings for the monitor.  The following optional blocks are supported, each consisting of the single string array field with name \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"profileMetricsTableName":{"type":"string","description":"The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.\n"},"providerConfig":{"$ref":"#/types/databricks:index/LakehouseMonitorProviderConfig:LakehouseMonitorProviderConfig"},"schedule":{"$ref":"#/types/databricks:index/LakehouseMonitorSchedule:LakehouseMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  This block consists of following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/LakehouseMonitorSnapshot:LakehouseMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"status":{"type":"string","description":"Status of the Monitor\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor too. Its of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/LakehouseMonitorTimeSeries:LakehouseMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.\n"}},"type":"object"}},"databricks:index/library:Library":{"description":"Installs a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster. Each different type of library has a slightly different syntax. It's possible to set only one type of library within one resource. 
Otherwise, the plan will fail with an error.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e \u003cspan pulumi-lang-nodejs=\"`databricks.Library`\" pulumi-lang-dotnet=\"`databricks.Library`\" pulumi-lang-go=\"`Library`\" pulumi-lang-python=\"`Library`\" pulumi-lang-yaml=\"`databricks.Library`\" pulumi-lang-java=\"`databricks.Library`\"\u003e`databricks.Library`\u003c/span\u003e resource always starts the associated cluster if it's not running, so make sure to have auto-termination configured. It's not possible to atomically change the version of the same library without a cluster restart. Libraries are fully removed from the cluster only after a restart.\n\n## Plugin Framework Migration\n\nThe library resource has been migrated from sdkv2 to the plugin framework. If you encounter any problem with this resource and suspect it is due to the migration, you can fall back to sdkv2 by setting the environment variable in the following way: `export USE_SDK_V2_RESOURCES=\u003cspan pulumi-lang-nodejs=\"\"databricks.Library\"\" pulumi-lang-dotnet=\"\"databricks.Library\"\" pulumi-lang-go=\"\"Library\"\" pulumi-lang-python=\"\"Library\"\" pulumi-lang-yaml=\"\"databricks.Library\"\" pulumi-lang-java=\"\"databricks.Library\"\"\u003e\"databricks.Library\"\u003c/span\u003e`.\n\n## Installing a library on all clusters\n\nYou can install libraries on all clusters with the help of the\u003cspan pulumi-lang-nodejs=\" databricks.getClusters \" pulumi-lang-dotnet=\" databricks.getClusters \" pulumi-lang-go=\" getClusters \" pulumi-lang-python=\" get_clusters \" pulumi-lang-yaml=\" databricks.getClusters \" pulumi-lang-java=\" databricks.getClusters \"\u003e databricks.getClusters \u003c/span\u003edata resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const all = await databricks.getClusters({});\n    const cli: databricks.Library[] = [];\n    for (const range of all.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        cli.push(new databricks.Library(`cli-${range.key}`, {\n            clusterId: range.key,\n            pypi: {\n                \"package\": \"databricks-cli\",\n            },\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_clusters()\ncli = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(all.ids)]:\n    cli.append(databricks.Library(f\"cli-{range['key']}\",\n        cluster_id=range[\"key\"],\n        pypi={\n            \"package\": \"databricks-cli\",\n        }))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var all = await Databricks.GetClusters.InvokeAsync();\n\n    var cli = new List\u003cDatabricks.Library\u003e();\n    foreach (var range in all.Ids.Select((v, k) =\u003e new { Key = k, Value = v }))\n    {\n        cli.Add(new Databricks.Library($\"cli-{range.Key}\", new()\n        {\n            ClusterId = range.Value,\n            Pypi = new Databricks.Inputs.LibraryPypiArgs\n            {\n                Package = \"databricks-cli\",\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error 
{\n\t\tall, err := databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar cli []*databricks.Library\n\t\tfor key0, _ := range all.Ids {\n\t\t\t__res, err := databricks.NewLibrary(ctx, fmt.Sprintf(\"cli-%v\", key0), \u0026databricks.LibraryArgs{\n\t\t\t\tClusterId: pulumi.Float64(key0),\n\t\t\t\tPypi: \u0026databricks.LibraryPypiArgs{\n\t\t\t\t\tPackage: pulumi.String(\"databricks-cli\"),\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcli = append(cli, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClustersArgs;\nimport com.pulumi.databricks.Library;\nimport com.pulumi.databricks.LibraryArgs;\nimport com.pulumi.databricks.inputs.LibraryPypiArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .build());\n\n        final var cli = all.applyValue(getClustersResult -\u003e {\n            final var resources = new ArrayList\u003cLibrary\u003e();\n            for (var range : KeyedValue.of(getClustersResult.ids())) {\n                var resource = new Library(\"cli-\" + range.key(), LibraryArgs.builder()\n                    .clusterId(range.key())\n                    .pypi(LibraryPypiArgs.builder()\n                        .package_(\"databricks-cli\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  cli:\n    type: databricks:Library\n    properties:\n      clusterId: ${range.key}\n      pypi:\n        package: databricks-cli\n    options: {}\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Java/Scala Maven\n\nInstalling artifacts from Maven repository. You can also optionally specify a \u003cspan pulumi-lang-nodejs=\"`repo`\" pulumi-lang-dotnet=\"`Repo`\" pulumi-lang-go=\"`repo`\" pulumi-lang-python=\"`repo`\" pulumi-lang-yaml=\"`repo`\" pulumi-lang-java=\"`repo`\"\u003e`repo`\u003c/span\u003e parameter for a custom Maven-style repository, that should be accessible without any authentication. Maven libraries are resolved in Databricks Control Plane, so repo should be accessible from it. 
It can even be properly configured [maven s3 wagon](https://github.com/seahen/maven-s3-wagon), [AWS CodeArtifact](https://aws.amazon.com/codeartifact/) or [Azure Artifacts](https://azure.microsoft.com/en-us/services/devops/artifacts/).\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst deequ = new databricks.Library(\"deequ\", {\n    clusterId: _this.id,\n    maven: {\n        coordinates: \"com.amazon.deequ:deequ:1.0.4\",\n        exclusions: [\"org.apache.avro:avro\"],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ndeequ = databricks.Library(\"deequ\",\n    cluster_id=this[\"id\"],\n    maven={\n        \"coordinates\": \"com.amazon.deequ:deequ:1.0.4\",\n        \"exclusions\": [\"org.apache.avro:avro\"],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var deequ = new Databricks.Library(\"deequ\", new()\n    {\n        ClusterId = @this.Id,\n        Maven = new Databricks.Inputs.LibraryMavenArgs\n        {\n            Coordinates = \"com.amazon.deequ:deequ:1.0.4\",\n            Exclusions = new[]\n            {\n                \"org.apache.avro:avro\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLibrary(ctx, \"deequ\", \u0026databricks.LibraryArgs{\n\t\t\tClusterId: pulumi.Any(this.Id),\n\t\t\tMaven: \u0026databricks.LibraryMavenArgs{\n\t\t\t\tCoordinates: pulumi.String(\"com.amazon.deequ:deequ:1.0.4\"),\n\t\t\t\tExclusions: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"org.apache.avro:avro\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Library;\nimport com.pulumi.databricks.LibraryArgs;\nimport com.pulumi.databricks.inputs.LibraryMavenArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var deequ = new Library(\"deequ\", LibraryArgs.builder()\n            .clusterId(this_.id())\n            .maven(LibraryMavenArgs.builder()\n                .coordinates(\"com.amazon.deequ:deequ:1.0.4\")\n                .exclusions(\"org.apache.avro:avro\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  deequ:\n    type: databricks:Library\n    properties:\n      clusterId: ${this.id}\n      maven:\n        coordinates: com.amazon.deequ:deequ:1.0.4\n        exclusions:\n          - org.apache.avro:avro\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Python PyPI\n\nInstalling Python PyPI artifacts. 
You can optionally also specify the \u003cspan pulumi-lang-nodejs=\"`repo`\" pulumi-lang-dotnet=\"`Repo`\" pulumi-lang-go=\"`repo`\" pulumi-lang-python=\"`repo`\" pulumi-lang-yaml=\"`repo`\" pulumi-lang-java=\"`repo`\"\u003e`repo`\u003c/span\u003e parameter for a custom PyPI mirror, which should be accessible without any authentication for the network that cluster runs in.\n\n\u003e \u003cspan pulumi-lang-nodejs=\"`repo`\" pulumi-lang-dotnet=\"`Repo`\" pulumi-lang-go=\"`repo`\" pulumi-lang-python=\"`repo`\" pulumi-lang-yaml=\"`repo`\" pulumi-lang-java=\"`repo`\"\u003e`repo`\u003c/span\u003e host should be accessible from the Internet by Databricks control plane. If connectivity to custom PyPI repositories is required, please modify cluster-node `/etc/pip.conf` through databricks_global_init_script.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fbprophet = new databricks.Library(\"fbprophet\", {\n    clusterId: _this.id,\n    pypi: {\n        \"package\": \"fbprophet==0.6\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfbprophet = databricks.Library(\"fbprophet\",\n    cluster_id=this[\"id\"],\n    pypi={\n        \"package\": \"fbprophet==0.6\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fbprophet = new Databricks.Library(\"fbprophet\", new()\n    {\n        ClusterId = @this.Id,\n        Pypi = new Databricks.Inputs.LibraryPypiArgs\n        {\n            Package = \"fbprophet==0.6\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLibrary(ctx, \"fbprophet\", \u0026databricks.LibraryArgs{\n\t\t\tClusterId: pulumi.Any(this.Id),\n\t\t\tPypi: \u0026databricks.LibraryPypiArgs{\n\t\t\t\tPackage: pulumi.String(\"fbprophet==0.6\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Library;\nimport com.pulumi.databricks.LibraryArgs;\nimport com.pulumi.databricks.inputs.LibraryPypiArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var fbprophet = new Library(\"fbprophet\", LibraryArgs.builder()\n            .clusterId(this_.id())\n            .pypi(LibraryPypiArgs.builder()\n                .package_(\"fbprophet==0.6\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  fbprophet:\n    type: databricks:Library\n    properties:\n      clusterId: ${this.id}\n      pypi:\n        package: fbprophet==0.6\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Python requirements files\n\nInstalling Python libraries listed in the `requirements.txt` file.  Only Workspace paths and Unity Catalog Volumes paths are supported.  
Requires a cluster with DBR 15.0+.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst libraries = new databricks.Library(\"libraries\", {\n    clusterId: _this.id,\n    requirements: \"/Workspace/path/to/requirements.txt\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nlibraries = databricks.Library(\"libraries\",\n    cluster_id=this[\"id\"],\n    requirements=\"/Workspace/path/to/requirements.txt\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var libraries = new Databricks.Library(\"libraries\", new()\n    {\n        ClusterId = @this.Id,\n        Requirements = \"/Workspace/path/to/requirements.txt\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLibrary(ctx, \"libraries\", \u0026databricks.LibraryArgs{\n\t\t\tClusterId:    pulumi.Any(this.Id),\n\t\t\tRequirements: pulumi.String(\"/Workspace/path/to/requirements.txt\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Library;\nimport com.pulumi.databricks.LibraryArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var libraries = new Library(\"libraries\", LibraryArgs.builder()\n            .clusterId(this_.id())\n            .requirements(\"/Workspace/path/to/requirements.txt\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  libraries:\n    type: databricks:Library\n    properties:\n      clusterId: ${this.id}\n      requirements: /Workspace/path/to/requirements.txt\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## R CRan\n\nInstalling artifacts from CRan. 
You can also optionally specify a \u003cspan pulumi-lang-nodejs=\"`repo`\" pulumi-lang-dotnet=\"`Repo`\" pulumi-lang-go=\"`repo`\" pulumi-lang-python=\"`repo`\" pulumi-lang-yaml=\"`repo`\" pulumi-lang-java=\"`repo`\"\u003e`repo`\u003c/span\u003e parameter for a custom cran mirror.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst rkeops = new databricks.Library(\"rkeops\", {\n    clusterId: _this.id,\n    cran: {\n        \"package\": \"rkeops\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nrkeops = databricks.Library(\"rkeops\",\n    cluster_id=this[\"id\"],\n    cran={\n        \"package\": \"rkeops\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var rkeops = new Databricks.Library(\"rkeops\", new()\n    {\n        ClusterId = @this.Id,\n        Cran = new Databricks.Inputs.LibraryCranArgs\n        {\n            Package = \"rkeops\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewLibrary(ctx, \"rkeops\", \u0026databricks.LibraryArgs{\n\t\t\tClusterId: pulumi.Any(this.Id),\n\t\t\tCran: \u0026databricks.LibraryCranArgs{\n\t\t\t\tPackage: pulumi.String(\"rkeops\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Library;\nimport com.pulumi.databricks.LibraryArgs;\nimport com.pulumi.databricks.inputs.LibraryCranArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var rkeops = new Library(\"rkeops\", LibraryArgs.builder()\n            .clusterId(this_.id())\n            .cran(LibraryCranArgs.builder()\n                .package_(\"rkeops\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  rkeops:\n    type: databricks:Library\n    properties:\n      clusterId: ${this.id}\n      cran:\n        package: rkeops\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getClusters \" pulumi-lang-dotnet=\" databricks.getClusters \" pulumi-lang-go=\" getClusters \" pulumi-lang-python=\" get_clusters \" pulumi-lang-yaml=\" databricks.getClusters \" pulumi-lang-java=\" databricks.getClusters \"\u003e databricks.getClusters \u003c/span\u003edata to retrieve a list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" 
pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GlobalInitScript \" pulumi-lang-dotnet=\" databricks.GlobalInitScript \" pulumi-lang-go=\" GlobalInitScript \" pulumi-lang-python=\" GlobalInitScript \" pulumi-lang-yaml=\" databricks.GlobalInitScript \" pulumi-lang-java=\" databricks.GlobalInitScript \"\u003e databricks.GlobalInitScript \u003c/span\u003eto manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand databricks_job.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"clusterId":{"type":"string","description":"ID of the\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto install the library on.\n\nYou must specify exactly **one** of the following library types:\n"},"cran":{"$ref":"#/types/databricks:index/LibraryCran:LibraryCran","description":"Configuration block for a CRAN 
library. The block consists of the following fields:\n"},"egg":{"type":"string","description":"Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead.\n","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string","description":"Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.\n"},"libraryId":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/LibraryMaven:LibraryMaven","description":"Configuration block for a Maven library. The block consists of the following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/LibraryProviderConfig:LibraryProviderConfig","description":"Configuration block for management through the account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/LibraryPypi:LibraryPypi","description":"Configuration block for a PyPI library. The block consists of the following fields:\n"},"requirements":{"type":"string","description":"Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.\n"},"whl":{"type":"string","description":"Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. 
You may need to launch the cluster with an IAM role to access the S3 URI.\n"}},"required":["clusterId","libraryId"],"inputProperties":{"clusterId":{"type":"string","description":"ID of the\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto install the library on.\n\nYou must specify exactly **one** of the following library types:\n"},"cran":{"$ref":"#/types/databricks:index/LibraryCran:LibraryCran","description":"Configuration block for a CRAN library. The block consists of the following fields:\n"},"egg":{"type":"string","description":"Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead.\n","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string","description":"Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.\n"},"libraryId":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/LibraryMaven:LibraryMaven","description":"Configuration block for a Maven library. The block consists of the following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/LibraryProviderConfig:LibraryProviderConfig","description":"Configuration block for management through the account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/LibraryPypi:LibraryPypi","description":"Configuration block for a PyPI library. The block consists of the following fields:\n"},"requirements":{"type":"string","description":"Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.\n"},"whl":{"type":"string","description":"Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. 
For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.\n"}},"requiredInputs":["clusterId"],"stateInputs":{"description":"Input properties used for looking up and filtering Library resources.\n","properties":{"clusterId":{"type":"string","description":"ID of the\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto install the library on.\n\nYou must specify exactly **one** of the following library types:\n"},"cran":{"$ref":"#/types/databricks:index/LibraryCran:LibraryCran","description":"Configuration block for a CRAN library. The block consists of the following fields:\n"},"egg":{"type":"string","description":"Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead.\n","deprecationMessage":"The \u003cspan pulumi-lang-nodejs=\"`egg`\" pulumi-lang-dotnet=\"`Egg`\" pulumi-lang-go=\"`egg`\" pulumi-lang-python=\"`egg`\" pulumi-lang-yaml=\"`egg`\" pulumi-lang-java=\"`egg`\"\u003e`egg`\u003c/span\u003e library type is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`whl`\" pulumi-lang-dotnet=\"`Whl`\" pulumi-lang-go=\"`whl`\" pulumi-lang-python=\"`whl`\" pulumi-lang-yaml=\"`whl`\" pulumi-lang-java=\"`whl`\"\u003e`whl`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`pypi`\" pulumi-lang-dotnet=\"`Pypi`\" pulumi-lang-go=\"`pypi`\" pulumi-lang-python=\"`pypi`\" pulumi-lang-yaml=\"`pypi`\" pulumi-lang-java=\"`pypi`\"\u003e`pypi`\u003c/span\u003e instead."},"jar":{"type":"string","description":"Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.\n"},"libraryId":{"type":"string"},"maven":{"$ref":"#/types/databricks:index/LibraryMaven:LibraryMaven","description":"Configuration block for a Maven library. The block consists of the following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/LibraryProviderConfig:LibraryProviderConfig","description":"Configuration block for management through the account provider. This block consists of the following fields:\n"},"pypi":{"$ref":"#/types/databricks:index/LibraryPypi:LibraryPypi","description":"Configuration block for a PyPI library. The block consists of the following fields:\n"},"requirements":{"type":"string","description":"Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. 
For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.\n"},"whl":{"type":"string","description":"Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.\n"}},"type":"object"}},"databricks:index/materializedFeaturesFeatureTag:MaterializedFeaturesFeatureTag":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"key":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/MaterializedFeaturesFeatureTagProviderConfig:MaterializedFeaturesFeatureTagProviderConfig","description":"Configure the provider for management through account provider.\n"},"value":{"type":"string"}},"required":["key"],"inputProperties":{"key":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/MaterializedFeaturesFeatureTagProviderConfig:MaterializedFeaturesFeatureTagProviderConfig","description":"Configure the provider for management through account provider.\n"},"value":{"type":"string"}},"requiredInputs":["key"],"stateInputs":{"description":"Input properties used for looking up and filtering MaterializedFeaturesFeatureTag resources.\n","properties":{"key":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/MaterializedFeaturesFeatureTagProviderConfig:MaterializedFeaturesFeatureTagProviderConfig","description":"Configure the provider for management through account provider.\n"},"value":{"type":"string"}},"type":"object"}},"databricks:index/metastore:Metastore":{"description":"\u003e This resource can be used with an account or workspace-level provider.\n\nA metastore is the top-level container of objects in Unity Catalog. It stores data assets (tables and views) and the permissions that govern access to them. Databricks account admins can create metastores and assign them to Databricks workspaces in order to control which workloads use each metastore.\n\nUnity Catalog offers a new metastore with built in security and auditing. 
This is distinct to the metastore used in previous versions of Databricks (based on the Hive Metastore).\n\nA Unity Catalog metastore can be created without a root location \u0026 credential to maintain strict separation of storage across catalogs or environments.\n\n## Example Usage\n\nFor AWS\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: `s3://${metastore.id}/metastore`,\n    owner: \"uc admins\",\n    region: \"us-east-1\",\n    forceDestroy: true,\n});\nconst thisMetastoreAssignment = new databricks.MetastoreAssignment(\"this\", {\n    metastoreId: _this.id,\n    workspaceId: workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=f\"s3://{metastore['id']}/metastore\",\n    owner=\"uc admins\",\n    region=\"us-east-1\",\n    force_destroy=True)\nthis_metastore_assignment = databricks.MetastoreAssignment(\"this\",\n    metastore_id=this.id,\n    workspace_id=workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = $\"s3://{metastore.Id}/metastore\",\n        Owner = \"uc admins\",\n        Region = \"us-east-1\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreAssignment = new Databricks.MetastoreAssignment(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        WorkspaceId = workspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.Sprintf(\"s3://%v/metastore\", metastore.Id),\n\t\t\tOwner:        pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.String(\"us-east-1\"),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreAssignment(ctx, \"this\", \u0026databricks.MetastoreAssignmentArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tWorkspaceId: pulumi.Any(workspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.databricks.MetastoreAssignment;\nimport com.pulumi.databricks.MetastoreAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(String.format(\"s3://%s/metastore\", metastore.id()))\n            .owner(\"uc 
admins\")\n            .region(\"us-east-1\")\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreAssignment = new MetastoreAssignment(\"thisMetastoreAssignment\", MetastoreAssignmentArgs.builder()\n            .metastoreId(this_.id())\n            .workspaceId(workspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot: s3://${metastore.id}/metastore\n      owner: uc admins\n      region: us-east-1\n      forceDestroy: true\n  thisMetastoreAssignment:\n    type: databricks:MetastoreAssignment\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      workspaceId: ${workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor Azure\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: std.format({\n        input: \"abfss://%s@%s.dfs.core.windows.net/\",\n        args: [\n            unityCatalog.name,\n            unityCatalogAzurermStorageAccount.name,\n        ],\n    }).then(invoke =\u003e invoke.result),\n    owner: \"uc admins\",\n    region: \"eastus\",\n    forceDestroy: true,\n});\nconst thisMetastoreAssignment = new databricks.MetastoreAssignment(\"this\", {\n    metastoreId: _this.id,\n    workspaceId: workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=std.format(input=\"abfss://%s@%s.dfs.core.windows.net/\",\n        args=[\n            unity_catalog[\"name\"],\n            unity_catalog_azurerm_storage_account[\"name\"],\n        ]).result,\n    owner=\"uc admins\",\n    region=\"eastus\",\n    force_destroy=True)\nthis_metastore_assignment = databricks.MetastoreAssignment(\"this\",\n    metastore_id=this.id,\n    workspace_id=workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = Std.Format.Invoke(new()\n        {\n            Input = \"abfss://%s@%s.dfs.core.windows.net/\",\n            Args = new[]\n            {\n                unityCatalog.Name,\n                unityCatalogAzurermStorageAccount.Name,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        Owner = \"uc admins\",\n        Region = \"eastus\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreAssignment = new Databricks.MetastoreAssignment(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        WorkspaceId = workspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"abfss://%s@%s.dfs.core.windows.net/\",\n\t\t\tArgs: []interface{}{\n\t\t\t\tunityCatalog.Name,\n\t\t\t\tunityCatalogAzurermStorageAccount.Name,\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil 
{\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.String(invokeFormat.Result),\n\t\t\tOwner:        pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.String(\"eastus\"),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreAssignment(ctx, \"this\", \u0026databricks.MetastoreAssignmentArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tWorkspaceId: pulumi.Any(workspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport com.pulumi.databricks.MetastoreAssignment;\nimport com.pulumi.databricks.MetastoreAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(StdFunctions.format(FormatArgs.builder()\n                .input(\"abfss://%s@%s.dfs.core.windows.net/\")\n                .args(                \n                    unityCatalog.name(),\n                    unityCatalogAzurermStorageAccount.name())\n                .build()).result())\n            .owner(\"uc admins\")\n            .region(\"eastus\")\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreAssignment = new MetastoreAssignment(\"thisMetastoreAssignment\", MetastoreAssignmentArgs.builder()\n            .metastoreId(this_.id())\n            .workspaceId(workspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: abfss://%s@%s.dfs.core.windows.net/\n            args:\n              - ${unityCatalog.name}\n              - ${unityCatalogAzurermStorageAccount.name}\n          return: result\n      owner: uc admins\n      region: eastus\n      forceDestroy: true\n  thisMetastoreAssignment:\n    type: databricks:MetastoreAssignment\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      workspaceId: ${workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor GCP\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: `gs://${unityMetastore.name}`,\n    owner: \"uc admins\",\n    region: us_east1,\n    forceDestroy: true,\n});\nconst thisMetastoreAssignment = new databricks.MetastoreAssignment(\"this\", {\n    metastoreId: _this.id,\n    workspaceId: workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    
storage_root=f\"gs://{unity_metastore['name']}\",\n    owner=\"uc admins\",\n    region=us_east1,\n    force_destroy=True)\nthis_metastore_assignment = databricks.MetastoreAssignment(\"this\",\n    metastore_id=this.id,\n    workspace_id=workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = $\"gs://{unityMetastore.Name}\",\n        Owner = \"uc admins\",\n        Region = us_east1,\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreAssignment = new Databricks.MetastoreAssignment(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        WorkspaceId = workspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.Sprintf(\"gs://%v\", unityMetastore.Name),\n\t\t\tOwner:        pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.Any(us_east1),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreAssignment(ctx, \"this\", \u0026databricks.MetastoreAssignmentArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tWorkspaceId: pulumi.Any(workspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.databricks.MetastoreAssignment;\nimport com.pulumi.databricks.MetastoreAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(String.format(\"gs://%s\", unityMetastore.name()))\n            .owner(\"uc admins\")\n            .region(us_east1)\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreAssignment = new MetastoreAssignment(\"thisMetastoreAssignment\", MetastoreAssignmentArgs.builder()\n            .metastoreId(this_.id())\n            .workspaceId(workspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot: gs://${unityMetastore.name}\n      owner: uc admins\n      region: ${[\"us-east1\"]}\n      forceDestroy: true\n  thisMetastoreAssignment:\n    type: databricks:MetastoreAssignment\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      workspaceId: ${workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"cloud":{"type":"string","description":"Cloud vendor of the metastore home shard (e.g., \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" 
pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`azure`\" pulumi-lang-dotnet=\"`Azure`\" pulumi-lang-go=\"`azure`\" pulumi-lang-python=\"`azure`\" pulumi-lang-yaml=\"`azure`\" pulumi-lang-java=\"`azure`\"\u003e`azure`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`gcp`\" pulumi-lang-dotnet=\"`Gcp`\" pulumi-lang-go=\"`gcp`\" pulumi-lang-python=\"`gcp`\" pulumi-lang-yaml=\"`gcp`\" pulumi-lang-java=\"`gcp`\"\u003e`gcp`\u003c/span\u003e).\n"},"createdAt":{"type":"integer","description":"Time at which the metastore was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of metastore creator.\n"},"defaultDataAccessConfigId":{"type":"string","description":"(Optional) Unique identifier of the metastore's default data access configuration.\n"},"deltaSharingOrganizationName":{"type":"string","description":"The organization name of a Delta Sharing entity. This field is used for Databricks to Databricks sharing. Once this is set it cannot be removed and can only be modified to another valid value. To delete this value please taint and recreate the resource.\n"},"deltaSharingRecipientTokenLifetimeInSeconds":{"type":"integer","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingScope`\" pulumi-lang-dotnet=\"`DeltaSharingScope`\" pulumi-lang-go=\"`deltaSharingScope`\" pulumi-lang-python=\"`delta_sharing_scope`\" pulumi-lang-yaml=\"`deltaSharingScope`\" pulumi-lang-java=\"`deltaSharingScope`\"\u003e`delta_sharing_scope`\u003c/span\u003e. Used to set expiration duration in seconds on recipient data access tokens. Defaults to 31536000 (1 year).\n"},"deltaSharingScope":{"type":"string","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-dotnet=\"`DeltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-go=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-python=\"`delta_sharing_recipient_token_lifetime_in_seconds`\" pulumi-lang-yaml=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-java=\"`deltaSharingRecipientTokenLifetimeInSeconds`\"\u003e`delta_sharing_recipient_token_lifetime_in_seconds`\u003c/span\u003e. Used to enable delta sharing on the metastore. Valid values: `INTERNAL`, `INTERNAL_AND_EXTERNAL`.  
`INTERNAL` only allows sharing within the same account, and `INTERNAL_AND_EXTERNAL` allows cross account sharing and token based sharing.\n"},"externalAccessEnabled":{"type":"boolean","description":"Whether to allow non-DBR clients to directly access entities under the metastore.\n"},"forceDestroy":{"type":"boolean","description":"Destroy metastore regardless of its contents.\n"},"globalMetastoreId":{"type":"string","description":"Globally unique metastore ID across clouds and regions, of the form `cloud:region:metastore_id`.\n"},"metastoreId":{"type":"string","description":"Unique identifier of the metastore.\n"},"name":{"type":"string","description":"Name of metastore.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the metastore owner.\n"},"privilegeModelVersion":{"type":"string","description":"Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).\n"},"region":{"type":"string","description":"The region of the metastore\n"},"storageRoot":{"type":"string","description":"Path on cloud storage account, where managed \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e are stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource. If no \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e is defined for the metastore, each catalog must have a \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e defined.  
**It's recommended to define \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e on the catalog level.\n"},"storageRootCredentialId":{"type":"string","description":"(Optional) UUID of storage credential to access the metastore storage_root.\n"},"storageRootCredentialName":{"type":"string","description":"Name of the storage credential to access the metastore storage_root.\n"},"updatedAt":{"type":"integer","description":"Time at which the metastore was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the metastore.\n"}},"required":["cloud","createdAt","createdBy","globalMetastoreId","metastoreId","name","owner","privilegeModelVersion","region","updatedAt","updatedBy"],"inputProperties":{"defaultDataAccessConfigId":{"type":"string","description":"(Optional) Unique identifier of the metastore's default data access configuration.\n"},"deltaSharingOrganizationName":{"type":"string","description":"The organization name of a Delta Sharing entity. This field is used for Databricks to Databricks sharing. Once this is set it cannot be removed and can only be modified to another valid value. To delete this value please taint and recreate the resource.\n"},"deltaSharingRecipientTokenLifetimeInSeconds":{"type":"integer","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingScope`\" pulumi-lang-dotnet=\"`DeltaSharingScope`\" pulumi-lang-go=\"`deltaSharingScope`\" pulumi-lang-python=\"`delta_sharing_scope`\" pulumi-lang-yaml=\"`deltaSharingScope`\" pulumi-lang-java=\"`deltaSharingScope`\"\u003e`delta_sharing_scope`\u003c/span\u003e. Used to set expiration duration in seconds on recipient data access tokens. Defaults to 31536000 (1 year).\n"},"deltaSharingScope":{"type":"string","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-dotnet=\"`DeltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-go=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-python=\"`delta_sharing_recipient_token_lifetime_in_seconds`\" pulumi-lang-yaml=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-java=\"`deltaSharingRecipientTokenLifetimeInSeconds`\"\u003e`delta_sharing_recipient_token_lifetime_in_seconds`\u003c/span\u003e. Used to enable delta sharing on the metastore. Valid values: `INTERNAL`, `INTERNAL_AND_EXTERNAL`.  
`INTERNAL` only allows sharing within the same account, and `INTERNAL_AND_EXTERNAL` allows cross account sharing and token based sharing.\n"},"externalAccessEnabled":{"type":"boolean","description":"Whether to allow non-DBR clients to directly access entities under the metastore.\n"},"forceDestroy":{"type":"boolean","description":"Destroy metastore regardless of its contents.\n"},"name":{"type":"string","description":"Name of metastore.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the metastore owner.\n"},"privilegeModelVersion":{"type":"string","description":"Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).\n"},"region":{"type":"string","description":"The region of the metastore\n"},"storageRoot":{"type":"string","description":"Path on cloud storage account, where managed \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e are stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource. If no \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e is defined for the metastore, each catalog must have a \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e defined.  
**It's recommended to define \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e on the catalog level.\n","willReplaceOnChanges":true},"storageRootCredentialId":{"type":"string","description":"(Optional) UUID of storage credential to access the metastore storage_root.\n"},"storageRootCredentialName":{"type":"string","description":"Name of the storage credential to access the metastore storage_root.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering Metastore resources.\n","properties":{"cloud":{"type":"string","description":"Cloud vendor of the metastore home shard (e.g., \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`azure`\" pulumi-lang-dotnet=\"`Azure`\" pulumi-lang-go=\"`azure`\" pulumi-lang-python=\"`azure`\" pulumi-lang-yaml=\"`azure`\" pulumi-lang-java=\"`azure`\"\u003e`azure`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`gcp`\" pulumi-lang-dotnet=\"`Gcp`\" pulumi-lang-go=\"`gcp`\" pulumi-lang-python=\"`gcp`\" pulumi-lang-yaml=\"`gcp`\" pulumi-lang-java=\"`gcp`\"\u003e`gcp`\u003c/span\u003e).\n"},"createdAt":{"type":"integer","description":"Time at which the metastore was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of metastore creator.\n"},"defaultDataAccessConfigId":{"type":"string","description":"(Optional) Unique identifier of the metastore's default data access configuration.\n"},"deltaSharingOrganizationName":{"type":"string","description":"The organization name of a Delta Sharing entity. This field is used for Databricks to Databricks sharing. Once this is set it cannot be removed and can only be modified to another valid value. To delete this value please taint and recreate the resource.\n"},"deltaSharingRecipientTokenLifetimeInSeconds":{"type":"integer","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingScope`\" pulumi-lang-dotnet=\"`DeltaSharingScope`\" pulumi-lang-go=\"`deltaSharingScope`\" pulumi-lang-python=\"`delta_sharing_scope`\" pulumi-lang-yaml=\"`deltaSharingScope`\" pulumi-lang-java=\"`deltaSharingScope`\"\u003e`delta_sharing_scope`\u003c/span\u003e. Used to set expiration duration in seconds on recipient data access tokens. Defaults to 31536000 (1 year).\n"},"deltaSharingScope":{"type":"string","description":"Required along with \u003cspan pulumi-lang-nodejs=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-dotnet=\"`DeltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-go=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-python=\"`delta_sharing_recipient_token_lifetime_in_seconds`\" pulumi-lang-yaml=\"`deltaSharingRecipientTokenLifetimeInSeconds`\" pulumi-lang-java=\"`deltaSharingRecipientTokenLifetimeInSeconds`\"\u003e`delta_sharing_recipient_token_lifetime_in_seconds`\u003c/span\u003e. Used to enable delta sharing on the metastore. Valid values: `INTERNAL`, `INTERNAL_AND_EXTERNAL`.  
`INTERNAL` only allows sharing within the same account, and `INTERNAL_AND_EXTERNAL` allows cross account sharing and token based sharing.\n"},"externalAccessEnabled":{"type":"boolean","description":"Whether to allow non-DBR clients to directly access entities under the metastore.\n"},"forceDestroy":{"type":"boolean","description":"Destroy metastore regardless of its contents.\n"},"globalMetastoreId":{"type":"string","description":"Globally unique metastore ID across clouds and regions, of the form `cloud:region:metastore_id`.\n"},"metastoreId":{"type":"string","description":"Unique identifier of the metastore.\n"},"name":{"type":"string","description":"Name of metastore.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the metastore owner.\n"},"privilegeModelVersion":{"type":"string","description":"Privilege model version of the metastore, of the form `major.minor` (e.g., `1.0`).\n"},"region":{"type":"string","description":"The region of the metastore\n"},"storageRoot":{"type":"string","description":"Path on cloud storage account, where managed \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e are stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource. If no \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e is defined for the metastore, each catalog must have a \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e defined.  
**It's recommended to define \u003cspan pulumi-lang-nodejs=\"`storageRoot`\" pulumi-lang-dotnet=\"`StorageRoot`\" pulumi-lang-go=\"`storageRoot`\" pulumi-lang-python=\"`storage_root`\" pulumi-lang-yaml=\"`storageRoot`\" pulumi-lang-java=\"`storageRoot`\"\u003e`storage_root`\u003c/span\u003e on the catalog level.\n","willReplaceOnChanges":true},"storageRootCredentialId":{"type":"string","description":"(Optional) UUID of storage credential to access the metastore storage_root.\n"},"storageRootCredentialName":{"type":"string","description":"Name of the storage credential to access the metastore storage_root.\n"},"updatedAt":{"type":"integer","description":"Time at which the metastore was last modified, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of user who last modified the metastore.\n"}},"type":"object"}},"databricks:index/metastoreAssignment:MetastoreAssignment":{"description":"\u003e This resource can be used with an account or workspace-level provider.\n\nA single\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003ecan be shared across Databricks workspaces, and each linked workspace has a consistent view of the data and a single set of access policies. You can only create a single metastore for each region in which your organization operates.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: `s3://${metastore.id}/metastore`,\n    owner: \"uc admins\",\n    region: \"us-east-1\",\n    forceDestroy: true,\n});\nconst thisMetastoreAssignment = new databricks.MetastoreAssignment(\"this\", {\n    metastoreId: _this.id,\n    workspaceId: workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=f\"s3://{metastore['id']}/metastore\",\n    owner=\"uc admins\",\n    region=\"us-east-1\",\n    force_destroy=True)\nthis_metastore_assignment = databricks.MetastoreAssignment(\"this\",\n    metastore_id=this.id,\n    workspace_id=workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = $\"s3://{metastore.Id}/metastore\",\n        Owner = \"uc admins\",\n        Region = \"us-east-1\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreAssignment = new Databricks.MetastoreAssignment(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        WorkspaceId = workspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.Sprintf(\"s3://%v/metastore\", metastore.Id),\n\t\t\tOwner:        
pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.String(\"us-east-1\"),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreAssignment(ctx, \"this\", \u0026databricks.MetastoreAssignmentArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tWorkspaceId: pulumi.Any(workspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.databricks.MetastoreAssignment;\nimport com.pulumi.databricks.MetastoreAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(String.format(\"s3://%s/metastore\", metastore.id()))\n            .owner(\"uc admins\")\n            .region(\"us-east-1\")\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreAssignment = new MetastoreAssignment(\"thisMetastoreAssignment\", MetastoreAssignmentArgs.builder()\n            .metastoreId(this_.id())\n            .workspaceId(workspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot: s3://${metastore.id}/metastore\n      owner: uc admins\n      region: us-east-1\n      forceDestroy: true\n  thisMetastoreAssignment:\n    type: databricks:MetastoreAssignment\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      workspaceId: ${workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"defaultCatalogName":{"type":"string","description":"Default catalog used for this assignment. Please use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003einstead.\n","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003eresource instead"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore\n"},"workspaceId":{"type":"string","description":"id of the workspace for the assignment\n"}},"required":["defaultCatalogName","metastoreId","workspaceId"],"inputProperties":{"defaultCatalogName":{"type":"string","description":"Default catalog used for this assignment. 
Please use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003einstead.\n","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003eresource instead"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"id of the workspace for the assignment\n","willReplaceOnChanges":true}},"requiredInputs":["metastoreId","workspaceId"],"stateInputs":{"description":"Input properties used for looking up and filtering MetastoreAssignment resources.\n","properties":{"defaultCatalogName":{"type":"string","description":"Default catalog used for this assignment. Please use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003einstead.\n","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" databricks.DefaultNamespaceSetting \" pulumi-lang-dotnet=\" databricks.DefaultNamespaceSetting \" pulumi-lang-go=\" DefaultNamespaceSetting \" pulumi-lang-python=\" DefaultNamespaceSetting \" pulumi-lang-yaml=\" databricks.DefaultNamespaceSetting \" pulumi-lang-java=\" databricks.DefaultNamespaceSetting \"\u003e databricks.DefaultNamespaceSetting \u003c/span\u003eresource instead"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"id of the workspace for the assignment\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/metastoreDataAccess:MetastoreDataAccess":{"description":"\u003e This resource can be used with an account or workspace-level provider.\n\nOptionally, each\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003ecan have a default\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003edefined as \u003cspan pulumi-lang-nodejs=\"`databricks.MetastoreDataAccess`\" pulumi-lang-dotnet=\"`databricks.MetastoreDataAccess`\" pulumi-lang-go=\"`MetastoreDataAccess`\" 
pulumi-lang-python=\"`MetastoreDataAccess`\" pulumi-lang-yaml=\"`databricks.MetastoreDataAccess`\" pulumi-lang-java=\"`databricks.MetastoreDataAccess`\"\u003e`databricks.MetastoreDataAccess`\u003c/span\u003e. This will be used by Unity Catalog to access data in the root storage location if defined.\n\n## Example Usage\n\nFor AWS\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: `s3://${metastore.id}/metastore`,\n    owner: \"uc admins\",\n    region: \"us-east-1\",\n    forceDestroy: true,\n});\nconst thisMetastoreDataAccess = new databricks.MetastoreDataAccess(\"this\", {\n    metastoreId: _this.id,\n    name: metastoreDataAccess.name,\n    awsIamRole: {\n        roleArn: metastoreDataAccess.arn,\n    },\n    isDefault: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=f\"s3://{metastore['id']}/metastore\",\n    owner=\"uc admins\",\n    region=\"us-east-1\",\n    force_destroy=True)\nthis_metastore_data_access = databricks.MetastoreDataAccess(\"this\",\n    metastore_id=this.id,\n    name=metastore_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": metastore_data_access[\"arn\"],\n    },\n    is_default=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = $\"s3://{metastore.Id}/metastore\",\n        Owner = \"uc admins\",\n        Region = \"us-east-1\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreDataAccess = new Databricks.MetastoreDataAccess(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        Name = metastoreDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.MetastoreDataAccessAwsIamRoleArgs\n        {\n            RoleArn = metastoreDataAccess.Arn,\n        },\n        IsDefault = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.Sprintf(\"s3://%v/metastore\", metastore.Id),\n\t\t\tOwner:        pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.String(\"us-east-1\"),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreDataAccess(ctx, \"this\", \u0026databricks.MetastoreDataAccessArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tName:        pulumi.Any(metastoreDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.MetastoreDataAccessAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(metastoreDataAccess.Arn),\n\t\t\t},\n\t\t\tIsDefault: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport 
com.pulumi.databricks.MetastoreDataAccess;\nimport com.pulumi.databricks.MetastoreDataAccessArgs;\nimport com.pulumi.databricks.inputs.MetastoreDataAccessAwsIamRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(String.format(\"s3://%s/metastore\", metastore.id()))\n            .owner(\"uc admins\")\n            .region(\"us-east-1\")\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreDataAccess = new MetastoreDataAccess(\"thisMetastoreDataAccess\", MetastoreDataAccessArgs.builder()\n            .metastoreId(this_.id())\n            .name(metastoreDataAccess.name())\n            .awsIamRole(MetastoreDataAccessAwsIamRoleArgs.builder()\n                .roleArn(metastoreDataAccess.arn())\n                .build())\n            .isDefault(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot: s3://${metastore.id}/metastore\n      owner: uc admins\n      region: us-east-1\n      forceDestroy: true\n  thisMetastoreDataAccess:\n    type: databricks:MetastoreDataAccess\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      name: ${metastoreDataAccess.name}\n      awsIamRole:\n        roleArn: ${metastoreDataAccess.arn}\n      isDefault: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor Azure using managed identity as credential (recommended)\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst _this = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: std.format({\n        input: \"abfss://%s@%s.dfs.core.windows.net/\",\n        args: [\n            unityCatalog.name,\n            unityCatalogAzurermStorageAccount.name,\n        ],\n    }).then(invoke =\u003e invoke.result),\n    owner: \"uc admins\",\n    region: \"eastus\",\n    forceDestroy: true,\n});\nconst thisMetastoreDataAccess = new databricks.MetastoreDataAccess(\"this\", {\n    metastoreId: _this.id,\n    name: \"mi_dac\",\n    azureManagedIdentity: {\n        accessConnectorId: accessConnectorId,\n    },\n    isDefault: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nthis = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=std.format(input=\"abfss://%s@%s.dfs.core.windows.net/\",\n        args=[\n            unity_catalog[\"name\"],\n            unity_catalog_azurerm_storage_account[\"name\"],\n        ]).result,\n    owner=\"uc admins\",\n    region=\"eastus\",\n    force_destroy=True)\nthis_metastore_data_access = databricks.MetastoreDataAccess(\"this\",\n    metastore_id=this.id,\n    name=\"mi_dac\",\n    azure_managed_identity={\n        \"access_connector_id\": access_connector_id,\n    },\n    is_default=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    
var @this = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = Std.Format.Invoke(new()\n        {\n            Input = \"abfss://%s@%s.dfs.core.windows.net/\",\n            Args = new[]\n            {\n                unityCatalog.Name,\n                unityCatalogAzurermStorageAccount.Name,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        Owner = \"uc admins\",\n        Region = \"eastus\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastoreDataAccess = new Databricks.MetastoreDataAccess(\"this\", new()\n    {\n        MetastoreId = @this.Id,\n        Name = \"mi_dac\",\n        AzureManagedIdentity = new Databricks.Inputs.MetastoreDataAccessAzureManagedIdentityArgs\n        {\n            AccessConnectorId = accessConnectorId,\n        },\n        IsDefault = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"abfss://%s@%s.dfs.core.windows.net/\",\n\t\t\tArgs: []interface{}{\n\t\t\t\tunityCatalog.Name,\n\t\t\t\tunityCatalogAzurermStorageAccount.Name,\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.String(invokeFormat.Result),\n\t\t\tOwner:        pulumi.String(\"uc admins\"),\n\t\t\tRegion:       pulumi.String(\"eastus\"),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMetastoreDataAccess(ctx, \"this\", \u0026databricks.MetastoreDataAccessArgs{\n\t\t\tMetastoreId: this.ID(),\n\t\t\tName:        pulumi.String(\"mi_dac\"),\n\t\t\tAzureManagedIdentity: \u0026databricks.MetastoreDataAccessAzureManagedIdentityArgs{\n\t\t\t\tAccessConnectorId: pulumi.Any(accessConnectorId),\n\t\t\t},\n\t\t\tIsDefault: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport com.pulumi.databricks.MetastoreDataAccess;\nimport com.pulumi.databricks.MetastoreDataAccessArgs;\nimport com.pulumi.databricks.inputs.MetastoreDataAccessAzureManagedIdentityArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Metastore(\"this\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(StdFunctions.format(FormatArgs.builder()\n                .input(\"abfss://%s@%s.dfs.core.windows.net/\")\n                .args(                \n                    unityCatalog.name(),\n                    unityCatalogAzurermStorageAccount.name())\n                .build()).result())\n            .owner(\"uc admins\")\n            
.region(\"eastus\")\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastoreDataAccess = new MetastoreDataAccess(\"thisMetastoreDataAccess\", MetastoreDataAccessArgs.builder()\n            .metastoreId(this_.id())\n            .name(\"mi_dac\")\n            .azureManagedIdentity(MetastoreDataAccessAzureManagedIdentityArgs.builder()\n                .accessConnectorId(accessConnectorId)\n                .build())\n            .isDefault(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Metastore\n    properties:\n      name: primary\n      storageRoot:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: abfss://%s@%s.dfs.core.windows.net/\n            args:\n              - ${unityCatalog.name}\n              - ${unityCatalogAzurermStorageAccount.name}\n          return: result\n      owner: uc admins\n      region: eastus\n      forceDestroy: true\n  thisMetastoreDataAccess:\n    type: databricks:MetastoreDataAccess\n    name: this\n    properties:\n      metastoreId: ${this.id}\n      name: mi_dac\n      azureManagedIdentity:\n        accessConnectorId: ${accessConnectorId}\n      isDefault: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/MetastoreDataAccessAwsIamRole:MetastoreDataAccessAwsIamRole"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureManagedIdentity:MetastoreDataAccessAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureServicePrincipal:MetastoreDataAccessAzureServicePrincipal"},"cloudflareApiToken":{"$ref":"#/types/databricks:index/MetastoreDataAccessCloudflareApiToken:MetastoreDataAccessCloudflareApiToken"},"comment":{"type":"string"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/MetastoreDataAccessDatabricksGcpServiceAccount:MetastoreDataAccessDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean"},"forceUpdate":{"type":"boolean"},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/MetastoreDataAccessGcpServiceAccountKey:MetastoreDataAccessGcpServiceAccountKey"},"isDefault":{"type":"boolean","description":"whether to set this credential as the default for the metastore. 
In practice, this should always be true.\n"},"isolationMode":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string"},"owner":{"type":"string"},"readOnly":{"type":"boolean"},"skipValidation":{"type":"boolean"}},"required":["databricksGcpServiceAccount","isolationMode","metastoreId","name","owner"],"inputProperties":{"awsIamRole":{"$ref":"#/types/databricks:index/MetastoreDataAccessAwsIamRole:MetastoreDataAccessAwsIamRole","willReplaceOnChanges":true},"azureManagedIdentity":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureManagedIdentity:MetastoreDataAccessAzureManagedIdentity","willReplaceOnChanges":true},"azureServicePrincipal":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureServicePrincipal:MetastoreDataAccessAzureServicePrincipal","willReplaceOnChanges":true},"cloudflareApiToken":{"$ref":"#/types/databricks:index/MetastoreDataAccessCloudflareApiToken:MetastoreDataAccessCloudflareApiToken","willReplaceOnChanges":true},"comment":{"type":"string","willReplaceOnChanges":true},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/MetastoreDataAccessDatabricksGcpServiceAccount:MetastoreDataAccessDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","willReplaceOnChanges":true},"forceUpdate":{"type":"boolean","willReplaceOnChanges":true},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/MetastoreDataAccessGcpServiceAccountKey:MetastoreDataAccessGcpServiceAccountKey","willReplaceOnChanges":true},"isDefault":{"type":"boolean","description":"whether to set this credential as the default for the metastore. In practice, this should always be true.\n","willReplaceOnChanges":true},"isolationMode":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","willReplaceOnChanges":true},"owner":{"type":"string"},"readOnly":{"type":"boolean","willReplaceOnChanges":true},"skipValidation":{"type":"boolean","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering MetastoreDataAccess resources.\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/MetastoreDataAccessAwsIamRole:MetastoreDataAccessAwsIamRole","willReplaceOnChanges":true},"azureManagedIdentity":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureManagedIdentity:MetastoreDataAccessAzureManagedIdentity","willReplaceOnChanges":true},"azureServicePrincipal":{"$ref":"#/types/databricks:index/MetastoreDataAccessAzureServicePrincipal:MetastoreDataAccessAzureServicePrincipal","willReplaceOnChanges":true},"cloudflareApiToken":{"$ref":"#/types/databricks:index/MetastoreDataAccessCloudflareApiToken:MetastoreDataAccessCloudflareApiToken","willReplaceOnChanges":true},"comment":{"type":"string","willReplaceOnChanges":true},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/MetastoreDataAccessDatabricksGcpServiceAccount:MetastoreDataAccessDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","willReplaceOnChanges":true},"forceUpdate":{"type":"boolean","willReplaceOnChanges":true},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/MetastoreDataAccessGcpServiceAccountKey:MetastoreDataAccessGcpServiceAccountKey","willReplaceOnChanges":true},"isDefault":{"type":"boolean","description":"whether to set this credential as the default for the metastore. 
In practice, this should always be true.\n","willReplaceOnChanges":true},"isolationMode":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","willReplaceOnChanges":true},"owner":{"type":"string"},"readOnly":{"type":"boolean","willReplaceOnChanges":true},"skipValidation":{"type":"boolean","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/metastoreProvider:MetastoreProvider":{"description":"In Delta Sharing, a provider is an entity that shares data with a recipient. Within a metastore, Unity Catalog provides the ability to create a provider which contains a list of shares that have been shared with you.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.MetastoreProvider`\" pulumi-lang-dotnet=\"`databricks.MetastoreProvider`\" pulumi-lang-go=\"`MetastoreProvider`\" pulumi-lang-python=\"`MetastoreProvider`\" pulumi-lang-yaml=\"`databricks.MetastoreProvider`\" pulumi-lang-java=\"`databricks.MetastoreProvider`\"\u003e`databricks.MetastoreProvider`\u003c/span\u003e is contained within\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eand can contain a list of shares that have been shared with you.\n\n\u003e Databricks to Databricks sharing automatically creates the provider.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst dbprovider = new databricks.MetastoreProvider(\"dbprovider\", {\n    name: \"terraform-test-provider\",\n    comment: \"made by terraform 2\",\n    authenticationType: \"TOKEN\",\n    recipientProfileStr: JSON.stringify({\n        shareCredentialsVersion: 1,\n        bearerToken: \"token\",\n        endpoint: \"endpoint\",\n        expirationTime: \"expiration-time\",\n    }),\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\ndbprovider = databricks.MetastoreProvider(\"dbprovider\",\n    name=\"terraform-test-provider\",\n    comment=\"made by terraform 2\",\n    authentication_type=\"TOKEN\",\n    recipient_profile_str=json.dumps({\n        \"shareCredentialsVersion\": 1,\n        \"bearerToken\": \"token\",\n        \"endpoint\": \"endpoint\",\n        \"expirationTime\": \"expiration-time\",\n    }))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var dbprovider = new Databricks.MetastoreProvider(\"dbprovider\", new()\n    {\n        Name = \"terraform-test-provider\",\n        Comment = \"made by terraform 2\",\n        AuthenticationType = \"TOKEN\",\n        RecipientProfileStr = JsonSerializer.Serialize(new Dictionary\u003cstring, object?\u003e\n        {\n            [\"shareCredentialsVersion\"] = 1,\n            [\"bearerToken\"] = \"token\",\n            [\"endpoint\"] = \"endpoint\",\n            [\"expirationTime\"] = \"expiration-time\",\n        }),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx 
*pulumi.Context) error {\n\t\ttmpJSON0, err := json.Marshal(map[string]interface{}{\n\t\t\t\"shareCredentialsVersion\": 1,\n\t\t\t\"bearerToken\":             \"token\",\n\t\t\t\"endpoint\":                \"endpoint\",\n\t\t\t\"expirationTime\":          \"expiration-time\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\t_, err = databricks.NewMetastoreProvider(ctx, \"dbprovider\", \u0026databricks.MetastoreProviderArgs{\n\t\t\tName:                pulumi.String(\"terraform-test-provider\"),\n\t\t\tComment:             pulumi.String(\"made by terraform 2\"),\n\t\t\tAuthenticationType:  pulumi.String(\"TOKEN\"),\n\t\t\tRecipientProfileStr: pulumi.String(json0),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MetastoreProvider;\nimport com.pulumi.databricks.MetastoreProviderArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var dbprovider = new MetastoreProvider(\"dbprovider\", MetastoreProviderArgs.builder()\n            .name(\"terraform-test-provider\")\n            .comment(\"made by terraform 2\")\n            .authenticationType(\"TOKEN\")\n            .recipientProfileStr(serializeJson(\n                jsonObject(\n                    jsonProperty(\"shareCredentialsVersion\", 1),\n                    jsonProperty(\"bearerToken\", \"token\"),\n                    jsonProperty(\"endpoint\", \"endpoint\"),\n                    jsonProperty(\"expirationTime\", \"expiration-time\")\n                )))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  dbprovider:\n    type: databricks:MetastoreProvider\n    properties:\n      name: terraform-test-provider\n      comment: made by terraform 2\n      authenticationType: TOKEN\n      recipientProfileStr:\n        fn::toJSON:\n          shareCredentialsVersion: 1\n          bearerToken: token\n          endpoint: endpoint\n          expirationTime: expiration-time\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003edata to list tables within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getSchemas \" pulumi-lang-dotnet=\" databricks.getSchemas \" pulumi-lang-go=\" getSchemas \" pulumi-lang-python=\" get_schemas \" pulumi-lang-yaml=\" databricks.getSchemas \" pulumi-lang-java=\" databricks.getSchemas \"\u003e databricks.getSchemas \u003c/span\u003edata to list schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCatalogs \" pulumi-lang-dotnet=\" databricks.getCatalogs \" pulumi-lang-go=\" getCatalogs \" pulumi-lang-python=\" get_catalogs \" pulumi-lang-yaml=\" databricks.getCatalogs \" pulumi-lang-java=\" databricks.getCatalogs \"\u003e 
databricks.getCatalogs \u003c/span\u003edata to list catalogs within Unity Catalog.\n","properties":{"authenticationType":{"type":"string","description":"The delta sharing authentication type. Valid values are `TOKEN`.\n"},"comment":{"type":"string","description":"Description about the provider.\n"},"name":{"type":"string","description":"Name of provider. Change forces creation of a new resource.\n"},"providerConfig":{"$ref":"#/types/databricks:index/MetastoreProviderProviderConfig:MetastoreProviderProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"recipientProfileStr":{"type":"string","description":"This is the json file that is created from a recipient url.\n","secret":true}},"required":["authenticationType","name","recipientProfileStr"],"inputProperties":{"authenticationType":{"type":"string","description":"The delta sharing authentication type. Valid values are `TOKEN`.\n"},"comment":{"type":"string","description":"Description about the provider.\n"},"name":{"type":"string","description":"Name of provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MetastoreProviderProviderConfig:MetastoreProviderProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"recipientProfileStr":{"type":"string","description":"This is the json file that is created from a recipient url.\n","secret":true}},"requiredInputs":["authenticationType","recipientProfileStr"],"stateInputs":{"description":"Input properties used for looking up and filtering MetastoreProvider resources.\n","properties":{"authenticationType":{"type":"string","description":"The delta sharing authentication type. Valid values are `TOKEN`.\n"},"comment":{"type":"string","description":"Description about the provider.\n"},"name":{"type":"string","description":"Name of provider. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MetastoreProviderProviderConfig:MetastoreProviderProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"recipientProfileStr":{"type":"string","description":"This is the json file that is created from a recipient url.\n","secret":true}},"type":"object"}},"databricks:index/mlflowExperiment:MlflowExperiment":{"description":"This resource allows you to manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = databricks.getCurrentUser({});\nconst _this = new databricks.MlflowExperiment(\"this\", {\n    name: me.then(me =\u003e `${me.home}/Sample`),\n    artifactLocation: \"s3://bucket/my-experiment\",\n    tags: [\n        {\n            key: \"key1\",\n            value: \"value1\",\n        },\n        {\n            key: \"key2\",\n            value: \"value2\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.get_current_user()\nthis = databricks.MlflowExperiment(\"this\",\n    name=f\"{me.home}/Sample\",\n    artifact_location=\"s3://bucket/my-experiment\",\n    tags=[\n        {\n            \"key\": \"key1\",\n            \"value\": \"value1\",\n        },\n        {\n            \"key\": \"key2\",\n            \"value\": \"value2\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var @this = new Databricks.MlflowExperiment(\"this\", new()\n    {\n        Name = $\"{me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Home)}/Sample\",\n        ArtifactLocation = \"s3://bucket/my-experiment\",\n        Tags = new[]\n        {\n            new Databricks.Inputs.MlflowExperimentTagArgs\n            {\n                Key = \"key1\",\n                Value = \"value1\",\n            },\n            new Databricks.Inputs.MlflowExperimentTagArgs\n            {\n                Key = \"key2\",\n                Value = \"value2\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMlflowExperiment(ctx, \"this\", \u0026databricks.MlflowExperimentArgs{\n\t\t\tName:             pulumi.Sprintf(\"%v/Sample\", me.Home),\n\t\t\tArtifactLocation: pulumi.String(\"s3://bucket/my-experiment\"),\n\t\t\tTags: databricks.MlflowExperimentTagArray{\n\t\t\t\t\u0026databricks.MlflowExperimentTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key1\"),\n\t\t\t\t\tValue: pulumi.String(\"value1\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.MlflowExperimentTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key2\"),\n\t\t\t\t\tValue: pulumi.String(\"value2\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.MlflowExperiment;\nimport com.pulumi.databricks.MlflowExperimentArgs;\nimport com.pulumi.databricks.inputs.MlflowExperimentTagArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var this_ = new MlflowExperiment(\"this\", MlflowExperimentArgs.builder()\n            .name(String.format(\"%s/Sample\", me.home()))\n            .artifactLocation(\"s3://bucket/my-experiment\")\n            .tags(            \n                MlflowExperimentTagArgs.builder()\n                    .key(\"key1\")\n                    .value(\"value1\")\n                    .build(),\n                MlflowExperimentTagArgs.builder()\n                    .key(\"key2\")\n                    .value(\"value2\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MlflowExperiment\n    properties:\n      name: ${me.home}/Sample\n      artifactLocation: s3://bucket/my-experiment\n      tags:\n        - key: key1\n          value: value1\n        - key: key2\n          value: value2\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Read*, *Edit*, or *Manage* individual experiments.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eto create [Models in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eto create models in the [workspace model 
registry](https://docs.databricks.com/en/mlflow/model-registry.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks Workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n","properties":{"artifactLocation":{"type":"string","description":"Path to artifact location of the MLflow experiment.\n"},"creationTime":{"type":"integer"},"description":{"type":"string","deprecationMessage":"Remove the description attribute as it no longer is used and will be removed in a future version."},"experimentId":{"type":"string"},"lastUpdateTime":{"type":"integer"},"lifecycleStage":{"type":"string"},"name":{"type":"string","description":"Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/\u003csome-username\u003e/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowExperimentProviderConfig:MlflowExperimentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowExperimentTag:MlflowExperimentTag"},"description":"Tags for the MLflow experiment.\n"}},"required":["creationTime","experimentId","lastUpdateTime","lifecycleStage","name","tags"],"inputProperties":{"artifactLocation":{"type":"string","description":"Path to artifact location of the MLflow experiment.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer"},"description":{"type":"string","deprecationMessage":"Remove the description attribute as it no longer is used and will be removed in a future version."},"experimentId":{"type":"string"},"lastUpdateTime":{"type":"integer"},"lifecycleStage":{"type":"string"},"name":{"type":"string","description":"Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/\u003csome-username\u003e/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowExperimentProviderConfig:MlflowExperimentProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowExperimentTag:MlflowExperimentTag"},"description":"Tags for the MLflow experiment.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering MlflowExperiment resources.\n","properties":{"artifactLocation":{"type":"string","description":"Path to artifact location of the MLflow experiment.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer"},"description":{"type":"string","deprecationMessage":"Remove the description attribute as it no longer is used and will be removed in a future version."},"experimentId":{"type":"string"},"lastUpdateTime":{"type":"integer"},"lifecycleStage":{"type":"string"},"name":{"type":"string","description":"Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/\u003csome-username\u003e/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowExperimentProviderConfig:MlflowExperimentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowExperimentTag:MlflowExperimentTag"},"description":"Tags for the MLflow experiment.\n"}},"type":"object"}},"databricks:index/mlflowModel:MlflowModel":{"description":"This resource allows you to create [MLflow models](https://docs.databricks.com/applications/mlflow/models.html) in Databricks.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e This documentation covers the Workspace Model Registry. Databricks recommends using Models in Unity Catalog. 
Models in Unity Catalog provides centralized model governance, cross-workspace access, lineage, and deployment.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst test = new databricks.MlflowModel(\"test\", {\n    name: \"My MLflow Model\",\n    description: \"My MLflow model description\",\n    tags: [\n        {\n            key: \"key1\",\n            value: \"value1\",\n        },\n        {\n            key: \"key2\",\n            value: \"value2\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest = databricks.MlflowModel(\"test\",\n    name=\"My MLflow Model\",\n    description=\"My MLflow model description\",\n    tags=[\n        {\n            \"key\": \"key1\",\n            \"value\": \"value1\",\n        },\n        {\n            \"key\": \"key2\",\n            \"value\": \"value2\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var test = new Databricks.MlflowModel(\"test\", new()\n    {\n        Name = \"My MLflow Model\",\n        Description = \"My MLflow model description\",\n        Tags = new[]\n        {\n            new Databricks.Inputs.MlflowModelTagArgs\n            {\n                Key = \"key1\",\n                Value = \"value1\",\n            },\n            new Databricks.Inputs.MlflowModelTagArgs\n            {\n                Key = \"key2\",\n                Value = \"value2\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMlflowModel(ctx, \"test\", \u0026databricks.MlflowModelArgs{\n\t\t\tName:        pulumi.String(\"My MLflow Model\"),\n\t\t\tDescription: pulumi.String(\"My MLflow model description\"),\n\t\t\tTags: databricks.MlflowModelTagArray{\n\t\t\t\t\u0026databricks.MlflowModelTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key1\"),\n\t\t\t\t\tValue: pulumi.String(\"value1\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.MlflowModelTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key2\"),\n\t\t\t\t\tValue: pulumi.String(\"value2\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MlflowModel;\nimport com.pulumi.databricks.MlflowModelArgs;\nimport com.pulumi.databricks.inputs.MlflowModelTagArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var test = new MlflowModel(\"test\", MlflowModelArgs.builder()\n            .name(\"My MLflow Model\")\n            .description(\"My MLflow model description\")\n            .tags(            \n                MlflowModelTagArgs.builder()\n                    .key(\"key1\")\n                    .value(\"value1\")\n                    .build(),\n                
MlflowModelTagArgs.builder()\n                    .key(\"key2\")\n                    .value(\"value2\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  test:\n    type: databricks:MlflowModel\n    properties:\n      name: My MLflow Model\n      description: My MLflow model description\n      tags:\n        - key: key1\n          value: value1\n        - key: key2\n          value: value2\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Read*, *Edit*, *Manage Staging Versions*, *Manage Production Versions*, and *Manage* individual models.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eto create [Models in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eto manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks 
Workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n","properties":{"description":{"type":"string","description":"The description of the MLflow model.\n"},"name":{"type":"string","description":"Name of MLflow model. Change of name triggers new resource.\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowModelProviderConfig:MlflowModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"registeredModelId":{"type":"string"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowModelTag:MlflowModelTag"},"description":"Tags for the MLflow model.\n"}},"required":["name","registeredModelId"],"inputProperties":{"description":{"type":"string","description":"The description of the MLflow model.\n"},"name":{"type":"string","description":"Name of MLflow model. Change of name triggers new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MlflowModelProviderConfig:MlflowModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowModelTag:MlflowModelTag"},"description":"Tags for the MLflow model.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering MlflowModel resources.\n","properties":{"description":{"type":"string","description":"The description of the MLflow model.\n"},"name":{"type":"string","description":"Name of MLflow model. Change of name triggers new resource.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MlflowModelProviderConfig:MlflowModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"registeredModelId":{"type":"string"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/MlflowModelTag:MlflowModelTag"},"description":"Tags for the MLflow model.\n"}},"type":"object"}},"databricks:index/mlflowWebhook:MlflowWebhook":{"description":"This resource allows you to create [MLflow Model Registry Webhooks](https://docs.databricks.com/applications/mlflow/model-registry-webhooks.html) in Databricks.  Webhooks enable you to listen for Model Registry events so your integrations can automatically trigger actions. You can use webhooks to automate and integrate your machine learning pipeline with existing CI/CD tools and workflows. 
Webhooks allow triggering execution of a Databricks job or calling a web service on specific event(s) generated in the MLflow Registry - stage transitioning, creation of a registered model, creation of a transition request, etc.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n### Triggering Databricks job\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst me = databricks.getCurrentUser({});\nconst latest = databricks.getSparkVersion({});\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst _this = new databricks.Notebook(\"this\", {\n    path: me.then(me =\u003e `${me.home}/MLFlowWebhook`),\n    language: \"PYTHON\",\n    contentBase64: std.base64encode({\n        input: `import json\n \nevent_message = dbutils.widgets.get(\\\\\"event_message\\\\\")\nevent_message_dict = json.loads(event_message)\nprint(f\\\\\"event data={event_message_dict}\\\\\")\n`,\n    }).then(invoke =\u003e invoke.result),\n});\nconst thisJob = new databricks.Job(\"this\", {\n    name: me.then(me =\u003e `Pulumi MLflowWebhook Demo (${me.alphanumeric})`),\n    tasks: [{\n        taskKey: \"task1\",\n        newCluster: {\n            numWorkers: 1,\n            sparkVersion: latest.then(latest =\u003e latest.id),\n            nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n        },\n        notebookTask: {\n            notebookPath: _this.path,\n        },\n    }],\n});\nconst patForWebhook = new databricks.Token(\"pat_for_webhook\", {\n    comment: \"MLflow Webhook\",\n    lifetimeSeconds: 86400000,\n});\nconst job = new databricks.MlflowWebhook(\"job\", {\n    events: [\"TRANSITION_REQUEST_CREATED\"],\n    description: \"Databricks Job webhook trigger\",\n    status: \"ACTIVE\",\n    jobSpec: {\n        jobId: thisJob.id,\n        workspaceUrl: me.then(me =\u003e me.workspaceUrl),\n        accessToken: patForWebhook.tokenValue,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nme = databricks.get_current_user()\nlatest = databricks.get_spark_version()\nsmallest = databricks.get_node_type(local_disk=True)\nthis = databricks.Notebook(\"this\",\n    path=f\"{me.home}/MLFlowWebhook\",\n    language=\"PYTHON\",\n    content_base64=std.base64encode(input=\"\"\"import json\n \nevent_message = dbutils.widgets.get(\\\"event_message\\\")\nevent_message_dict = json.loads(event_message)\nprint(f\\\"event data={event_message_dict}\\\")\n\"\"\").result)\nthis_job = databricks.Job(\"this\",\n    name=f\"Pulumi MLflowWebhook Demo ({me.alphanumeric})\",\n    tasks=[{\n        \"task_key\": \"task1\",\n        \"new_cluster\": {\n            \"num_workers\": 1,\n            \"spark_version\": latest.id,\n            \"node_type_id\": smallest.id,\n        },\n        \"notebook_task\": {\n            \"notebook_path\": this.path,\n        },\n    }])\npat_for_webhook = databricks.Token(\"pat_for_webhook\",\n    comment=\"MLflow Webhook\",\n    lifetime_seconds=86400000)\njob = databricks.MlflowWebhook(\"job\",\n    events=[\"TRANSITION_REQUEST_CREATED\"],\n    description=\"Databricks Job webhook trigger\",\n    status=\"ACTIVE\",\n    job_spec={\n        \"job_id\": this_job.id,\n        \"workspace_url\": me.workspace_url,\n        \"access_token\": pat_for_webhook.token_value,\n    })\n```\n```csharp\nusing 
System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var latest = Databricks.GetSparkVersion.Invoke();\n\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var @this = new Databricks.Notebook(\"this\", new()\n    {\n        Path = $\"{me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Home)}/MLFlowWebhook\",\n        Language = \"PYTHON\",\n        ContentBase64 = Std.Base64encode.Invoke(new()\n        {\n            Input = @\"import json\n \nevent_message = dbutils.widgets.get(\\\"\"event_message\\\"\")\nevent_message_dict = json.loads(event_message)\nprint(f\\\"\"event data={event_message_dict}\\\"\")\n\",\n        }).Apply(invoke =\u003e invoke.Result),\n    });\n\n    var thisJob = new Databricks.Job(\"this\", new()\n    {\n        Name = $\"Pulumi MLflowWebhook Demo ({me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)})\",\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"task1\",\n                NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs\n                {\n                    NumWorkers = 1,\n                    SparkVersion = latest.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n                    NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n                },\n                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs\n                {\n                    NotebookPath = @this.Path,\n                },\n            },\n        },\n    });\n\n    var patForWebhook = new Databricks.Token(\"pat_for_webhook\", new()\n    {\n        Comment = \"MLflow Webhook\",\n        LifetimeSeconds = 86400000,\n    });\n\n    var job = new Databricks.MlflowWebhook(\"job\", new()\n    {\n        Events = new[]\n        {\n            \"TRANSITION_REQUEST_CREATED\",\n        },\n        Description = \"Databricks Job webhook trigger\",\n        Status = \"ACTIVE\",\n        JobSpec = new Databricks.Inputs.MlflowWebhookJobSpecArgs\n        {\n            JobId = thisJob.Id,\n            WorkspaceUrl = me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.WorkspaceUrl),\n            AccessToken = patForWebhook.TokenValue,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatest, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeBase64encode, err := std.Base64encode(ctx, \u0026std.Base64encodeArgs{\n\t\t\tInput: `import json\n \nevent_message = dbutils.widgets.get(\\\"event_message\\\")\nevent_message_dict = json.loads(event_message)\nprint(f\\\"event data={event_message_dict}\\\")\n`,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\tthis, err := databricks.NewNotebook(ctx, \"this\", \u0026databricks.NotebookArgs{\n\t\t\tPath:          pulumi.Sprintf(\"%v/MLFlowWebhook\", me.Home),\n\t\t\tLanguage:      pulumi.String(\"PYTHON\"),\n\t\t\tContentBase64: pulumi.String(invokeBase64encode.Result),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisJob, err := databricks.NewJob(ctx, \"this\", \u0026databricks.JobArgs{\n\t\t\tName: pulumi.Sprintf(\"Pulumi MLflowWebhook Demo (%v)\", me.Alphanumeric),\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"task1\"),\n\t\t\t\t\tNewCluster: \u0026databricks.JobTaskNewClusterArgs{\n\t\t\t\t\t\tNumWorkers:   pulumi.Int(1),\n\t\t\t\t\t\tSparkVersion: pulumi.String(latest.Id),\n\t\t\t\t\t\tNodeTypeId:   pulumi.String(smallest.Id),\n\t\t\t\t\t},\n\t\t\t\t\tNotebookTask: \u0026databricks.JobTaskNotebookTaskArgs{\n\t\t\t\t\t\tNotebookPath: this.Path,\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tpatForWebhook, err := databricks.NewToken(ctx, \"pat_for_webhook\", \u0026databricks.TokenArgs{\n\t\t\tComment:         pulumi.String(\"MLflow Webhook\"),\n\t\t\tLifetimeSeconds: pulumi.Int(86400000),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMlflowWebhook(ctx, \"job\", \u0026databricks.MlflowWebhookArgs{\n\t\t\tEvents: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"TRANSITION_REQUEST_CREATED\"),\n\t\t\t},\n\t\t\tDescription: pulumi.String(\"Databricks Job webhook trigger\"),\n\t\t\tStatus:      pulumi.String(\"ACTIVE\"),\n\t\t\tJobSpec: \u0026databricks.MlflowWebhookJobSpecArgs{\n\t\t\t\tJobId:        thisJob.ID(),\n\t\t\t\tWorkspaceUrl: pulumi.String(me.WorkspaceUrl),\n\t\t\t\tAccessToken:  patForWebhook.TokenValue,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.Notebook;\nimport com.pulumi.databricks.NotebookArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.Base64encodeArgs;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskNewClusterArgs;\nimport com.pulumi.databricks.inputs.JobTaskNotebookTaskArgs;\nimport com.pulumi.databricks.Token;\nimport com.pulumi.databricks.TokenArgs;\nimport com.pulumi.databricks.MlflowWebhook;\nimport com.pulumi.databricks.MlflowWebhookArgs;\nimport com.pulumi.databricks.inputs.MlflowWebhookJobSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        final var latest = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .build());\n\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            
.localDisk(true)\n            .build());\n\n        var this_ = new Notebook(\"this\", NotebookArgs.builder()\n            .path(String.format(\"%s/MLFlowWebhook\", me.home()))\n            .language(\"PYTHON\")\n            .contentBase64(StdFunctions.base64encode(Base64encodeArgs.builder()\n                .input(\"\"\"\nimport json\n \nevent_message = dbutils.widgets.get(\\\"event_message\\\")\nevent_message_dict = json.loads(event_message)\nprint(f\\\"event data={event_message_dict}\\\")\n                \"\"\")\n                .build()).result())\n            .build());\n\n        var thisJob = new Job(\"thisJob\", JobArgs.builder()\n            .name(String.format(\"Pulumi MLflowWebhook Demo (%s)\", me.alphanumeric()))\n            .tasks(JobTaskArgs.builder()\n                .taskKey(\"task1\")\n                .newCluster(JobTaskNewClusterArgs.builder()\n                    .numWorkers(1)\n                    .sparkVersion(latest.id())\n                    .nodeTypeId(smallest.id())\n                    .build())\n                .notebookTask(JobTaskNotebookTaskArgs.builder()\n                    .notebookPath(this_.path())\n                    .build())\n                .build())\n            .build());\n\n        var patForWebhook = new Token(\"patForWebhook\", TokenArgs.builder()\n            .comment(\"MLflow Webhook\")\n            .lifetimeSeconds(86400000)\n            .build());\n\n        var job = new MlflowWebhook(\"job\", MlflowWebhookArgs.builder()\n            .events(\"TRANSITION_REQUEST_CREATED\")\n            .description(\"Databricks Job webhook trigger\")\n            .status(\"ACTIVE\")\n            .jobSpec(MlflowWebhookJobSpecArgs.builder()\n                .jobId(thisJob.id())\n                .workspaceUrl(me.workspaceUrl())\n                .accessToken(patForWebhook.tokenValue())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Notebook\n    properties:\n      path: ${me.home}/MLFlowWebhook\n      language: PYTHON\n      contentBase64:\n        fn::invoke:\n          function: std:base64encode\n          arguments:\n            input: \"import json\\n \\nevent_message = dbutils.widgets.get(\\\\\\\"event_message\\\\\\\")\\nevent_message_dict = json.loads(event_message)\\nprint(f\\\\\\\"event data={event_message_dict}\\\\\\\")\\n\"\n          return: result\n  thisJob:\n    type: databricks:Job\n    name: this\n    properties:\n      name: Pulumi MLflowWebhook Demo (${me.alphanumeric})\n      tasks:\n        - taskKey: task1\n          newCluster:\n            numWorkers: 1\n            sparkVersion: ${latest.id}\n            nodeTypeId: ${smallest.id}\n          notebookTask:\n            notebookPath: ${this.path}\n  patForWebhook:\n    type: databricks:Token\n    name: pat_for_webhook\n    properties:\n      comment: MLflow Webhook\n      lifetimeSeconds: 8.64e+07\n  job:\n    type: databricks:MlflowWebhook\n    properties:\n      events:\n        - TRANSITION_REQUEST_CREATED\n      description: Databricks Job webhook trigger\n      status: ACTIVE\n      jobSpec:\n        jobId: ${thisJob.id}\n        workspaceUrl: ${me.workspaceUrl}\n        accessToken: ${patForWebhook.tokenValue}\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n  latest:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments: {}\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: 
true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### POSTing to a URL\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst url = new databricks.MlflowWebhook(\"url\", {\n    events: [\"TRANSITION_REQUEST_CREATED\"],\n    description: \"URL webhook trigger\",\n    httpUrlSpec: {\n        url: \"https://my_cool_host/webhook\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nurl = databricks.MlflowWebhook(\"url\",\n    events=[\"TRANSITION_REQUEST_CREATED\"],\n    description=\"URL webhook trigger\",\n    http_url_spec={\n        \"url\": \"https://my_cool_host/webhook\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var url = new Databricks.MlflowWebhook(\"url\", new()\n    {\n        Events = new[]\n        {\n            \"TRANSITION_REQUEST_CREATED\",\n        },\n        Description = \"URL webhook trigger\",\n        HttpUrlSpec = new Databricks.Inputs.MlflowWebhookHttpUrlSpecArgs\n        {\n            Url = \"https://my_cool_host/webhook\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMlflowWebhook(ctx, \"url\", \u0026databricks.MlflowWebhookArgs{\n\t\t\tEvents: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"TRANSITION_REQUEST_CREATED\"),\n\t\t\t},\n\t\t\tDescription: pulumi.String(\"URL webhook trigger\"),\n\t\t\tHttpUrlSpec: \u0026databricks.MlflowWebhookHttpUrlSpecArgs{\n\t\t\t\tUrl: pulumi.String(\"https://my_cool_host/webhook\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MlflowWebhook;\nimport com.pulumi.databricks.MlflowWebhookArgs;\nimport com.pulumi.databricks.inputs.MlflowWebhookHttpUrlSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var url = new MlflowWebhook(\"url\", MlflowWebhookArgs.builder()\n            .events(\"TRANSITION_REQUEST_CREATED\")\n            .description(\"URL webhook trigger\")\n            .httpUrlSpec(MlflowWebhookHttpUrlSpecArgs.builder()\n                .url(\"https://my_cool_host/webhook\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  url:\n    type: databricks:MlflowWebhook\n    properties:\n      events:\n        - TRANSITION_REQUEST_CREATED\n      description: URL webhook trigger\n      httpUrlSpec:\n        url: https://my_cool_host/webhook\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n* MLflow webhooks can be configured only by workspace admins.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" 
pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workpace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eto manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eto create [MLflow models](https://docs.databricks.com/applications/mlflow/models.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks Workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"description":{"type":"string","description":"Optional description of the MLflow webhook.\n"},"events":{"type":"array","items":{"type":"string"},"description":"The list of events that will trigger execution of Databricks job or POSTing to an URL, for example, `MODEL_VERSION_CREATED`, `MODEL_VERSION_TRANSITIONED_STAGE`, `TRANSITION_REQUEST_CREATED`, etc.  
Refer to the [Webhooks API documentation](https://docs.databricks.com/dev-tools/api/latest/mlflow.html#operation/create-registry-webhook) for a full list of supported events.\n\nConfiguration must include one of \u003cspan pulumi-lang-nodejs=\"`httpUrlSpec`\" pulumi-lang-dotnet=\"`HttpUrlSpec`\" pulumi-lang-go=\"`httpUrlSpec`\" pulumi-lang-python=\"`http_url_spec`\" pulumi-lang-yaml=\"`httpUrlSpec`\" pulumi-lang-java=\"`httpUrlSpec`\"\u003e`http_url_spec`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`jobSpec`\" pulumi-lang-dotnet=\"`JobSpec`\" pulumi-lang-go=\"`jobSpec`\" pulumi-lang-python=\"`job_spec`\" pulumi-lang-yaml=\"`jobSpec`\" pulumi-lang-java=\"`jobSpec`\"\u003e`job_spec`\u003c/span\u003e blocks, but not both.\n"},"httpUrlSpec":{"$ref":"#/types/databricks:index/MlflowWebhookHttpUrlSpec:MlflowWebhookHttpUrlSpec"},"jobSpec":{"$ref":"#/types/databricks:index/MlflowWebhookJobSpec:MlflowWebhookJobSpec"},"modelName":{"type":"string","description":"Name of MLflow model for which the webhook will be created. If the model name is not specified, a registry-wide webhook is created that listens for the specified events across all versions of all registered models.\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowWebhookProviderConfig:MlflowWebhookProviderConfig"},"status":{"type":"string","description":"Optional status of webhook. Possible values are `ACTIVE`, `TEST_MODE`, `DISABLED`. Default is `ACTIVE`.\n"}},"required":["events"],"inputProperties":{"description":{"type":"string","description":"Optional description of the MLflow webhook.\n"},"events":{"type":"array","items":{"type":"string"},"description":"The list of events that will trigger execution of a Databricks job or POSTing to a URL, for example, `MODEL_VERSION_CREATED`, `MODEL_VERSION_TRANSITIONED_STAGE`, `TRANSITION_REQUEST_CREATED`, etc.  Refer to the [Webhooks API documentation](https://docs.databricks.com/dev-tools/api/latest/mlflow.html#operation/create-registry-webhook) for a full list of supported events.\n\nConfiguration must include one of \u003cspan pulumi-lang-nodejs=\"`httpUrlSpec`\" pulumi-lang-dotnet=\"`HttpUrlSpec`\" pulumi-lang-go=\"`httpUrlSpec`\" pulumi-lang-python=\"`http_url_spec`\" pulumi-lang-yaml=\"`httpUrlSpec`\" pulumi-lang-java=\"`httpUrlSpec`\"\u003e`http_url_spec`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`jobSpec`\" pulumi-lang-dotnet=\"`JobSpec`\" pulumi-lang-go=\"`jobSpec`\" pulumi-lang-python=\"`job_spec`\" pulumi-lang-yaml=\"`jobSpec`\" pulumi-lang-java=\"`jobSpec`\"\u003e`job_spec`\u003c/span\u003e blocks, but not both.\n"},"httpUrlSpec":{"$ref":"#/types/databricks:index/MlflowWebhookHttpUrlSpec:MlflowWebhookHttpUrlSpec"},"jobSpec":{"$ref":"#/types/databricks:index/MlflowWebhookJobSpec:MlflowWebhookJobSpec"},"modelName":{"type":"string","description":"Name of MLflow model for which the webhook will be created. If the model name is not specified, a registry-wide webhook is created that listens for the specified events across all versions of all registered models.\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowWebhookProviderConfig:MlflowWebhookProviderConfig"},"status":{"type":"string","description":"Optional status of webhook. Possible values are `ACTIVE`, `TEST_MODE`, `DISABLED`. 
Default is `ACTIVE`.\n"}},"requiredInputs":["events"],"stateInputs":{"description":"Input properties used for looking up and filtering MlflowWebhook resources.\n","properties":{"description":{"type":"string","description":"Optional description of the MLflow webhook.\n"},"events":{"type":"array","items":{"type":"string"},"description":"The list of events that will trigger execution of a Databricks job or POSTing to a URL, for example, `MODEL_VERSION_CREATED`, `MODEL_VERSION_TRANSITIONED_STAGE`, `TRANSITION_REQUEST_CREATED`, etc.  Refer to the [Webhooks API documentation](https://docs.databricks.com/dev-tools/api/latest/mlflow.html#operation/create-registry-webhook) for a full list of supported events.\n\nConfiguration must include one of \u003cspan pulumi-lang-nodejs=\"`httpUrlSpec`\" pulumi-lang-dotnet=\"`HttpUrlSpec`\" pulumi-lang-go=\"`httpUrlSpec`\" pulumi-lang-python=\"`http_url_spec`\" pulumi-lang-yaml=\"`httpUrlSpec`\" pulumi-lang-java=\"`httpUrlSpec`\"\u003e`http_url_spec`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`jobSpec`\" pulumi-lang-dotnet=\"`JobSpec`\" pulumi-lang-go=\"`jobSpec`\" pulumi-lang-python=\"`job_spec`\" pulumi-lang-yaml=\"`jobSpec`\" pulumi-lang-java=\"`jobSpec`\"\u003e`job_spec`\u003c/span\u003e blocks, but not both.\n"},"httpUrlSpec":{"$ref":"#/types/databricks:index/MlflowWebhookHttpUrlSpec:MlflowWebhookHttpUrlSpec"},"jobSpec":{"$ref":"#/types/databricks:index/MlflowWebhookJobSpec:MlflowWebhookJobSpec"},"modelName":{"type":"string","description":"Name of MLflow model for which the webhook will be created. If the model name is not specified, a registry-wide webhook is created that listens for the specified events across all versions of all registered models.\n"},"providerConfig":{"$ref":"#/types/databricks:index/MlflowWebhookProviderConfig:MlflowWebhookProviderConfig"},"status":{"type":"string","description":"Optional status of webhook. Possible values are `ACTIVE`, `TEST_MODE`, `DISABLED`. Default is `ACTIVE`.\n"}},"type":"object"}},"databricks:index/modelServing:ModelServing":{"description":"This resource allows you to manage [Model Serving](https://docs.databricks.com/machine-learning/model-serving/index.html) endpoints in Databricks, including custom models, external models, and foundation models. 
For newer foundation models, including Llama 4, please use the\u003cspan pulumi-lang-nodejs=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-dotnet=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-go=\" ModelServingProvisionedThroughput \" pulumi-lang-python=\" ModelServingProvisionedThroughput \" pulumi-lang-yaml=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-java=\" databricks.ModelServingProvisionedThroughput \"\u003e databricks.ModelServingProvisionedThroughput \u003c/span\u003eresource.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e If you replace \u003cspan pulumi-lang-nodejs=\"`servedModels`\" pulumi-lang-dotnet=\"`ServedModels`\" pulumi-lang-go=\"`servedModels`\" pulumi-lang-python=\"`served_models`\" pulumi-lang-yaml=\"`servedModels`\" pulumi-lang-java=\"`servedModels`\"\u003e`served_models`\u003c/span\u003e with \u003cspan pulumi-lang-nodejs=\"`servedEntities`\" pulumi-lang-dotnet=\"`ServedEntities`\" pulumi-lang-go=\"`servedEntities`\" pulumi-lang-python=\"`served_entities`\" pulumi-lang-yaml=\"`servedEntities`\" pulumi-lang-java=\"`servedEntities`\"\u003e`served_entities`\u003c/span\u003e in an existing serving endpoint, the serving endpoint will briefly go into an update state (~30 seconds) and increment the config version.\n\n## Example Usage\n\nCreating a CPU serving endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ModelServing(\"this\", {\n    name: \"ads-serving-endpoint\",\n    config: {\n        servedEntities: [\n            {\n                name: \"prod_model\",\n                entityName: \"ads-model\",\n                entityVersion: \"2\",\n                workloadSize: \"Small\",\n                scaleToZeroEnabled: true,\n            },\n            {\n                name: \"candidate_model\",\n                entityName: \"ads-model\",\n                entityVersion: \"4\",\n                workloadSize: \"Small\",\n                scaleToZeroEnabled: false,\n            },\n        ],\n        trafficConfig: {\n            routes: [\n                {\n                    servedModelName: \"prod_model\",\n                    trafficPercentage: 90,\n                },\n                {\n                    servedModelName: \"candidate_model\",\n                    trafficPercentage: 10,\n                },\n            ],\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ModelServing(\"this\",\n    name=\"ads-serving-endpoint\",\n    config={\n        \"served_entities\": [\n            {\n                \"name\": \"prod_model\",\n                \"entity_name\": \"ads-model\",\n                \"entity_version\": \"2\",\n                \"workload_size\": \"Small\",\n                \"scale_to_zero_enabled\": True,\n            },\n            {\n                \"name\": \"candidate_model\",\n                \"entity_name\": \"ads-model\",\n                \"entity_version\": \"4\",\n                \"workload_size\": \"Small\",\n                \"scale_to_zero_enabled\": False,\n            },\n        ],\n        \"traffic_config\": {\n            \"routes\": [\n                {\n                    \"served_model_name\": \"prod_model\",\n                    \"traffic_percentage\": 90,\n                },\n                {\n                    
\"served_model_name\": \"candidate_model\",\n                    \"traffic_percentage\": 10,\n                },\n            ],\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ModelServing(\"this\", new()\n    {\n        Name = \"ads-serving-endpoint\",\n        Config = new Databricks.Inputs.ModelServingConfigArgs\n        {\n            ServedEntities = new[]\n            {\n                new Databricks.Inputs.ModelServingConfigServedEntityArgs\n                {\n                    Name = \"prod_model\",\n                    EntityName = \"ads-model\",\n                    EntityVersion = \"2\",\n                    WorkloadSize = \"Small\",\n                    ScaleToZeroEnabled = true,\n                },\n                new Databricks.Inputs.ModelServingConfigServedEntityArgs\n                {\n                    Name = \"candidate_model\",\n                    EntityName = \"ads-model\",\n                    EntityVersion = \"4\",\n                    WorkloadSize = \"Small\",\n                    ScaleToZeroEnabled = false,\n                },\n            },\n            TrafficConfig = new Databricks.Inputs.ModelServingConfigTrafficConfigArgs\n            {\n                Routes = new[]\n                {\n                    new Databricks.Inputs.ModelServingConfigTrafficConfigRouteArgs\n                    {\n                        ServedModelName = \"prod_model\",\n                        TrafficPercentage = 90,\n                    },\n                    new Databricks.Inputs.ModelServingConfigTrafficConfigRouteArgs\n                    {\n                        ServedModelName = \"candidate_model\",\n                        TrafficPercentage = 10,\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewModelServing(ctx, \"this\", \u0026databricks.ModelServingArgs{\n\t\t\tName: pulumi.String(\"ads-serving-endpoint\"),\n\t\t\tConfig: \u0026databricks.ModelServingConfigArgs{\n\t\t\t\tServedEntities: databricks.ModelServingConfigServedEntityArray{\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedEntityArgs{\n\t\t\t\t\t\tName:               pulumi.String(\"prod_model\"),\n\t\t\t\t\t\tEntityName:         pulumi.String(\"ads-model\"),\n\t\t\t\t\t\tEntityVersion:      pulumi.String(\"2\"),\n\t\t\t\t\t\tWorkloadSize:       pulumi.String(\"Small\"),\n\t\t\t\t\t\tScaleToZeroEnabled: pulumi.Bool(true),\n\t\t\t\t\t},\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedEntityArgs{\n\t\t\t\t\t\tName:               pulumi.String(\"candidate_model\"),\n\t\t\t\t\t\tEntityName:         pulumi.String(\"ads-model\"),\n\t\t\t\t\t\tEntityVersion:      pulumi.String(\"4\"),\n\t\t\t\t\t\tWorkloadSize:       pulumi.String(\"Small\"),\n\t\t\t\t\t\tScaleToZeroEnabled: pulumi.Bool(false),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tTrafficConfig: \u0026databricks.ModelServingConfigTrafficConfigArgs{\n\t\t\t\t\tRoutes: databricks.ModelServingConfigTrafficConfigRouteArray{\n\t\t\t\t\t\t\u0026databricks.ModelServingConfigTrafficConfigRouteArgs{\n\t\t\t\t\t\t\tServedModelName:   pulumi.String(\"prod_model\"),\n\t\t\t\t\t\t\tTrafficPercentage: 
pulumi.Int(90),\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\u0026databricks.ModelServingConfigTrafficConfigRouteArgs{\n\t\t\t\t\t\t\tServedModelName:   pulumi.String(\"candidate_model\"),\n\t\t\t\t\t\t\tTrafficPercentage: pulumi.Int(10),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ModelServing;\nimport com.pulumi.databricks.ModelServingArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigTrafficConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ModelServing(\"this\", ModelServingArgs.builder()\n            .name(\"ads-serving-endpoint\")\n            .config(ModelServingConfigArgs.builder()\n                .servedEntities(                \n                    ModelServingConfigServedEntityArgs.builder()\n                        .name(\"prod_model\")\n                        .entityName(\"ads-model\")\n                        .entityVersion(\"2\")\n                        .workloadSize(\"Small\")\n                        .scaleToZeroEnabled(true)\n                        .build(),\n                    ModelServingConfigServedEntityArgs.builder()\n                        .name(\"candidate_model\")\n                        .entityName(\"ads-model\")\n                        .entityVersion(\"4\")\n                        .workloadSize(\"Small\")\n                        .scaleToZeroEnabled(false)\n                        .build())\n                .trafficConfig(ModelServingConfigTrafficConfigArgs.builder()\n                    .routes(                    \n                        ModelServingConfigTrafficConfigRouteArgs.builder()\n                            .servedModelName(\"prod_model\")\n                            .trafficPercentage(90)\n                            .build(),\n                        ModelServingConfigTrafficConfigRouteArgs.builder()\n                            .servedModelName(\"candidate_model\")\n                            .trafficPercentage(10)\n                            .build())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ModelServing\n    properties:\n      name: ads-serving-endpoint\n      config:\n        servedEntities:\n          - name: prod_model\n            entityName: ads-model\n            entityVersion: '2'\n            workloadSize: Small\n            scaleToZeroEnabled: true\n          - name: candidate_model\n            entityName: ads-model\n            entityVersion: '4'\n            workloadSize: Small\n            scaleToZeroEnabled: false\n        trafficConfig:\n          routes:\n            - servedModelName: prod_model\n              trafficPercentage: 90\n            - servedModelName: candidate_model\n              trafficPercentage: 10\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating a Foundation Model endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as 
databricks from \"@pulumi/databricks\";\n\nconst llama = new databricks.ModelServing(\"llama\", {\n    name: \"llama_3_2_3b_instruct\",\n    aiGateway: {\n        usageTrackingConfig: {\n            enabled: true,\n        },\n    },\n    config: {\n        servedEntities: [{\n            name: \"meta_llama_v3_2_3b_instruct-3\",\n            entityName: \"system.ai.llama_v3_2_3b_instruct\",\n            entityVersion: \"2\",\n            scaleToZeroEnabled: true,\n            maxProvisionedThroughput: 44000,\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nllama = databricks.ModelServing(\"llama\",\n    name=\"llama_3_2_3b_instruct\",\n    ai_gateway={\n        \"usage_tracking_config\": {\n            \"enabled\": True,\n        },\n    },\n    config={\n        \"served_entities\": [{\n            \"name\": \"meta_llama_v3_2_3b_instruct-3\",\n            \"entity_name\": \"system.ai.llama_v3_2_3b_instruct\",\n            \"entity_version\": \"2\",\n            \"scale_to_zero_enabled\": True,\n            \"max_provisioned_throughput\": 44000,\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var llama = new Databricks.ModelServing(\"llama\", new()\n    {\n        Name = \"llama_3_2_3b_instruct\",\n        AiGateway = new Databricks.Inputs.ModelServingAiGatewayArgs\n        {\n            UsageTrackingConfig = new Databricks.Inputs.ModelServingAiGatewayUsageTrackingConfigArgs\n            {\n                Enabled = true,\n            },\n        },\n        Config = new Databricks.Inputs.ModelServingConfigArgs\n        {\n            ServedEntities = new[]\n            {\n                new Databricks.Inputs.ModelServingConfigServedEntityArgs\n                {\n                    Name = \"meta_llama_v3_2_3b_instruct-3\",\n                    EntityName = \"system.ai.llama_v3_2_3b_instruct\",\n                    EntityVersion = \"2\",\n                    ScaleToZeroEnabled = true,\n                    MaxProvisionedThroughput = 44000,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewModelServing(ctx, \"llama\", \u0026databricks.ModelServingArgs{\n\t\t\tName: pulumi.String(\"llama_3_2_3b_instruct\"),\n\t\t\tAiGateway: \u0026databricks.ModelServingAiGatewayArgs{\n\t\t\t\tUsageTrackingConfig: \u0026databricks.ModelServingAiGatewayUsageTrackingConfigArgs{\n\t\t\t\t\tEnabled: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t\tConfig: \u0026databricks.ModelServingConfigArgs{\n\t\t\t\tServedEntities: databricks.ModelServingConfigServedEntityArray{\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedEntityArgs{\n\t\t\t\t\t\tName:                     pulumi.String(\"meta_llama_v3_2_3b_instruct-3\"),\n\t\t\t\t\t\tEntityName:               pulumi.String(\"system.ai.llama_v3_2_3b_instruct\"),\n\t\t\t\t\t\tEntityVersion:            pulumi.String(\"2\"),\n\t\t\t\t\t\tScaleToZeroEnabled:       pulumi.Bool(true),\n\t\t\t\t\t\tMaxProvisionedThroughput: pulumi.Int(44000),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport 
com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ModelServing;\nimport com.pulumi.databricks.ModelServingArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayUsageTrackingConfigArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var llama = new ModelServing(\"llama\", ModelServingArgs.builder()\n            .name(\"llama_3_2_3b_instruct\")\n            .aiGateway(ModelServingAiGatewayArgs.builder()\n                .usageTrackingConfig(ModelServingAiGatewayUsageTrackingConfigArgs.builder()\n                    .enabled(true)\n                    .build())\n                .build())\n            .config(ModelServingConfigArgs.builder()\n                .servedEntities(ModelServingConfigServedEntityArgs.builder()\n                    .name(\"meta_llama_v3_2_3b_instruct-3\")\n                    .entityName(\"system.ai.llama_v3_2_3b_instruct\")\n                    .entityVersion(\"2\")\n                    .scaleToZeroEnabled(true)\n                    .maxProvisionedThroughput(44000)\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  llama:\n    type: databricks:ModelServing\n    properties:\n      name: llama_3_2_3b_instruct\n      aiGateway:\n        usageTrackingConfig:\n          enabled: true\n      config:\n        servedEntities:\n          - name: meta_llama_v3_2_3b_instruct-3\n            entityName: system.ai.llama_v3_2_3b_instruct\n            entityVersion: '2'\n            scaleToZeroEnabled: true\n            maxProvisionedThroughput: 44000\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating an External Model endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst gpt4o = new databricks.ModelServing(\"gpt_4o\", {\n    name: \"gpt-4o-mini\",\n    aiGateway: {\n        usageTrackingConfig: {\n            enabled: true,\n        },\n        rateLimits: [{\n            calls: 10,\n            key: \"endpoint\",\n            renewalPeriod: \"minute\",\n        }],\n        inferenceTableConfig: {\n            enabled: true,\n            tableNamePrefix: \"gpt-4o-mini\",\n            catalogName: \"ml\",\n            schemaName: \"ai_gateway\",\n        },\n        guardrails: {\n            input: {\n                invalidKeywords: [\"SuperSecretProject\"],\n                pii: {\n                    behavior: \"BLOCK\",\n                },\n            },\n            output: {\n                pii: {\n                    behavior: \"BLOCK\",\n                },\n            },\n        },\n    },\n    config: {\n        servedEntities: [{\n            name: \"gpt-4o-mini\",\n            externalModel: {\n                name: \"gpt-4o-mini\",\n                provider: \"openai\",\n                task: \"llm/v1/chat\",\n                openaiConfig: {\n                    openaiApiKey: \"{{secrets/llm_scope/openai_api_key}}\",\n                },\n            },\n        }],\n    },\n});\n```\n```python\nimport 
pulumi\nimport pulumi_databricks as databricks\n\ngpt4o = databricks.ModelServing(\"gpt_4o\",\n    name=\"gpt-4o-mini\",\n    ai_gateway={\n        \"usage_tracking_config\": {\n            \"enabled\": True,\n        },\n        \"rate_limits\": [{\n            \"calls\": 10,\n            \"key\": \"endpoint\",\n            \"renewal_period\": \"minute\",\n        }],\n        \"inference_table_config\": {\n            \"enabled\": True,\n            \"table_name_prefix\": \"gpt-4o-mini\",\n            \"catalog_name\": \"ml\",\n            \"schema_name\": \"ai_gateway\",\n        },\n        \"guardrails\": {\n            \"input\": {\n                \"invalid_keywords\": [\"SuperSecretProject\"],\n                \"pii\": {\n                    \"behavior\": \"BLOCK\",\n                },\n            },\n            \"output\": {\n                \"pii\": {\n                    \"behavior\": \"BLOCK\",\n                },\n            },\n        },\n    },\n    config={\n        \"served_entities\": [{\n            \"name\": \"gpt-4o-mini\",\n            \"external_model\": {\n                \"name\": \"gpt-4o-mini\",\n                \"provider\": \"openai\",\n                \"task\": \"llm/v1/chat\",\n                \"openai_config\": {\n                    \"openai_api_key\": \"{{secrets/llm_scope/openai_api_key}}\",\n                },\n            },\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var gpt4o = new Databricks.ModelServing(\"gpt_4o\", new()\n    {\n        Name = \"gpt-4o-mini\",\n        AiGateway = new Databricks.Inputs.ModelServingAiGatewayArgs\n        {\n            UsageTrackingConfig = new Databricks.Inputs.ModelServingAiGatewayUsageTrackingConfigArgs\n            {\n                Enabled = true,\n            },\n            RateLimits = new[]\n            {\n                new Databricks.Inputs.ModelServingAiGatewayRateLimitArgs\n                {\n                    Calls = 10,\n                    Key = \"endpoint\",\n                    RenewalPeriod = \"minute\",\n                },\n            },\n            InferenceTableConfig = new Databricks.Inputs.ModelServingAiGatewayInferenceTableConfigArgs\n            {\n                Enabled = true,\n                TableNamePrefix = \"gpt-4o-mini\",\n                CatalogName = \"ml\",\n                SchemaName = \"ai_gateway\",\n            },\n            Guardrails = new Databricks.Inputs.ModelServingAiGatewayGuardrailsArgs\n            {\n                Input = new Databricks.Inputs.ModelServingAiGatewayGuardrailsInputArgs\n                {\n                    InvalidKeywords = new[]\n                    {\n                        \"SuperSecretProject\",\n                    },\n                    Pii = new Databricks.Inputs.ModelServingAiGatewayGuardrailsInputPiiArgs\n                    {\n                        Behavior = \"BLOCK\",\n                    },\n                },\n                Output = new Databricks.Inputs.ModelServingAiGatewayGuardrailsOutputArgs\n                {\n                    Pii = new Databricks.Inputs.ModelServingAiGatewayGuardrailsOutputPiiArgs\n                    {\n                        Behavior = \"BLOCK\",\n                    },\n                },\n            },\n        },\n        Config = new Databricks.Inputs.ModelServingConfigArgs\n        {\n            ServedEntities = new[]\n       
     {\n                new Databricks.Inputs.ModelServingConfigServedEntityArgs\n                {\n                    Name = \"gpt-4o-mini\",\n                    ExternalModel = new Databricks.Inputs.ModelServingConfigServedEntityExternalModelArgs\n                    {\n                        Name = \"gpt-4o-mini\",\n                        Provider = \"openai\",\n                        Task = \"llm/v1/chat\",\n                        OpenaiConfig = new Databricks.Inputs.ModelServingConfigServedEntityExternalModelOpenaiConfigArgs\n                        {\n                            OpenaiApiKey = \"{{secrets/llm_scope/openai_api_key}}\",\n                        },\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewModelServing(ctx, \"gpt_4o\", \u0026databricks.ModelServingArgs{\n\t\t\tName: pulumi.String(\"gpt-4o-mini\"),\n\t\t\tAiGateway: \u0026databricks.ModelServingAiGatewayArgs{\n\t\t\t\tUsageTrackingConfig: \u0026databricks.ModelServingAiGatewayUsageTrackingConfigArgs{\n\t\t\t\t\tEnabled: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t\tRateLimits: databricks.ModelServingAiGatewayRateLimitArray{\n\t\t\t\t\t\u0026databricks.ModelServingAiGatewayRateLimitArgs{\n\t\t\t\t\t\tCalls:         pulumi.Int(10),\n\t\t\t\t\t\tKey:           pulumi.String(\"endpoint\"),\n\t\t\t\t\t\tRenewalPeriod: pulumi.String(\"minute\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\tInferenceTableConfig: \u0026databricks.ModelServingAiGatewayInferenceTableConfigArgs{\n\t\t\t\t\tEnabled:         pulumi.Bool(true),\n\t\t\t\t\tTableNamePrefix: pulumi.String(\"gpt-4o-mini\"),\n\t\t\t\t\tCatalogName:     pulumi.String(\"ml\"),\n\t\t\t\t\tSchemaName:      pulumi.String(\"ai_gateway\"),\n\t\t\t\t},\n\t\t\t\tGuardrails: \u0026databricks.ModelServingAiGatewayGuardrailsArgs{\n\t\t\t\t\tInput: \u0026databricks.ModelServingAiGatewayGuardrailsInputTypeArgs{\n\t\t\t\t\t\tInvalidKeywords: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"SuperSecretProject\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t\tPii: \u0026databricks.ModelServingAiGatewayGuardrailsInputPiiArgs{\n\t\t\t\t\t\t\tBehavior: pulumi.String(\"BLOCK\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t\tOutput: \u0026databricks.ModelServingAiGatewayGuardrailsOutputTypeArgs{\n\t\t\t\t\t\tPii: \u0026databricks.ModelServingAiGatewayGuardrailsOutputPiiArgs{\n\t\t\t\t\t\t\tBehavior: pulumi.String(\"BLOCK\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tConfig: \u0026databricks.ModelServingConfigArgs{\n\t\t\t\tServedEntities: databricks.ModelServingConfigServedEntityArray{\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedEntityArgs{\n\t\t\t\t\t\tName: pulumi.String(\"gpt-4o-mini\"),\n\t\t\t\t\t\tExternalModel: \u0026databricks.ModelServingConfigServedEntityExternalModelArgs{\n\t\t\t\t\t\t\tName:     pulumi.String(\"gpt-4o-mini\"),\n\t\t\t\t\t\t\tProvider: pulumi.String(\"openai\"),\n\t\t\t\t\t\t\tTask:     pulumi.String(\"llm/v1/chat\"),\n\t\t\t\t\t\t\tOpenaiConfig: \u0026databricks.ModelServingConfigServedEntityExternalModelOpenaiConfigArgs{\n\t\t\t\t\t\t\t\tOpenaiApiKey: pulumi.String(\"{{secrets/llm_scope/openai_api_key}}\"),\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage 
generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ModelServing;\nimport com.pulumi.databricks.ModelServingArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayUsageTrackingConfigArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayInferenceTableConfigArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayGuardrailsArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayGuardrailsInputArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayGuardrailsInputPiiArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayGuardrailsOutputArgs;\nimport com.pulumi.databricks.inputs.ModelServingAiGatewayGuardrailsOutputPiiArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var gpt4o = new ModelServing(\"gpt4o\", ModelServingArgs.builder()\n            .name(\"gpt-4o-mini\")\n            .aiGateway(ModelServingAiGatewayArgs.builder()\n                .usageTrackingConfig(ModelServingAiGatewayUsageTrackingConfigArgs.builder()\n                    .enabled(true)\n                    .build())\n                .rateLimits(ModelServingAiGatewayRateLimitArgs.builder()\n                    .calls(10)\n                    .key(\"endpoint\")\n                    .renewalPeriod(\"minute\")\n                    .build())\n                .inferenceTableConfig(ModelServingAiGatewayInferenceTableConfigArgs.builder()\n                    .enabled(true)\n                    .tableNamePrefix(\"gpt-4o-mini\")\n                    .catalogName(\"ml\")\n                    .schemaName(\"ai_gateway\")\n                    .build())\n                .guardrails(ModelServingAiGatewayGuardrailsArgs.builder()\n                    .input(ModelServingAiGatewayGuardrailsInputArgs.builder()\n                        .invalidKeywords(\"SuperSecretProject\")\n                        .pii(ModelServingAiGatewayGuardrailsInputPiiArgs.builder()\n                            .behavior(\"BLOCK\")\n                            .build())\n                        .build())\n                    .output(ModelServingAiGatewayGuardrailsOutputArgs.builder()\n                        .pii(ModelServingAiGatewayGuardrailsOutputPiiArgs.builder()\n                            .behavior(\"BLOCK\")\n                            .build())\n                        .build())\n                    .build())\n                .build())\n            .config(ModelServingConfigArgs.builder()\n                .servedEntities(ModelServingConfigServedEntityArgs.builder()\n                    .name(\"gpt-4o-mini\")\n                    .externalModel(ModelServingConfigServedEntityExternalModelArgs.builder()\n                        .name(\"gpt-4o-mini\")\n                        .provider(\"openai\")\n                        .task(\"llm/v1/chat\")\n                        .openaiConfig(ModelServingConfigServedEntityExternalModelOpenaiConfigArgs.builder()\n                            .openaiApiKey(\"{{secrets/llm_scope/openai_api_key}}\")\n                            .build())\n                        .build())\n                    
.build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  gpt4o:\n    type: databricks:ModelServing\n    name: gpt_4o\n    properties:\n      name: gpt-4o-mini\n      aiGateway:\n        usageTrackingConfig:\n          enabled: true\n        rateLimits:\n          - calls: 10\n            key: endpoint\n            renewalPeriod: minute\n        inferenceTableConfig:\n          enabled: true\n          tableNamePrefix: gpt-4o-mini\n          catalogName: ml\n          schemaName: ai_gateway\n        guardrails:\n          input:\n            invalidKeywords:\n              - SuperSecretProject\n            pii:\n              behavior: BLOCK\n          output:\n            pii:\n              behavior: BLOCK\n      config:\n        servedEntities:\n          - name: gpt-4o-mini\n            externalModel:\n              name: gpt-4o-mini\n              provider: openai\n              task: llm/v1/chat\n              openaiConfig:\n                openaiApiKey: '{{secrets/llm_scope/openai_api_key}}'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Query* or *View* individual serving endpoints.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-dotnet=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-go=\" ModelServingProvisionedThroughput \" pulumi-lang-python=\" ModelServingProvisionedThroughput \" pulumi-lang-yaml=\" databricks.ModelServingProvisionedThroughput \" pulumi-lang-java=\" databricks.ModelServingProvisionedThroughput \"\u003e databricks.ModelServingProvisionedThroughput \u003c/span\u003eto create [Foundation Model provisioned throughput](https://docs.databricks.com/aws/en/machine-learning/foundation-model-apis/deploy-prov-throughput-foundation-model-apis) endpoints in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eto create [Models in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eto create models in 
the [workspace model registry](https://docs.databricks.com/en/mlflow/model-registry.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks Workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n","properties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingAiGateway:ModelServingAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. *Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingConfig:ModelServingConfig","description":"The model serving endpoint configuration. This is optional and can be added and modified after creation. If \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e was provided in a previous apply but is not provided in the current apply, no change to the model serving endpoint will occur. To recreate the model serving endpoint without the \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e block, the model serving endpoint must be destroyed and recreated.\n"},"description":{"type":"string","description":"The description of the model serving endpoint.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingEmailNotifications:ModelServingEmailNotifications","description":"A block with Email notification setting.\n"},"endpointUrl":{"type":"string","description":"Invocation url of the endpoint.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProviderConfig:ModelServingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingRateLimit:ModelServingRateLimit"},"description":"A list of rate limit blocks to be applied to the serving endpoint. *Note: only external and foundation model endpoints are supported as of now.*\n","deprecationMessage":"Please use AI Gateway to manage rate limits."},"routeOptimized":{"type":"boolean","description":"A boolean enabling route optimization for the endpoint. *Note: only available for custom models.*\n"},"servingEndpointId":{"type":"string","description":"Unique identifier of the serving endpoint primarily used to set permissions and refer to this instance for other operations.\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingTag:ModelServingTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"required":["config","endpointUrl","name","servingEndpointId"],"inputProperties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingAiGateway:ModelServingAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. *Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingConfig:ModelServingConfig","description":"The model serving endpoint configuration. This is optional and can be added and modified after creation. If \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e was provided in a previous apply but is not provided in the current apply, no change to the model serving endpoint will occur. To recreate the model serving endpoint without the \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e block, the model serving endpoint must be destroyed and recreated.\n"},"description":{"type":"string","description":"The description of the model serving endpoint.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingEmailNotifications:ModelServingEmailNotifications","description":"A block with Email notification setting.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProviderConfig:ModelServingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingRateLimit:ModelServingRateLimit"},"description":"A list of rate limit blocks to be applied to the serving endpoint. 
*Note: only external and foundation model endpoints are supported as of now.*\n","deprecationMessage":"Please use AI Gateway to manage rate limits."},"routeOptimized":{"type":"boolean","description":"A boolean enabling route optimization for the endpoint. *Note: only available for custom models.*\n","willReplaceOnChanges":true},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingTag:ModelServingTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering ModelServing resources.\n","properties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingAiGateway:ModelServingAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. *Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingConfig:ModelServingConfig","description":"The model serving endpoint configuration. This is optional and can be added and modified after creation. If \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e was provided in a previous apply but is not provided in the current apply, no change to the model serving endpoint will occur. To recreate the model serving endpoint without the \u003cspan pulumi-lang-nodejs=\"`config`\" pulumi-lang-dotnet=\"`Config`\" pulumi-lang-go=\"`config`\" pulumi-lang-python=\"`config`\" pulumi-lang-yaml=\"`config`\" pulumi-lang-java=\"`config`\"\u003e`config`\u003c/span\u003e block, the model serving endpoint must be destroyed and recreated.\n"},"description":{"type":"string","description":"The description of the model serving endpoint.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingEmailNotifications:ModelServingEmailNotifications","description":"A block with Email notification setting.\n"},"endpointUrl":{"type":"string","description":"Invocation url of the endpoint.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProviderConfig:ModelServingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"rateLimits":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingRateLimit:ModelServingRateLimit"},"description":"A list of rate limit blocks to be applied to the serving endpoint. *Note: only external and foundation model endpoints are supported as of now.*\n","deprecationMessage":"Please use AI Gateway to manage rate limits."},"routeOptimized":{"type":"boolean","description":"A boolean enabling route optimization for the endpoint. 
*Note: only available for custom models.*\n","willReplaceOnChanges":true},"servingEndpointId":{"type":"string","description":"Unique identifier of the serving endpoint primarily used to set permissions and refer to this instance for other operations.\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingTag:ModelServingTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"type":"object"}},"databricks:index/modelServingProvisionedThroughput:ModelServingProvisionedThroughput":{"description":"This resource allows you to manage [Foundation Model provisioned throughput](https://docs.databricks.com/aws/en/machine-learning/foundation-model-apis/deploy-prov-throughput-foundation-model-apis) endpoints in Databricks.\n\n\u003e This resource is currently in private preview, and only available for enrolled customers.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nCreating a Foundation Model provisioned throughput endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst llama = new databricks.ModelServingProvisionedThroughput(\"llama\", {\n    aiGateway: {\n        usageTrackingConfig: {\n            enabled: true,\n        },\n    },\n    config: {\n        servedEntities: [{\n            entityName: \"system.ai.llama-4-maverick\",\n            entityVersion: \"1\",\n            provisionedModelUnits: 100,\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nllama = databricks.ModelServingProvisionedThroughput(\"llama\",\n    ai_gateway={\n        \"usage_tracking_config\": {\n            \"enabled\": True,\n        },\n    },\n    config={\n        \"served_entities\": [{\n            \"entity_name\": \"system.ai.llama-4-maverick\",\n            \"entity_version\": \"1\",\n            \"provisioned_model_units\": 100,\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var llama = new Databricks.ModelServingProvisionedThroughput(\"llama\", new()\n    {\n        AiGateway = new Databricks.Inputs.ModelServingProvisionedThroughputAiGatewayArgs\n        {\n            UsageTrackingConfig = new Databricks.Inputs.ModelServingProvisionedThroughputAiGatewayUsageTrackingConfigArgs\n            {\n                Enabled = true,\n            },\n        },\n        Config = new Databricks.Inputs.ModelServingProvisionedThroughputConfigArgs\n        {\n            ServedEntities = new[]\n            {\n                new Databricks.Inputs.ModelServingProvisionedThroughputConfigServedEntityArgs\n                {\n                    EntityName = \"system.ai.llama-4-maverick\",\n                    EntityVersion = \"1\",\n                    ProvisionedModelUnits = 100,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewModelServingProvisionedThroughput(ctx, \"llama\", \u0026databricks.ModelServingProvisionedThroughputArgs{\n\t\t\tAiGateway: 
\u0026databricks.ModelServingProvisionedThroughputAiGatewayArgs{\n\t\t\t\tUsageTrackingConfig: \u0026databricks.ModelServingProvisionedThroughputAiGatewayUsageTrackingConfigArgs{\n\t\t\t\t\tEnabled: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t\tConfig: \u0026databricks.ModelServingProvisionedThroughputConfigArgs{\n\t\t\t\tServedEntities: databricks.ModelServingProvisionedThroughputConfigServedEntityArray{\n\t\t\t\t\t\u0026databricks.ModelServingProvisionedThroughputConfigServedEntityArgs{\n\t\t\t\t\t\tEntityName:            pulumi.String(\"system.ai.llama-4-maverick\"),\n\t\t\t\t\t\tEntityVersion:         pulumi.String(\"1\"),\n\t\t\t\t\t\tProvisionedModelUnits: pulumi.Int(100),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ModelServingProvisionedThroughput;\nimport com.pulumi.databricks.ModelServingProvisionedThroughputArgs;\nimport com.pulumi.databricks.inputs.ModelServingProvisionedThroughputAiGatewayArgs;\nimport com.pulumi.databricks.inputs.ModelServingProvisionedThroughputAiGatewayUsageTrackingConfigArgs;\nimport com.pulumi.databricks.inputs.ModelServingProvisionedThroughputConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var llama = new ModelServingProvisionedThroughput(\"llama\", ModelServingProvisionedThroughputArgs.builder()\n            .aiGateway(ModelServingProvisionedThroughputAiGatewayArgs.builder()\n                .usageTrackingConfig(ModelServingProvisionedThroughputAiGatewayUsageTrackingConfigArgs.builder()\n                    .enabled(true)\n                    .build())\n                .build())\n            .config(ModelServingProvisionedThroughputConfigArgs.builder()\n                .servedEntities(ModelServingProvisionedThroughputConfigServedEntityArgs.builder()\n                    .entityName(\"system.ai.llama-4-maverick\")\n                    .entityVersion(\"1\")\n                    .provisionedModelUnits(100)\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  llama:\n    type: databricks:ModelServingProvisionedThroughput\n    properties:\n      aiGateway:\n        usageTrackingConfig:\n          enabled: true\n      config:\n        servedEntities:\n          - entityName: system.ai.llama-4-maverick\n            entityVersion: '1'\n            provisionedModelUnits: 100\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Query* or *View* individual serving endpoints.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing 
\" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto create custom and external serving endpoints in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eto create [Models in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eto create models in the [workspace model registry](https://docs.databricks.com/en/mlflow/model-registry.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003edata to export a notebook from Databricks Workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n\n","properties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGateway:ModelServingProvisionedThroughputAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. *Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfig:ModelServingProvisionedThroughputConfig","description":"The model serving endpoint configuration.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputEmailNotifications:ModelServingProvisionedThroughputEmailNotifications","description":"A block with Email notification setting.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. 
This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputProviderConfig:ModelServingProvisionedThroughputProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servingEndpointId":{"type":"string","description":"Unique identifier of the serving endpoint primarily used to set permissions and refer to this instance for other operations.\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputTag:ModelServingProvisionedThroughputTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"required":["config","name","servingEndpointId"],"inputProperties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGateway:ModelServingProvisionedThroughputAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. *Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfig:ModelServingProvisionedThroughputConfig","description":"The model serving endpoint configuration.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputEmailNotifications:ModelServingProvisionedThroughputEmailNotifications","description":"A block with Email notification setting.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputProviderConfig:ModelServingProvisionedThroughputProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputTag:ModelServingProvisionedThroughputTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"requiredInputs":["config"],"stateInputs":{"description":"Input properties used for looking up and filtering ModelServingProvisionedThroughput resources.\n","properties":{"aiGateway":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputAiGateway:ModelServingProvisionedThroughputAiGateway","description":"A block with AI Gateway configuration for the serving endpoint. 
*Note: only external model endpoints are supported as of now.*\n"},"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this serving endpoint.\n"},"config":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputConfig:ModelServingProvisionedThroughputConfig","description":"The model serving endpoint configuration.\n"},"emailNotifications":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputEmailNotifications:ModelServingProvisionedThroughputEmailNotifications","description":"A block with Email notification setting.\n"},"name":{"type":"string","description":"The name of the model serving endpoint. This field is required and must be unique across a workspace. An endpoint name can consist of alphanumeric characters, dashes, and underscores. NOTE: Changing this name will delete the existing endpoint and create a new endpoint with the updated name.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputProviderConfig:ModelServingProvisionedThroughputProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servingEndpointId":{"type":"string","description":"Unique identifier of the serving endpoint primarily used to set permissions and refer to this instance for other operations.\n"},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/ModelServingProvisionedThroughputTag:ModelServingProvisionedThroughputTag"},"description":"Tags to be attached to the serving endpoint and automatically propagated to billing logs.\n"}},"type":"object"}},"databricks:index/mount:Mount":{"description":"\u003e Please switch to databricks_volume. DBFS mounts are deprecated.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e When \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e is not specified, it will create the smallest possible cluster in the default availability zone with name equal to or starting with `terraform-mount` for the shortest possible amount of time. To avoid mount failure due to potentially quota or capacity issues with the default cluster, we recommend specifying a cluster to use for mounting.\n\n\u003e CRUD operations on a databricks mount require a running cluster. Due to limitations of terraform and the databricks mounts APIs, if the cluster the mount was most recently created / updated using no longer exists AND the mount is destroyed as a part of a pulumi up, we mark it as deleted without cleaning it up from the workspace.\n\nThis resource will [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`. Right now it supports mounting AWS S3, Azure (Blob Storage, ADLS Gen1 \u0026 Gen2), Google Cloud Storage.  It is important to understand that this will start up the cluster if the cluster is terminated. The read and refresh terraform command will require a cluster and may take some time to validate the mount.\n\nThis resource provides two ways of mounting a storage account:\n\n1. Use a storage-specific configuration block - this could be used for the most cases, as it will fill most of the necessary details. 
Currently we support the following configuration blocks:\n\n* \u003cspan pulumi-lang-nodejs=\"`s3`\" pulumi-lang-dotnet=\"`S3`\" pulumi-lang-go=\"`s3`\" pulumi-lang-python=\"`s3`\" pulumi-lang-yaml=\"`s3`\" pulumi-lang-java=\"`s3`\"\u003e`s3`\u003c/span\u003e - to [mount AWS S3](https://docs.databricks.com/data/data-sources/aws/amazon-s3.html)\n* \u003cspan pulumi-lang-nodejs=\"`gs`\" pulumi-lang-dotnet=\"`Gs`\" pulumi-lang-go=\"`gs`\" pulumi-lang-python=\"`gs`\" pulumi-lang-yaml=\"`gs`\" pulumi-lang-java=\"`gs`\"\u003e`gs`\u003c/span\u003e - to [mount Google Cloud Storage](https://docs.gcp.databricks.com/data/data-sources/google/gcs.html)\n* \u003cspan pulumi-lang-nodejs=\"`abfs`\" pulumi-lang-dotnet=\"`Abfs`\" pulumi-lang-go=\"`abfs`\" pulumi-lang-python=\"`abfs`\" pulumi-lang-yaml=\"`abfs`\" pulumi-lang-java=\"`abfs`\"\u003e`abfs`\u003c/span\u003e - to [mount ADLS Gen2](https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/adls-gen2/) using Azure Blob Filesystem (ABFS) driver\n* \u003cspan pulumi-lang-nodejs=\"`adl`\" pulumi-lang-dotnet=\"`Adl`\" pulumi-lang-go=\"`adl`\" pulumi-lang-python=\"`adl`\" pulumi-lang-yaml=\"`adl`\" pulumi-lang-java=\"`adl`\"\u003e`adl`\u003c/span\u003e - to [mount ADLS Gen1](https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-datalake) using Azure Data Lake (ADL) driver\n* \u003cspan pulumi-lang-nodejs=\"`wasb`\" pulumi-lang-dotnet=\"`Wasb`\" pulumi-lang-go=\"`wasb`\" pulumi-lang-python=\"`wasb`\" pulumi-lang-yaml=\"`wasb`\" pulumi-lang-java=\"`wasb`\"\u003e`wasb`\u003c/span\u003e - to [mount Azure Blob Storage](https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/azure-storage) using Windows Azure Storage Blob (WASB) driver\n\n1. Use generic arguments - you are responsible for providing all the parameters required to mount the specific storage. This is the most flexible option.\n\n## Common arguments\n\n* \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e - (Optional, String) Cluster to use for mounting. If no cluster is specified, a new cluster will be created and will mount the bucket for all of the clusters in this workspace. If the cluster is not running, it will be started, so be sure to set auto-termination rules on it.\n* \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e - (Optional, String) Name under which the mount will be accessible in `dbfs:/mnt/\u003cMOUNT_NAME\u003e`. 
If not specified, the provider will try to infer it depending on the resource type:\n  * \u003cspan pulumi-lang-nodejs=\"`bucketName`\" pulumi-lang-dotnet=\"`BucketName`\" pulumi-lang-go=\"`bucketName`\" pulumi-lang-python=\"`bucket_name`\" pulumi-lang-yaml=\"`bucketName`\" pulumi-lang-java=\"`bucketName`\"\u003e`bucket_name`\u003c/span\u003e for AWS S3 and Google Cloud Storage\n  * \u003cspan pulumi-lang-nodejs=\"`containerName`\" pulumi-lang-dotnet=\"`ContainerName`\" pulumi-lang-go=\"`containerName`\" pulumi-lang-python=\"`container_name`\" pulumi-lang-yaml=\"`containerName`\" pulumi-lang-java=\"`containerName`\"\u003e`container_name`\u003c/span\u003e for ADLS Gen2 and Azure Blob Storage\n  * \u003cspan pulumi-lang-nodejs=\"`storageResourceName`\" pulumi-lang-dotnet=\"`StorageResourceName`\" pulumi-lang-go=\"`storageResourceName`\" pulumi-lang-python=\"`storage_resource_name`\" pulumi-lang-yaml=\"`storageResourceName`\" pulumi-lang-java=\"`storageResourceName`\"\u003e`storage_resource_name`\u003c/span\u003e for ADLS Gen1\n* \u003cspan pulumi-lang-nodejs=\"`uri`\" pulumi-lang-dotnet=\"`Uri`\" pulumi-lang-go=\"`uri`\" pulumi-lang-python=\"`uri`\" pulumi-lang-yaml=\"`uri`\" pulumi-lang-java=\"`uri`\"\u003e`uri`\u003c/span\u003e - (Optional, String) the URI for accessing the specific storage (`s3a://....`, `abfss://....`, `gs://....`, etc.)\n* \u003cspan pulumi-lang-nodejs=\"`extraConfigs`\" pulumi-lang-dotnet=\"`ExtraConfigs`\" pulumi-lang-go=\"`extraConfigs`\" pulumi-lang-python=\"`extra_configs`\" pulumi-lang-yaml=\"`extraConfigs`\" pulumi-lang-java=\"`extraConfigs`\"\u003e`extra_configs`\u003c/span\u003e - (Optional, String map) configuration parameters that are necessary for mounting the specific storage\n* \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e - (Optional, String) resource ID for a given storage account. It can be used to fill in defaults, such as storage account \u0026 container names on Azure.\n* \u003cspan pulumi-lang-nodejs=\"`encryptionType`\" pulumi-lang-dotnet=\"`EncryptionType`\" pulumi-lang-go=\"`encryptionType`\" pulumi-lang-python=\"`encryption_type`\" pulumi-lang-yaml=\"`encryptionType`\" pulumi-lang-java=\"`encryptionType`\"\u003e`encryption_type`\u003c/span\u003e - (Optional, String) encryption type. 
Currently used only for [AWS S3 mounts](https://docs.databricks.com/data/data-sources/aws/amazon-s3.html#encrypt-data-in-s3-buckets)\n\n### Example mounting ADLS Gen2 using uri and\u003cspan pulumi-lang-nodejs=\" extraConfigs\n\" pulumi-lang-dotnet=\" ExtraConfigs\n\" pulumi-lang-go=\" extraConfigs\n\" pulumi-lang-python=\" extra_configs\n\" pulumi-lang-yaml=\" extraConfigs\n\" pulumi-lang-java=\" extraConfigs\n\"\u003e extra_configs\n\u003c/span\u003e\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst tenantId = \"00000000-1111-2222-3333-444444444444\";\nconst clientId = \"55555555-6666-7777-8888-999999999999\";\nconst secretScope = \"some-kv\";\nconst secretKey = \"some-sp-secret\";\nconst container = \"test\";\nconst storageAcc = \"lrs\";\nconst _this = new databricks.Mount(\"this\", {\n    name: \"tf-abfss\",\n    uri: `abfss://${container}@${storageAcc}.dfs.core.windows.net`,\n    extraConfigs: {\n        \"fs.azure.account.auth.type\": \"OAuth\",\n        \"fs.azure.account.oauth.provider.type\": \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\",\n        \"fs.azure.account.oauth2.client.id\": clientId,\n        \"fs.azure.account.oauth2.client.secret\": `{{secrets/${secretScope}/${secretKey}}}`,\n        \"fs.azure.account.oauth2.client.endpoint\": `https://login.microsoftonline.com/${tenantId}/oauth2/token`,\n        \"fs.azure.createRemoteFileSystemDuringInitialization\": \"false\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntenant_id = \"00000000-1111-2222-3333-444444444444\"\nclient_id = \"55555555-6666-7777-8888-999999999999\"\nsecret_scope = \"some-kv\"\nsecret_key = \"some-sp-secret\"\ncontainer = \"test\"\nstorage_acc = \"lrs\"\nthis = databricks.Mount(\"this\",\n    name=\"tf-abfss\",\n    uri=f\"abfss://{container}@{storage_acc}.dfs.core.windows.net\",\n    extra_configs={\n        \"fs.azure.account.auth.type\": \"OAuth\",\n        \"fs.azure.account.oauth.provider.type\": \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\",\n        \"fs.azure.account.oauth2.client.id\": client_id,\n        \"fs.azure.account.oauth2.client.secret\": f\"{{{{secrets/{secret_scope}/{secret_key}}}}}\",\n        \"fs.azure.account.oauth2.client.endpoint\": f\"https://login.microsoftonline.com/{tenant_id}/oauth2/token\",\n        \"fs.azure.createRemoteFileSystemDuringInitialization\": \"false\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var tenantId = \"00000000-1111-2222-3333-444444444444\";\n\n    var clientId = \"55555555-6666-7777-8888-999999999999\";\n\n    var secretScope = \"some-kv\";\n\n    var secretKey = \"some-sp-secret\";\n\n    var container = \"test\";\n\n    var storageAcc = \"lrs\";\n\n    var @this = new Databricks.Mount(\"this\", new()\n    {\n        Name = \"tf-abfss\",\n        Uri = $\"abfss://{container}@{storageAcc}.dfs.core.windows.net\",\n        ExtraConfigs = \n        {\n            { \"fs.azure.account.auth.type\", \"OAuth\" },\n            { \"fs.azure.account.oauth.provider.type\", \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\" },\n            { \"fs.azure.account.oauth2.client.id\", clientId },\n            { \"fs.azure.account.oauth2.client.secret\", $\"{{{{secrets/{secretScope}/{secretKey}}}}}\" },\n           
 { \"fs.azure.account.oauth2.client.endpoint\", $\"https://login.microsoftonline.com/{tenantId}/oauth2/token\" },\n            { \"fs.azure.createRemoteFileSystemDuringInitialization\", \"false\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttenantId := \"00000000-1111-2222-3333-444444444444\"\n\t\tclientId := \"55555555-6666-7777-8888-999999999999\"\n\t\tsecretScope := \"some-kv\"\n\t\tsecretKey := \"some-sp-secret\"\n\t\tcontainer := \"test\"\n\t\tstorageAcc := \"lrs\"\n\t\t_, err := databricks.NewMount(ctx, \"this\", \u0026databricks.MountArgs{\n\t\t\tName: pulumi.String(\"tf-abfss\"),\n\t\t\tUri:  pulumi.Sprintf(\"abfss://%v@%v.dfs.core.windows.net\", container, storageAcc),\n\t\t\tExtraConfigs: pulumi.StringMap{\n\t\t\t\t\"fs.azure.account.auth.type\":                          pulumi.String(\"OAuth\"),\n\t\t\t\t\"fs.azure.account.oauth.provider.type\":                pulumi.String(\"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\"),\n\t\t\t\t\"fs.azure.account.oauth2.client.id\":                   pulumi.String(clientId),\n\t\t\t\t\"fs.azure.account.oauth2.client.secret\":               pulumi.Sprintf(\"{{secrets/%v/%v}}\", secretScope, secretKey),\n\t\t\t\t\"fs.azure.account.oauth2.client.endpoint\":             pulumi.Sprintf(\"https://login.microsoftonline.com/%v/oauth2/token\", tenantId),\n\t\t\t\t\"fs.azure.createRemoteFileSystemDuringInitialization\": pulumi.String(\"false\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var tenantId = \"00000000-1111-2222-3333-444444444444\";\n\n        final var clientId = \"55555555-6666-7777-8888-999999999999\";\n\n        final var secretScope = \"some-kv\";\n\n        final var secretKey = \"some-sp-secret\";\n\n        final var container = \"test\";\n\n        final var storageAcc = \"lrs\";\n\n        var this_ = new Mount(\"this\", MountArgs.builder()\n            .name(\"tf-abfss\")\n            .uri(String.format(\"abfss://%s@%s.dfs.core.windows.net\", container,storageAcc))\n            .extraConfigs(Map.ofEntries(\n                Map.entry(\"fs.azure.account.auth.type\", \"OAuth\"),\n                Map.entry(\"fs.azure.account.oauth.provider.type\", \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\"),\n                Map.entry(\"fs.azure.account.oauth2.client.id\", clientId),\n                Map.entry(\"fs.azure.account.oauth2.client.secret\", String.format(\"{{{{secrets/%s/%s}}}}\", secretScope,secretKey)),\n                Map.entry(\"fs.azure.account.oauth2.client.endpoint\", String.format(\"https://login.microsoftonline.com/%s/oauth2/token\", tenantId)),\n                Map.entry(\"fs.azure.createRemoteFileSystemDuringInitialization\", \"false\")\n            ))\n            .build());\n\n    
}\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Mount\n    properties:\n      name: tf-abfss\n      uri: abfss://${container}@${storageAcc}.dfs.core.windows.net\n      extraConfigs:\n        fs.azure.account.auth.type: OAuth\n        fs.azure.account.oauth.provider.type: org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\n        fs.azure.account.oauth2.client.id: ${clientId}\n        fs.azure.account.oauth2.client.secret: '{{secrets/${secretScope}/${secretKey}}}'\n        fs.azure.account.oauth2.client.endpoint: https://login.microsoftonline.com/${tenantId}/oauth2/token\n        fs.azure.createRemoteFileSystemDuringInitialization: 'false'\nvariables:\n  tenantId: 00000000-1111-2222-3333-444444444444\n  clientId: 55555555-6666-7777-8888-999999999999\n  secretScope: some-kv\n  secretKey: some-sp-secret\n  container: test\n  storageAcc: lrs\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example mounting ADLS Gen2 with AAD passthrough\n\n\u003e AAD passthrough is considered a legacy data access pattern. Use Unity Catalog for fine-grained data access control.\n\n\u003e Mounts using AAD passthrough cannot be created using a service principal.\n\nTo mount ADLS Gen2 with Azure Active Directory credentials passthrough, we need to execute the mount commands on a cluster configured with AAD credentials passthrough \u0026 provide the necessary configuration parameters (see [documentation](https://docs.microsoft.com/en-us/azure/databricks/security/credential-passthrough/adls-passthrough#--mount-azure-data-lake-storage-to-dbfs-using-credential-passthrough) for more details).\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as azurerm from \"@pulumi/azurerm\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Resource group for Databricks Workspace\nconst resourceGroup = config.require(\"resourceGroup\");\n// Name of the Databricks Workspace\nconst workspaceName = config.require(\"workspaceName\");\nconst _this = azurerm.index.DatabricksWorkspace({\n    name: workspaceName,\n    resourceGroupName: resourceGroup,\n});\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst latest = databricks.getSparkVersion({});\nconst sharedPassthrough = new databricks.Cluster(\"shared_passthrough\", {\n    clusterName: \"Shared Passthrough for mount\",\n    sparkVersion: latest.then(latest =\u003e latest.id),\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    autoterminationMinutes: 10,\n    numWorkers: 1,\n    sparkConf: {\n        \"spark.databricks.cluster.profile\": \"serverless\",\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.passthrough.enabled\": \"true\",\n        \"spark.databricks.pyspark.enableProcessIsolation\": \"true\",\n    },\n    customTags: {\n        ResourceClass: \"Serverless\",\n    },\n});\n// Name of the ADLS Gen2 storage container\nconst storageAcc = config.require(\"storageAcc\");\n// Name of container inside storage account\nconst container = config.require(\"container\");\nconst passthrough = new databricks.Mount(\"passthrough\", {\n    name: \"passthrough-test\",\n    clusterId: sharedPassthrough.id,\n    uri: `abfss://${container}@${storageAcc}.dfs.core.windows.net`,\n    extraConfigs: {\n        \"fs.azure.account.auth.type\": \"CustomAccessToken\",\n        \"fs.azure.account.custom.token.provider.class\": 
\"{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_azurerm as azurerm\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Resource group for Databricks Workspace\nresource_group = config.require(\"resourceGroup\")\n# Name of the Databricks Workspace\nworkspace_name = config.require(\"workspaceName\")\nthis = azurerm.index.databricks_workspace(name=workspace_name,\n    resource_group_name=resource_group)\nsmallest = databricks.get_node_type(local_disk=True)\nlatest = databricks.get_spark_version()\nshared_passthrough = databricks.Cluster(\"shared_passthrough\",\n    cluster_name=\"Shared Passthrough for mount\",\n    spark_version=latest.id,\n    node_type_id=smallest.id,\n    autotermination_minutes=10,\n    num_workers=1,\n    spark_conf={\n        \"spark.databricks.cluster.profile\": \"serverless\",\n        \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n        \"spark.databricks.passthrough.enabled\": \"true\",\n        \"spark.databricks.pyspark.enableProcessIsolation\": \"true\",\n    },\n    custom_tags={\n        \"ResourceClass\": \"Serverless\",\n    })\n# Name of the ADLS Gen2 storage container\nstorage_acc = config.require(\"storageAcc\")\n# Name of container inside storage account\ncontainer = config.require(\"container\")\npassthrough = databricks.Mount(\"passthrough\",\n    name=\"passthrough-test\",\n    cluster_id=shared_passthrough.id,\n    uri=f\"abfss://{container}@{storage_acc}.dfs.core.windows.net\",\n    extra_configs={\n        \"fs.azure.account.auth.type\": \"CustomAccessToken\",\n        \"fs.azure.account.custom.token.provider.class\": \"{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Azurerm = Pulumi.Azurerm;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Resource group for Databricks Workspace\n    var resourceGroup = config.Require(\"resourceGroup\");\n    // Name of the Databricks Workspace\n    var workspaceName = config.Require(\"workspaceName\");\n    var @this = Azurerm.Index.DatabricksWorkspace.Invoke(new()\n    {\n        Name = workspaceName,\n        ResourceGroupName = resourceGroup,\n    });\n\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var latest = Databricks.GetSparkVersion.Invoke();\n\n    var sharedPassthrough = new Databricks.Cluster(\"shared_passthrough\", new()\n    {\n        ClusterName = \"Shared Passthrough for mount\",\n        SparkVersion = latest.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 10,\n        NumWorkers = 1,\n        SparkConf = \n        {\n            { \"spark.databricks.cluster.profile\", \"serverless\" },\n            { \"spark.databricks.repl.allowedLanguages\", \"python,sql\" },\n            { \"spark.databricks.passthrough.enabled\", \"true\" },\n            { \"spark.databricks.pyspark.enableProcessIsolation\", \"true\" },\n        },\n        CustomTags = \n        {\n            { \"ResourceClass\", \"Serverless\" },\n        },\n    });\n\n    // Name of the ADLS Gen2 storage container\n    var storageAcc = config.Require(\"storageAcc\");\n    // Name of container inside storage 
account\n    var container = config.Require(\"container\");\n    var passthrough = new Databricks.Mount(\"passthrough\", new()\n    {\n        Name = \"passthrough-test\",\n        ClusterId = sharedPassthrough.Id,\n        Uri = $\"abfss://{container}@{storageAcc}.dfs.core.windows.net\",\n        ExtraConfigs = \n        {\n            { \"fs.azure.account.auth.type\", \"CustomAccessToken\" },\n            { \"fs.azure.account.custom.token.provider.class\", \"{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-azurerm/sdk/go/azurerm\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Resource group for Databricks Workspace\n\t\tresourceGroup := cfg.Require(\"resourceGroup\")\n\t\t// Name of the Databricks Workspace\n\t\tworkspaceName := cfg.Require(\"workspaceName\")\n\t\t_, err := azurerm.DatabricksWorkspace(ctx, map[string]interface{}{\n\t\t\t\"name\":              workspaceName,\n\t\t\t\"resourceGroupName\": resourceGroup,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatest, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsharedPassthrough, err := databricks.NewCluster(ctx, \"shared_passthrough\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared Passthrough for mount\"),\n\t\t\tSparkVersion:           pulumi.String(latest.Id),\n\t\t\tNodeTypeId:             pulumi.String(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(10),\n\t\t\tNumWorkers:             pulumi.Int(1),\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.cluster.profile\":                pulumi.String(\"serverless\"),\n\t\t\t\t\"spark.databricks.repl.allowedLanguages\":          pulumi.String(\"python,sql\"),\n\t\t\t\t\"spark.databricks.passthrough.enabled\":            pulumi.String(\"true\"),\n\t\t\t\t\"spark.databricks.pyspark.enableProcessIsolation\": pulumi.String(\"true\"),\n\t\t\t},\n\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\"ResourceClass\": pulumi.String(\"Serverless\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Name of the ADLS Gen2 storage container\n\t\tstorageAcc := cfg.Require(\"storageAcc\")\n\t\t// Name of container inside storage account\n\t\tcontainer := cfg.Require(\"container\")\n\t\t_, err = databricks.NewMount(ctx, \"passthrough\", \u0026databricks.MountArgs{\n\t\t\tName:      pulumi.String(\"passthrough-test\"),\n\t\t\tClusterId: sharedPassthrough.ID(),\n\t\t\tUri:       pulumi.Sprintf(\"abfss://%v@%v.dfs.core.windows.net\", container, storageAcc),\n\t\t\tExtraConfigs: pulumi.StringMap{\n\t\t\t\t\"fs.azure.account.auth.type\":                   pulumi.String(\"CustomAccessToken\"),\n\t\t\t\t\"fs.azure.account.custom.token.provider.class\": pulumi.String(\"{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport 
com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.azurerm.AzurermFunctions;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var resourceGroup = config.get(\"resourceGroup\");\n        final var workspaceName = config.get(\"workspaceName\");\n        final var this = AzurermFunctions.DatabricksWorkspace(Map.ofEntries(\n            Map.entry(\"name\", workspaceName),\n            Map.entry(\"resourceGroupName\", resourceGroup)\n        ));\n\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        final var latest = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .build());\n\n        var sharedPassthrough = new Cluster(\"sharedPassthrough\", ClusterArgs.builder()\n            .clusterName(\"Shared Passthrough for mount\")\n            .sparkVersion(latest.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(10)\n            .numWorkers(1)\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.cluster.profile\", \"serverless\"),\n                Map.entry(\"spark.databricks.repl.allowedLanguages\", \"python,sql\"),\n                Map.entry(\"spark.databricks.passthrough.enabled\", \"true\"),\n                Map.entry(\"spark.databricks.pyspark.enableProcessIsolation\", \"true\")\n            ))\n            .customTags(Map.of(\"ResourceClass\", \"Serverless\"))\n            .build());\n\n        final var storageAcc = config.get(\"storageAcc\");\n        final var container = config.get(\"container\");\n        var passthrough = new Mount(\"passthrough\", MountArgs.builder()\n            .name(\"passthrough-test\")\n            .clusterId(sharedPassthrough.id())\n            .uri(String.format(\"abfss://%s@%s.dfs.core.windows.net\", container,storageAcc))\n            .extraConfigs(Map.ofEntries(\n                Map.entry(\"fs.azure.account.auth.type\", \"CustomAccessToken\"),\n                Map.entry(\"fs.azure.account.custom.token.provider.class\", \"{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  resourceGroup:\n    type: string\n  workspaceName:\n    type: string\n  storageAcc:\n    type: string\n  container:\n    type: string\nresources:\n  sharedPassthrough:\n    type: databricks:Cluster\n    name: shared_passthrough\n    properties:\n      clusterName: Shared Passthrough for mount\n      sparkVersion: ${latest.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 10\n      numWorkers: 1\n      sparkConf:\n        spark.databricks.cluster.profile: serverless\n        spark.databricks.repl.allowedLanguages: python,sql\n        spark.databricks.passthrough.enabled: 'true'\n        
spark.databricks.pyspark.enableProcessIsolation: 'true'\n      customTags:\n        ResourceClass: Serverless\n  passthrough:\n    type: databricks:Mount\n    properties:\n      name: passthrough-test\n      clusterId: ${sharedPassthrough.id}\n      uri: abfss://${container}@${storageAcc}.dfs.core.windows.net\n      extraConfigs:\n        fs.azure.account.auth.type: CustomAccessToken\n        fs.azure.account.custom.token.provider.class: '{{sparkconf/spark.databricks.passthrough.adls.gen2.tokenProviderClassName}}'\nvariables:\n  this:\n    fn::invoke:\n      function: azurerm:DatabricksWorkspace\n      arguments:\n        name: ${workspaceName}\n        resourceGroupName: ${resourceGroup}\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n  latest:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## s3 block\n\nThis block allows specifying parameters for mounting of AWS S3. The following arguments are required inside the \u003cspan pulumi-lang-nodejs=\"`s3`\" pulumi-lang-dotnet=\"`S3`\" pulumi-lang-go=\"`s3`\" pulumi-lang-python=\"`s3`\" pulumi-lang-yaml=\"`s3`\" pulumi-lang-java=\"`s3`\"\u003e`s3`\u003c/span\u003e block:\n\n* \u003cspan pulumi-lang-nodejs=\"`instanceProfile`\" pulumi-lang-dotnet=\"`InstanceProfile`\" pulumi-lang-go=\"`instanceProfile`\" pulumi-lang-python=\"`instance_profile`\" pulumi-lang-yaml=\"`instanceProfile`\" pulumi-lang-java=\"`instanceProfile`\"\u003e`instance_profile`\u003c/span\u003e - (Optional) (String) ARN of registered instance profile for data access.  If it's not specified, then the \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e should be provided, and the cluster should have an instance profile attached to it. 
If both \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e \u0026 \u003cspan pulumi-lang-nodejs=\"`instanceProfile`\" pulumi-lang-dotnet=\"`InstanceProfile`\" pulumi-lang-go=\"`instanceProfile`\" pulumi-lang-python=\"`instance_profile`\" pulumi-lang-yaml=\"`instanceProfile`\" pulumi-lang-java=\"`instanceProfile`\"\u003e`instance_profile`\u003c/span\u003e are specified, then \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e takes precedence.\n* \u003cspan pulumi-lang-nodejs=\"`bucketName`\" pulumi-lang-dotnet=\"`BucketName`\" pulumi-lang-go=\"`bucketName`\" pulumi-lang-python=\"`bucket_name`\" pulumi-lang-yaml=\"`bucketName`\" pulumi-lang-java=\"`bucketName`\"\u003e`bucket_name`\u003c/span\u003e - (Required) (String) S3 bucket name to be mounted.\n\n### Example of mounting S3\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// now you can do `%fs ls /mnt/experiments` in notebooks\nconst _this = new databricks.Mount(\"this\", {\n    name: \"experiments\",\n    s3: {\n        instanceProfile: ds.id,\n        bucketName: thisAwsS3Bucket.bucket,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# now you can do `%fs ls /mnt/experiments` in notebooks\nthis = databricks.Mount(\"this\",\n    name=\"experiments\",\n    s3={\n        \"instance_profile\": ds[\"id\"],\n        \"bucket_name\": this_aws_s3_bucket[\"bucket\"],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // now you can do `%fs ls /mnt/experiments` in notebooks\n    var @this = new Databricks.Mount(\"this\", new()\n    {\n        Name = \"experiments\",\n        S3 = new Databricks.Inputs.MountS3Args\n        {\n            InstanceProfile = ds.Id,\n            BucketName = thisAwsS3Bucket.Bucket,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// now you can do `%fs ls /mnt/experiments` in notebooks\n\t\t_, err := databricks.NewMount(ctx, \"this\", \u0026databricks.MountArgs{\n\t\t\tName: pulumi.String(\"experiments\"),\n\t\t\tS3: \u0026databricks.MountS3Args{\n\t\t\t\tInstanceProfile: pulumi.Any(ds.Id),\n\t\t\t\tBucketName:      pulumi.Any(thisAwsS3Bucket.Bucket),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport com.pulumi.databricks.inputs.MountS3Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n   
 }\n\n    public static void stack(Context ctx) {\n        // now you can do `%fs ls /mnt/experiments` in notebooks\n        var this_ = new Mount(\"this\", MountArgs.builder()\n            .name(\"experiments\")\n            .s3(MountS3Args.builder()\n                .instanceProfile(ds.id())\n                .bucketName(thisAwsS3Bucket.bucket())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  # now you can do `%fs ls /mnt/experiments` in notebooks\n  this:\n    type: databricks:Mount\n    properties:\n      name: experiments\n      s3:\n        instanceProfile: ${ds.id}\n        bucketName: ${thisAwsS3Bucket.bucket}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## abfs block\n\nThis block allows specifying parameters for mounting of the ADLS Gen2. The following arguments are required inside the \u003cspan pulumi-lang-nodejs=\"`abfs`\" pulumi-lang-dotnet=\"`Abfs`\" pulumi-lang-go=\"`abfs`\" pulumi-lang-python=\"`abfs`\" pulumi-lang-yaml=\"`abfs`\" pulumi-lang-java=\"`abfs`\"\u003e`abfs`\u003c/span\u003e block:\n\n* \u003cspan pulumi-lang-nodejs=\"`clientId`\" pulumi-lang-dotnet=\"`ClientId`\" pulumi-lang-go=\"`clientId`\" pulumi-lang-python=\"`client_id`\" pulumi-lang-yaml=\"`clientId`\" pulumi-lang-java=\"`clientId`\"\u003e`client_id`\u003c/span\u003e - (Required) (String) This is the\u003cspan pulumi-lang-nodejs=\" clientId \" pulumi-lang-dotnet=\" ClientId \" pulumi-lang-go=\" clientId \" pulumi-lang-python=\" client_id \" pulumi-lang-yaml=\" clientId \" pulumi-lang-java=\" clientId \"\u003e client_id \u003c/span\u003e(Application Object ID) for the enterprise application for the service principal.\n* \u003cspan pulumi-lang-nodejs=\"`tenantId`\" pulumi-lang-dotnet=\"`TenantId`\" pulumi-lang-go=\"`tenantId`\" pulumi-lang-python=\"`tenant_id`\" pulumi-lang-yaml=\"`tenantId`\" pulumi-lang-java=\"`tenantId`\"\u003e`tenant_id`\u003c/span\u003e - (Optional) (String) This is your azure directory tenant id. It is required for creating the mount. (Could be omitted if Azure authentication is used, and we can extract \u003cspan pulumi-lang-nodejs=\"`tenantId`\" pulumi-lang-dotnet=\"`TenantId`\" pulumi-lang-go=\"`tenantId`\" pulumi-lang-python=\"`tenant_id`\" pulumi-lang-yaml=\"`tenantId`\" pulumi-lang-java=\"`tenantId`\"\u003e`tenant_id`\u003c/span\u003e from it).\n* \u003cspan pulumi-lang-nodejs=\"`clientSecretKey`\" pulumi-lang-dotnet=\"`ClientSecretKey`\" pulumi-lang-go=\"`clientSecretKey`\" pulumi-lang-python=\"`client_secret_key`\" pulumi-lang-yaml=\"`clientSecretKey`\" pulumi-lang-java=\"`clientSecretKey`\"\u003e`client_secret_key`\u003c/span\u003e - (Required) (String) This is the secret key in which your service principal/enterprise app client secret will be stored.\n* \u003cspan pulumi-lang-nodejs=\"`clientSecretScope`\" pulumi-lang-dotnet=\"`ClientSecretScope`\" pulumi-lang-go=\"`clientSecretScope`\" pulumi-lang-python=\"`client_secret_scope`\" pulumi-lang-yaml=\"`clientSecretScope`\" pulumi-lang-java=\"`clientSecretScope`\"\u003e`client_secret_scope`\u003c/span\u003e - (Required) (String) This is the secret scope in which your service principal/enterprise app client secret will be stored.\n* \u003cspan pulumi-lang-nodejs=\"`containerName`\" pulumi-lang-dotnet=\"`ContainerName`\" pulumi-lang-go=\"`containerName`\" pulumi-lang-python=\"`container_name`\" pulumi-lang-yaml=\"`containerName`\" pulumi-lang-java=\"`containerName`\"\u003e`container_name`\u003c/span\u003e - (Required) (String) ADLS gen2 container name. 
(Could be omitted if \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e is provided)\n* \u003cspan pulumi-lang-nodejs=\"`storageAccountName`\" pulumi-lang-dotnet=\"`StorageAccountName`\" pulumi-lang-go=\"`storageAccountName`\" pulumi-lang-python=\"`storage_account_name`\" pulumi-lang-yaml=\"`storageAccountName`\" pulumi-lang-java=\"`storageAccountName`\"\u003e`storage_account_name`\u003c/span\u003e - (Required) (String) The name of the storage resource in which the data is. (Could be omitted if \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e is provided)\n* \u003cspan pulumi-lang-nodejs=\"`directory`\" pulumi-lang-dotnet=\"`Directory`\" pulumi-lang-go=\"`directory`\" pulumi-lang-python=\"`directory`\" pulumi-lang-yaml=\"`directory`\" pulumi-lang-java=\"`directory`\"\u003e`directory`\u003c/span\u003e - (Computed) (String) This is optional if you don't want to add an additional directory that you wish to mount. This must start with a \"/\".\n* \u003cspan pulumi-lang-nodejs=\"`initializeFileSystem`\" pulumi-lang-dotnet=\"`InitializeFileSystem`\" pulumi-lang-go=\"`initializeFileSystem`\" pulumi-lang-python=\"`initialize_file_system`\" pulumi-lang-yaml=\"`initializeFileSystem`\" pulumi-lang-java=\"`initializeFileSystem`\"\u003e`initialize_file_system`\u003c/span\u003e - (Required) (Bool) either or not initialize FS for the first use\n\n### Creating mount for ADLS Gen2 using abfs block\n\nIn this example, we're using Azure authentication, so we can omit some parameters (\u003cspan pulumi-lang-nodejs=\"`tenantId`\" pulumi-lang-dotnet=\"`TenantId`\" pulumi-lang-go=\"`tenantId`\" pulumi-lang-python=\"`tenant_id`\" pulumi-lang-yaml=\"`tenantId`\" pulumi-lang-java=\"`tenantId`\"\u003e`tenant_id`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`storageAccountName`\" pulumi-lang-dotnet=\"`StorageAccountName`\" pulumi-lang-go=\"`storageAccountName`\" pulumi-lang-python=\"`storage_account_name`\" pulumi-lang-yaml=\"`storageAccountName`\" pulumi-lang-java=\"`storageAccountName`\"\u003e`storage_account_name`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`containerName`\" pulumi-lang-dotnet=\"`ContainerName`\" pulumi-lang-go=\"`containerName`\" pulumi-lang-python=\"`container_name`\" pulumi-lang-yaml=\"`containerName`\" pulumi-lang-java=\"`containerName`\"\u003e`container_name`\u003c/span\u003e) that will be detected automatically.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as azurerm from \"@pulumi/azurerm\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst terraform = new databricks.SecretScope(\"terraform\", {\n    name: \"application\",\n    initialManagePrincipal: \"users\",\n});\nconst servicePrincipalKey = new databricks.Secret(\"service_principal_key\", {\n    key: \"service_principal_key\",\n    stringValue: ARM_CLIENT_SECRET,\n    scope: terraform.name,\n});\nconst _this = new azurerm.index.StorageAccount(\"this\", {\n    name: `${prefix}datalake`,\n    resourceGroupName: resourceGroupName,\n    location: resourceGroupLocation,\n    accountTier: \"Standard\",\n    accountReplicationType: \"GRS\",\n  
  accountKind: \"StorageV2\",\n    isHnsEnabled: true,\n});\nconst thisRoleAssignment = new azurerm.index.RoleAssignment(\"this\", {\n    scope: _this.id,\n    roleDefinitionName: \"Storage Blob Data Contributor\",\n    principalId: current.objectId,\n});\nconst thisStorageContainer = new azurerm.index.StorageContainer(\"this\", {\n    name: \"marketing\",\n    storageAccountName: _this.name,\n    containerAccessType: \"private\",\n});\nconst marketing = new databricks.Mount(\"marketing\", {\n    name: \"marketing\",\n    resourceId: thisStorageContainer.resourceManagerId,\n    abfs: {\n        clientId: current.clientId,\n        clientSecretScope: terraform.name,\n        clientSecretKey: servicePrincipalKey.key,\n        initializeFileSystem: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_azurerm as azurerm\nimport pulumi_databricks as databricks\n\nterraform = databricks.SecretScope(\"terraform\",\n    name=\"application\",\n    initial_manage_principal=\"users\")\nservice_principal_key = databricks.Secret(\"service_principal_key\",\n    key=\"service_principal_key\",\n    string_value=ARM_CLIENT_SECRET,\n    scope=terraform.name)\nthis = azurerm.index.StorageAccount(\"this\",\n    name=f\"{prefix}datalake\",\n    resource_group_name=resource_group_name,\n    location=resource_group_location,\n    account_tier=\"Standard\",\n    account_replication_type=\"GRS\",\n    account_kind=\"StorageV2\",\n    is_hns_enabled=True)\nthis_role_assignment = azurerm.index.RoleAssignment(\"this\",\n    scope=this.id,\n    role_definition_name=\"Storage Blob Data Contributor\",\n    principal_id=current.object_id)\nthis_storage_container = azurerm.index.StorageContainer(\"this\",\n    name=\"marketing\",\n    storage_account_name=this.name,\n    container_access_type=\"private\")\nmarketing = databricks.Mount(\"marketing\",\n    name=\"marketing\",\n    resource_id=this_storage_container[\"resourceManagerId\"],\n    abfs={\n        \"client_id\": current[\"clientId\"],\n        \"client_secret_scope\": terraform.name,\n        \"client_secret_key\": service_principal_key.key,\n        \"initialize_file_system\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Azurerm = Pulumi.Azurerm;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var terraform = new Databricks.SecretScope(\"terraform\", new()\n    {\n        Name = \"application\",\n        InitialManagePrincipal = \"users\",\n    });\n\n    var servicePrincipalKey = new Databricks.Secret(\"service_principal_key\", new()\n    {\n        Key = \"service_principal_key\",\n        StringValue = ARM_CLIENT_SECRET,\n        Scope = terraform.Name,\n    });\n\n    var @this = new Azurerm.Index.StorageAccount(\"this\", new()\n    {\n        Name = $\"{prefix}datalake\",\n        ResourceGroupName = resourceGroupName,\n        Location = resourceGroupLocation,\n        AccountTier = \"Standard\",\n        AccountReplicationType = \"GRS\",\n        AccountKind = \"StorageV2\",\n        IsHnsEnabled = true,\n    });\n\n    var thisRoleAssignment = new Azurerm.Index.RoleAssignment(\"this\", new()\n    {\n        Scope = @this.Id,\n        RoleDefinitionName = \"Storage Blob Data Contributor\",\n        PrincipalId = current.ObjectId,\n    });\n\n    var thisStorageContainer = new Azurerm.Index.StorageContainer(\"this\", new()\n    {\n        Name = \"marketing\",\n        StorageAccountName = @this.Name,\n        ContainerAccessType = \"private\",\n    });\n\n    
var marketing = new Databricks.Mount(\"marketing\", new()\n    {\n        Name = \"marketing\",\n        ResourceId = thisStorageContainer.ResourceManagerId,\n        Abfs = new Databricks.Inputs.MountAbfsArgs\n        {\n            ClientId = current.ClientId,\n            ClientSecretScope = terraform.Name,\n            ClientSecretKey = servicePrincipalKey.Key,\n            InitializeFileSystem = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-azurerm/sdk/go/azurerm\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tterraform, err := databricks.NewSecretScope(ctx, \"terraform\", \u0026databricks.SecretScopeArgs{\n\t\t\tName:                   pulumi.String(\"application\"),\n\t\t\tInitialManagePrincipal: pulumi.String(\"users\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tservicePrincipalKey, err := databricks.NewSecret(ctx, \"service_principal_key\", \u0026databricks.SecretArgs{\n\t\t\tKey:         pulumi.String(\"service_principal_key\"),\n\t\t\tStringValue: pulumi.Any(ARM_CLIENT_SECRET),\n\t\t\tScope:       terraform.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := azurerm.NewStorageAccount(ctx, \"this\", \u0026azurerm.StorageAccountArgs{\n\t\t\tName:                   fmt.Sprintf(\"%vdatalake\", prefix),\n\t\t\tResourceGroupName:      resourceGroupName,\n\t\t\tLocation:               resourceGroupLocation,\n\t\t\tAccountTier:            \"Standard\",\n\t\t\tAccountReplicationType: \"GRS\",\n\t\t\tAccountKind:            \"StorageV2\",\n\t\t\tIsHnsEnabled:           true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = azurerm.NewRoleAssignment(ctx, \"this\", \u0026azurerm.RoleAssignmentArgs{\n\t\t\tScope:              this.Id,\n\t\t\tRoleDefinitionName: \"Storage Blob Data Contributor\",\n\t\t\tPrincipalId:        current.ObjectId,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisStorageContainer, err := azurerm.NewStorageContainer(ctx, \"this\", \u0026azurerm.StorageContainerArgs{\n\t\t\tName:                \"marketing\",\n\t\t\tStorageAccountName:  this.Name,\n\t\t\tContainerAccessType: \"private\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMount(ctx, \"marketing\", \u0026databricks.MountArgs{\n\t\t\tName:       pulumi.String(\"marketing\"),\n\t\t\tResourceId: thisStorageContainer.ResourceManagerId,\n\t\t\tAbfs: \u0026databricks.MountAbfsArgs{\n\t\t\t\tClientId:             pulumi.Any(current.ClientId),\n\t\t\t\tClientSecretScope:    terraform.Name,\n\t\t\t\tClientSecretKey:      servicePrincipalKey.Key,\n\t\t\t\tInitializeFileSystem: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SecretScope;\nimport com.pulumi.databricks.SecretScopeArgs;\nimport com.pulumi.databricks.Secret;\nimport com.pulumi.databricks.SecretArgs;\nimport com.pulumi.azurerm.StorageAccount;\nimport com.pulumi.azurerm.StorageAccountArgs;\nimport com.pulumi.azurerm.RoleAssignment;\nimport com.pulumi.azurerm.RoleAssignmentArgs;\nimport com.pulumi.azurerm.StorageContainer;\nimport com.pulumi.azurerm.StorageContainerArgs;\nimport com.pulumi.databricks.Mount;\nimport 
com.pulumi.databricks.MountArgs;\nimport com.pulumi.databricks.inputs.MountAbfsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var terraform = new SecretScope(\"terraform\", SecretScopeArgs.builder()\n            .name(\"application\")\n            .initialManagePrincipal(\"users\")\n            .build());\n\n        var servicePrincipalKey = new Secret(\"servicePrincipalKey\", SecretArgs.builder()\n            .key(\"service_principal_key\")\n            .stringValue(ARM_CLIENT_SECRET)\n            .scope(terraform.name())\n            .build());\n\n        var this_ = new StorageAccount(\"this\", StorageAccountArgs.builder()\n            .name(String.format(\"%sdatalake\", prefix))\n            .resourceGroupName(resourceGroupName)\n            .location(resourceGroupLocation)\n            .accountTier(\"Standard\")\n            .accountReplicationType(\"GRS\")\n            .accountKind(\"StorageV2\")\n            .isHnsEnabled(true)\n            .build());\n\n        var thisRoleAssignment = new RoleAssignment(\"thisRoleAssignment\", RoleAssignmentArgs.builder()\n            .scope(this_.id())\n            .roleDefinitionName(\"Storage Blob Data Contributor\")\n            .principalId(current.objectId())\n            .build());\n\n        var thisStorageContainer = new StorageContainer(\"thisStorageContainer\", StorageContainerArgs.builder()\n            .name(\"marketing\")\n            .storageAccountName(this_.name())\n            .containerAccessType(\"private\")\n            .build());\n\n        var marketing = new Mount(\"marketing\", MountArgs.builder()\n            .name(\"marketing\")\n            .resourceId(thisStorageContainer.resourceManagerId())\n            .abfs(MountAbfsArgs.builder()\n                .clientId(current.clientId())\n                .clientSecretScope(terraform.name())\n                .clientSecretKey(servicePrincipalKey.key())\n                .initializeFileSystem(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  terraform:\n    type: databricks:SecretScope\n    properties:\n      name: application\n      initialManagePrincipal: users\n  servicePrincipalKey:\n    type: databricks:Secret\n    name: service_principal_key\n    properties:\n      key: service_principal_key\n      stringValue: ${ARM_CLIENT_SECRET}\n      scope: ${terraform.name}\n  this:\n    type: azurerm:StorageAccount\n    properties:\n      name: ${prefix}datalake\n      resourceGroupName: ${resourceGroupName}\n      location: ${resourceGroupLocation}\n      accountTier: Standard\n      accountReplicationType: GRS\n      accountKind: StorageV2\n      isHnsEnabled: true\n  thisRoleAssignment:\n    type: azurerm:RoleAssignment\n    name: this\n    properties:\n      scope: ${this.id}\n      roleDefinitionName: Storage Blob Data Contributor\n      principalId: ${current.objectId}\n  thisStorageContainer:\n    type: azurerm:StorageContainer\n    name: this\n    properties:\n      name: marketing\n      storageAccountName: ${this.name}\n      containerAccessType: private\n  marketing:\n    type: databricks:Mount\n    properties:\n      name: marketing\n      resourceId: ${thisStorageContainer.resourceManagerId}\n      abfs:\n        clientId: ${current.clientId}\n       
 clientSecretScope: ${terraform.name}\n        clientSecretKey: ${servicePrincipalKey.key}\n        initializeFileSystem: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## gs block\n\nThis block allows specifying parameters for mounting of the Google Cloud Storage. The following arguments are required inside the \u003cspan pulumi-lang-nodejs=\"`gs`\" pulumi-lang-dotnet=\"`Gs`\" pulumi-lang-go=\"`gs`\" pulumi-lang-python=\"`gs`\" pulumi-lang-yaml=\"`gs`\" pulumi-lang-java=\"`gs`\"\u003e`gs`\u003c/span\u003e block:\n\n* \u003cspan pulumi-lang-nodejs=\"`serviceAccount`\" pulumi-lang-dotnet=\"`ServiceAccount`\" pulumi-lang-go=\"`serviceAccount`\" pulumi-lang-python=\"`service_account`\" pulumi-lang-yaml=\"`serviceAccount`\" pulumi-lang-java=\"`serviceAccount`\"\u003e`service_account`\u003c/span\u003e - (Optional) (String) email of registered [Google Service Account](https://docs.gcp.databricks.com/data/data-sources/google/gcs.html#step-1-set-up-google-cloud-service-account-using-google-cloud-console) for data access.  If it's not specified, then the \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e should be provided, and the cluster should have a Google service account attached to it.\n* \u003cspan pulumi-lang-nodejs=\"`bucketName`\" pulumi-lang-dotnet=\"`BucketName`\" pulumi-lang-go=\"`bucketName`\" pulumi-lang-python=\"`bucket_name`\" pulumi-lang-yaml=\"`bucketName`\" pulumi-lang-java=\"`bucketName`\"\u003e`bucket_name`\u003c/span\u003e - (Required) (String) GCS bucket name to be mounted.\n\n### Example mounting Google Cloud Storage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst thisGs = new databricks.Mount(\"this_gs\", {\n    name: \"gs-mount\",\n    gs: {\n        serviceAccount: \"acc@company.iam.gserviceaccount.com\",\n        bucketName: \"mybucket\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis_gs = databricks.Mount(\"this_gs\",\n    name=\"gs-mount\",\n    gs={\n        \"service_account\": \"acc@company.iam.gserviceaccount.com\",\n        \"bucket_name\": \"mybucket\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var thisGs = new Databricks.Mount(\"this_gs\", new()\n    {\n        Name = \"gs-mount\",\n        Gs = new Databricks.Inputs.MountGsArgs\n        {\n            ServiceAccount = \"acc@company.iam.gserviceaccount.com\",\n            BucketName = \"mybucket\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMount(ctx, \"this_gs\", \u0026databricks.MountArgs{\n\t\t\tName: pulumi.String(\"gs-mount\"),\n\t\t\tGs: \u0026databricks.MountGsArgs{\n\t\t\t\tServiceAccount: pulumi.String(\"acc@company.iam.gserviceaccount.com\"),\n\t\t\t\tBucketName:     pulumi.String(\"mybucket\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport 
com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport com.pulumi.databricks.inputs.MountGsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var thisGs = new Mount(\"thisGs\", MountArgs.builder()\n            .name(\"gs-mount\")\n            .gs(MountGsArgs.builder()\n                .serviceAccount(\"acc@company.iam.gserviceaccount.com\")\n                .bucketName(\"mybucket\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  thisGs:\n    type: databricks:Mount\n    name: this_gs\n    properties:\n      name: gs-mount\n      gs:\n        serviceAccount: acc@company.iam.gserviceaccount.com\n        bucketName: mybucket\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## adl block\n\nThis block allows specifying parameters for mounting of the ADLS Gen1. The following arguments are required inside the \u003cspan pulumi-lang-nodejs=\"`adl`\" pulumi-lang-dotnet=\"`Adl`\" pulumi-lang-go=\"`adl`\" pulumi-lang-python=\"`adl`\" pulumi-lang-yaml=\"`adl`\" pulumi-lang-java=\"`adl`\"\u003e`adl`\u003c/span\u003e block:\n\n* \u003cspan pulumi-lang-nodejs=\"`clientId`\" pulumi-lang-dotnet=\"`ClientId`\" pulumi-lang-go=\"`clientId`\" pulumi-lang-python=\"`client_id`\" pulumi-lang-yaml=\"`clientId`\" pulumi-lang-java=\"`clientId`\"\u003e`client_id`\u003c/span\u003e - (Required) (String) This is the\u003cspan pulumi-lang-nodejs=\" clientId \" pulumi-lang-dotnet=\" ClientId \" pulumi-lang-go=\" clientId \" pulumi-lang-python=\" client_id \" pulumi-lang-yaml=\" clientId \" pulumi-lang-java=\" clientId \"\u003e client_id \u003c/span\u003efor the enterprise application for the service principal.\n* \u003cspan pulumi-lang-nodejs=\"`tenantId`\" pulumi-lang-dotnet=\"`TenantId`\" pulumi-lang-go=\"`tenantId`\" pulumi-lang-python=\"`tenant_id`\" pulumi-lang-yaml=\"`tenantId`\" pulumi-lang-java=\"`tenantId`\"\u003e`tenant_id`\u003c/span\u003e - (Optional) (String) This is your azure directory tenant id. It is required for creating the mount. 
(Could be omitted if Azure authentication is used, and we can extract \u003cspan pulumi-lang-nodejs=\"`tenantId`\" pulumi-lang-dotnet=\"`TenantId`\" pulumi-lang-go=\"`tenantId`\" pulumi-lang-python=\"`tenant_id`\" pulumi-lang-yaml=\"`tenantId`\" pulumi-lang-java=\"`tenantId`\"\u003e`tenant_id`\u003c/span\u003e from it)\n* \u003cspan pulumi-lang-nodejs=\"`clientSecretKey`\" pulumi-lang-dotnet=\"`ClientSecretKey`\" pulumi-lang-go=\"`clientSecretKey`\" pulumi-lang-python=\"`client_secret_key`\" pulumi-lang-yaml=\"`clientSecretKey`\" pulumi-lang-java=\"`clientSecretKey`\"\u003e`client_secret_key`\u003c/span\u003e - (Required) (String) This is the secret key in which your service principal/enterprise app client secret will be stored.\n* \u003cspan pulumi-lang-nodejs=\"`clientSecretScope`\" pulumi-lang-dotnet=\"`ClientSecretScope`\" pulumi-lang-go=\"`clientSecretScope`\" pulumi-lang-python=\"`client_secret_scope`\" pulumi-lang-yaml=\"`clientSecretScope`\" pulumi-lang-java=\"`clientSecretScope`\"\u003e`client_secret_scope`\u003c/span\u003e - (Required) (String) This is the secret scope in which your service principal/enterprise app client secret will be stored.\n\n* \u003cspan pulumi-lang-nodejs=\"`storageResourceName`\" pulumi-lang-dotnet=\"`StorageResourceName`\" pulumi-lang-go=\"`storageResourceName`\" pulumi-lang-python=\"`storage_resource_name`\" pulumi-lang-yaml=\"`storageResourceName`\" pulumi-lang-java=\"`storageResourceName`\"\u003e`storage_resource_name`\u003c/span\u003e - (Required) (String) The name of the storage resource in which the data is for ADLS gen 1. This is what you are trying to mount. (Could be omitted if \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e is provided)\n* \u003cspan pulumi-lang-nodejs=\"`sparkConfPrefix`\" pulumi-lang-dotnet=\"`SparkConfPrefix`\" pulumi-lang-go=\"`sparkConfPrefix`\" pulumi-lang-python=\"`spark_conf_prefix`\" pulumi-lang-yaml=\"`sparkConfPrefix`\" pulumi-lang-java=\"`sparkConfPrefix`\"\u003e`spark_conf_prefix`\u003c/span\u003e - (Optional) (String) This is the spark configuration prefix for adls gen 1 mount. The options are `fs.adl`, `dfs.adls`. Use `fs.adl` for runtime 6.0 and above for the clusters. Otherwise use `dfs.adls`. The default value is: `fs.adl`.\n* \u003cspan pulumi-lang-nodejs=\"`directory`\" pulumi-lang-dotnet=\"`Directory`\" pulumi-lang-go=\"`directory`\" pulumi-lang-python=\"`directory`\" pulumi-lang-yaml=\"`directory`\" pulumi-lang-java=\"`directory`\"\u003e`directory`\u003c/span\u003e - (Computed) (String) This is optional if you don't want to add an additional directory that you wish to mount. 
This must start with a \"/\".\n\n### Example mounting ADLS Gen1\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst mount = new databricks.Mount(\"mount\", {\n    name: \"{var.RANDOM}\",\n    adl: {\n        storageResourceName: \"{env.TEST_STORAGE_ACCOUNT_NAME}\",\n        tenantId: current.tenantId,\n        clientId: current.clientId,\n        clientSecretScope: terraform.name,\n        clientSecretKey: servicePrincipalKey.key,\n        sparkConfPrefix: \"fs.adl\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmount = databricks.Mount(\"mount\",\n    name=\"{var.RANDOM}\",\n    adl={\n        \"storage_resource_name\": \"{env.TEST_STORAGE_ACCOUNT_NAME}\",\n        \"tenant_id\": current[\"tenantId\"],\n        \"client_id\": current[\"clientId\"],\n        \"client_secret_scope\": terraform[\"name\"],\n        \"client_secret_key\": service_principal_key[\"key\"],\n        \"spark_conf_prefix\": \"fs.adl\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var mount = new Databricks.Mount(\"mount\", new()\n    {\n        Name = \"{var.RANDOM}\",\n        Adl = new Databricks.Inputs.MountAdlArgs\n        {\n            StorageResourceName = \"{env.TEST_STORAGE_ACCOUNT_NAME}\",\n            TenantId = current.TenantId,\n            ClientId = current.ClientId,\n            ClientSecretScope = terraform.Name,\n            ClientSecretKey = servicePrincipalKey.Key,\n            SparkConfPrefix = \"fs.adl\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMount(ctx, \"mount\", \u0026databricks.MountArgs{\n\t\t\tName: pulumi.String(\"{var.RANDOM}\"),\n\t\t\tAdl: \u0026databricks.MountAdlArgs{\n\t\t\t\tStorageResourceName: pulumi.String(\"{env.TEST_STORAGE_ACCOUNT_NAME}\"),\n\t\t\t\tTenantId:            pulumi.Any(current.TenantId),\n\t\t\t\tClientId:            pulumi.Any(current.ClientId),\n\t\t\t\tClientSecretScope:   pulumi.Any(terraform.Name),\n\t\t\t\tClientSecretKey:     pulumi.Any(servicePrincipalKey.Key),\n\t\t\t\tSparkConfPrefix:     pulumi.String(\"fs.adl\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport com.pulumi.databricks.inputs.MountAdlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var mount = new Mount(\"mount\", MountArgs.builder()\n            .name(\"{var.RANDOM}\")\n            .adl(MountAdlArgs.builder()\n                .storageResourceName(\"{env.TEST_STORAGE_ACCOUNT_NAME}\")\n                .tenantId(current.tenantId())\n                .clientId(current.clientId())\n                .clientSecretScope(terraform.name())\n          
      .clientSecretKey(servicePrincipalKey.key())\n                .sparkConfPrefix(\"fs.adl\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  mount:\n    type: databricks:Mount\n    properties:\n      name: '{var.RANDOM}'\n      adl:\n        storageResourceName: '{env.TEST_STORAGE_ACCOUNT_NAME}'\n        tenantId: ${current.tenantId}\n        clientId: ${current.clientId}\n        clientSecretScope: ${terraform.name}\n        clientSecretKey: ${servicePrincipalKey.key}\n        sparkConfPrefix: fs.adl\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## wasb block\n\nThis block allows specifying parameters for mounting of the Azure Blob Storage. The following arguments are required inside the \u003cspan pulumi-lang-nodejs=\"`wasb`\" pulumi-lang-dotnet=\"`Wasb`\" pulumi-lang-go=\"`wasb`\" pulumi-lang-python=\"`wasb`\" pulumi-lang-yaml=\"`wasb`\" pulumi-lang-java=\"`wasb`\"\u003e`wasb`\u003c/span\u003e block:\n\n* \u003cspan pulumi-lang-nodejs=\"`authType`\" pulumi-lang-dotnet=\"`AuthType`\" pulumi-lang-go=\"`authType`\" pulumi-lang-python=\"`auth_type`\" pulumi-lang-yaml=\"`authType`\" pulumi-lang-java=\"`authType`\"\u003e`auth_type`\u003c/span\u003e - (Required) (String) This is the auth type for blob storage. This can either be SAS tokens (`SAS`) or account access keys (`ACCESS_KEY`).\n* \u003cspan pulumi-lang-nodejs=\"`tokenSecretScope`\" pulumi-lang-dotnet=\"`TokenSecretScope`\" pulumi-lang-go=\"`tokenSecretScope`\" pulumi-lang-python=\"`token_secret_scope`\" pulumi-lang-yaml=\"`tokenSecretScope`\" pulumi-lang-java=\"`tokenSecretScope`\"\u003e`token_secret_scope`\u003c/span\u003e - (Required) (String) This is the secret scope in which your auth type token is stored.\n* \u003cspan pulumi-lang-nodejs=\"`tokenSecretKey`\" pulumi-lang-dotnet=\"`TokenSecretKey`\" pulumi-lang-go=\"`tokenSecretKey`\" pulumi-lang-python=\"`token_secret_key`\" pulumi-lang-yaml=\"`tokenSecretKey`\" pulumi-lang-java=\"`tokenSecretKey`\"\u003e`token_secret_key`\u003c/span\u003e - (Required) (String) This is the secret key in which your auth type token is stored.\n* \u003cspan pulumi-lang-nodejs=\"`containerName`\" pulumi-lang-dotnet=\"`ContainerName`\" pulumi-lang-go=\"`containerName`\" pulumi-lang-python=\"`container_name`\" pulumi-lang-yaml=\"`containerName`\" pulumi-lang-java=\"`containerName`\"\u003e`container_name`\u003c/span\u003e - (Required) (String) The container in which the data is. This is what you are trying to mount. (Could be omitted if \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e is provided)\n* \u003cspan pulumi-lang-nodejs=\"`storageAccountName`\" pulumi-lang-dotnet=\"`StorageAccountName`\" pulumi-lang-go=\"`storageAccountName`\" pulumi-lang-python=\"`storage_account_name`\" pulumi-lang-yaml=\"`storageAccountName`\" pulumi-lang-java=\"`storageAccountName`\"\u003e`storage_account_name`\u003c/span\u003e - (Required) (String) The name of the storage resource in which the data is. 
(Could be omitted if \u003cspan pulumi-lang-nodejs=\"`resourceId`\" pulumi-lang-dotnet=\"`ResourceId`\" pulumi-lang-go=\"`resourceId`\" pulumi-lang-python=\"`resource_id`\" pulumi-lang-yaml=\"`resourceId`\" pulumi-lang-java=\"`resourceId`\"\u003e`resource_id`\u003c/span\u003e is provided)\n* \u003cspan pulumi-lang-nodejs=\"`directory`\" pulumi-lang-dotnet=\"`Directory`\" pulumi-lang-go=\"`directory`\" pulumi-lang-python=\"`directory`\" pulumi-lang-yaml=\"`directory`\" pulumi-lang-java=\"`directory`\"\u003e`directory`\u003c/span\u003e - (Computed) (String) This is optional if you don't want to add an additional directory that you wish to mount. This must start with a \"/\".\n\n### Example mounting Azure Blob Storage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as azurerm from \"@pulumi/azurerm\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst blobaccount = new azurerm.index.StorageAccount(\"blobaccount\", {\n    name: `${prefix}blob`,\n    resourceGroupName: resourceGroupName,\n    location: resourceGroupLocation,\n    accountTier: \"Standard\",\n    accountReplicationType: \"LRS\",\n    accountKind: \"StorageV2\",\n});\nconst marketing = new azurerm.index.StorageContainer(\"marketing\", {\n    name: \"marketing\",\n    storageAccountName: blobaccount.name,\n    containerAccessType: \"private\",\n});\nconst terraform = new databricks.SecretScope(\"terraform\", {\n    name: \"application\",\n    initialManagePrincipal: \"users\",\n});\nconst storageKey = new databricks.Secret(\"storage_key\", {\n    key: \"blob_storage_key\",\n    stringValue: blobaccount.primaryAccessKey,\n    scope: terraform.name,\n});\nconst marketingMount = new databricks.Mount(\"marketing\", {\n    name: \"marketing\",\n    wasb: {\n        containerName: marketing.name,\n        storageAccountName: blobaccount.name,\n        authType: \"ACCESS_KEY\",\n        tokenSecretScope: terraform.name,\n        tokenSecretKey: storageKey.key,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_azurerm as azurerm\nimport pulumi_databricks as databricks\n\nblobaccount = azurerm.index.StorageAccount(\"blobaccount\",\n    name=f\"{prefix}blob\",\n    resource_group_name=resource_group_name,\n    location=resource_group_location,\n    account_tier=\"Standard\",\n    account_replication_type=\"LRS\",\n    account_kind=\"StorageV2\")\nmarketing = azurerm.index.StorageContainer(\"marketing\",\n    name=\"marketing\",\n    storage_account_name=blobaccount.name,\n    container_access_type=\"private\")\nterraform = databricks.SecretScope(\"terraform\",\n    name=\"application\",\n    initial_manage_principal=\"users\")\nstorage_key = databricks.Secret(\"storage_key\",\n    key=\"blob_storage_key\",\n    string_value=blobaccount[\"primaryAccessKey\"],\n    scope=terraform.name)\nmarketing_mount = databricks.Mount(\"marketing\",\n    name=\"marketing\",\n    wasb={\n        \"container_name\": marketing[\"name\"],\n        \"storage_account_name\": blobaccount[\"name\"],\n        \"auth_type\": \"ACCESS_KEY\",\n        \"token_secret_scope\": terraform.name,\n        \"token_secret_key\": storage_key.key,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Azurerm = Pulumi.Azurerm;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var blobaccount = new Azurerm.Index.StorageAccount(\"blobaccount\", new()\n    {\n        Name = $\"{prefix}blob\",\n        ResourceGroupName = 
resourceGroupName,\n        Location = resourceGroupLocation,\n        AccountTier = \"Standard\",\n        AccountReplicationType = \"LRS\",\n        AccountKind = \"StorageV2\",\n    });\n\n    var marketing = new Azurerm.Index.StorageContainer(\"marketing\", new()\n    {\n        Name = \"marketing\",\n        StorageAccountName = blobaccount.Name,\n        ContainerAccessType = \"private\",\n    });\n\n    var terraform = new Databricks.SecretScope(\"terraform\", new()\n    {\n        Name = \"application\",\n        InitialManagePrincipal = \"users\",\n    });\n\n    var storageKey = new Databricks.Secret(\"storage_key\", new()\n    {\n        Key = \"blob_storage_key\",\n        StringValue = blobaccount.PrimaryAccessKey,\n        Scope = terraform.Name,\n    });\n\n    var marketingMount = new Databricks.Mount(\"marketing\", new()\n    {\n        Name = \"marketing\",\n        Wasb = new Databricks.Inputs.MountWasbArgs\n        {\n            ContainerName = marketing.Name,\n            StorageAccountName = blobaccount.Name,\n            AuthType = \"ACCESS_KEY\",\n            TokenSecretScope = terraform.Name,\n            TokenSecretKey = storageKey.Key,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-azurerm/sdk/go/azurerm\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tblobaccount, err := azurerm.NewStorageAccount(ctx, \"blobaccount\", \u0026azurerm.StorageAccountArgs{\n\t\t\tName:                   fmt.Sprintf(\"%vblob\", prefix),\n\t\t\tResourceGroupName:      resourceGroupName,\n\t\t\tLocation:               resourceGroupLocation,\n\t\t\tAccountTier:            \"Standard\",\n\t\t\tAccountReplicationType: \"LRS\",\n\t\t\tAccountKind:            \"StorageV2\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmarketing, err := azurerm.NewStorageContainer(ctx, \"marketing\", \u0026azurerm.StorageContainerArgs{\n\t\t\tName:                \"marketing\",\n\t\t\tStorageAccountName:  blobaccount.Name,\n\t\t\tContainerAccessType: \"private\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tterraform, err := databricks.NewSecretScope(ctx, \"terraform\", \u0026databricks.SecretScopeArgs{\n\t\t\tName:                   pulumi.String(\"application\"),\n\t\t\tInitialManagePrincipal: pulumi.String(\"users\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tstorageKey, err := databricks.NewSecret(ctx, \"storage_key\", \u0026databricks.SecretArgs{\n\t\t\tKey:         pulumi.String(\"blob_storage_key\"),\n\t\t\tStringValue: blobaccount.PrimaryAccessKey,\n\t\t\tScope:       terraform.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMount(ctx, \"marketing\", \u0026databricks.MountArgs{\n\t\t\tName: pulumi.String(\"marketing\"),\n\t\t\tWasb: \u0026databricks.MountWasbArgs{\n\t\t\t\tContainerName:      marketing.Name,\n\t\t\t\tStorageAccountName: blobaccount.Name,\n\t\t\t\tAuthType:           pulumi.String(\"ACCESS_KEY\"),\n\t\t\t\tTokenSecretScope:   terraform.Name,\n\t\t\t\tTokenSecretKey:     storageKey.Key,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.azurerm.StorageAccount;\nimport com.pulumi.azurerm.StorageAccountArgs;\nimport 
com.pulumi.azurerm.StorageContainer;\nimport com.pulumi.azurerm.StorageContainerArgs;\nimport com.pulumi.databricks.SecretScope;\nimport com.pulumi.databricks.SecretScopeArgs;\nimport com.pulumi.databricks.Secret;\nimport com.pulumi.databricks.SecretArgs;\nimport com.pulumi.databricks.Mount;\nimport com.pulumi.databricks.MountArgs;\nimport com.pulumi.databricks.inputs.MountWasbArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var blobaccount = new StorageAccount(\"blobaccount\", StorageAccountArgs.builder()\n            .name(String.format(\"%sblob\", prefix))\n            .resourceGroupName(resourceGroupName)\n            .location(resourceGroupLocation)\n            .accountTier(\"Standard\")\n            .accountReplicationType(\"LRS\")\n            .accountKind(\"StorageV2\")\n            .build());\n\n        var marketing = new StorageContainer(\"marketing\", StorageContainerArgs.builder()\n            .name(\"marketing\")\n            .storageAccountName(blobaccount.name())\n            .containerAccessType(\"private\")\n            .build());\n\n        var terraform = new SecretScope(\"terraform\", SecretScopeArgs.builder()\n            .name(\"application\")\n            .initialManagePrincipal(\"users\")\n            .build());\n\n        var storageKey = new Secret(\"storageKey\", SecretArgs.builder()\n            .key(\"blob_storage_key\")\n            .stringValue(blobaccount.primaryAccessKey())\n            .scope(terraform.name())\n            .build());\n\n        var marketingMount = new Mount(\"marketingMount\", MountArgs.builder()\n            .name(\"marketing\")\n            .wasb(MountWasbArgs.builder()\n                .containerName(marketing.name())\n                .storageAccountName(blobaccount.name())\n                .authType(\"ACCESS_KEY\")\n                .tokenSecretScope(terraform.name())\n                .tokenSecretKey(storageKey.key())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  blobaccount:\n    type: azurerm:StorageAccount\n    properties:\n      name: ${prefix}blob\n      resourceGroupName: ${resourceGroupName}\n      location: ${resourceGroupLocation}\n      accountTier: Standard\n      accountReplicationType: LRS\n      accountKind: StorageV2\n  marketing:\n    type: azurerm:StorageContainer\n    properties:\n      name: marketing\n      storageAccountName: ${blobaccount.name}\n      containerAccessType: private\n  terraform:\n    type: databricks:SecretScope\n    properties:\n      name: application\n      initialManagePrincipal: users\n  storageKey:\n    type: databricks:Secret\n    name: storage_key\n    properties:\n      key: blob_storage_key\n      stringValue: ${blobaccount.primaryAccessKey}\n      scope: ${terraform.name}\n  marketingMount:\n    type: databricks:Mount\n    name: marketing\n    properties:\n      name: marketing\n      wasb:\n        containerName: ${marketing.name}\n        storageAccountName: ${blobaccount.name}\n        authType: ACCESS_KEY\n        tokenSecretScope: ${terraform.name}\n        tokenSecretKey: ${storageKey.key}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n* \u003cspan pulumi-lang-nodejs=\"`providerConfig`\" pulumi-lang-dotnet=\"`ProviderConfig`\" pulumi-lang-go=\"`providerConfig`\" 
pulumi-lang-python=\"`provider_config`\" pulumi-lang-yaml=\"`providerConfig`\" pulumi-lang-java=\"`providerConfig`\"\u003e`provider_config`\u003c/span\u003e - (Optional) Configure the provider for management through account provider. This block consists of the following fields:\n  * \u003cspan pulumi-lang-nodejs=\"`workspaceId`\" pulumi-lang-dotnet=\"`WorkspaceId`\" pulumi-lang-go=\"`workspaceId`\" pulumi-lang-python=\"`workspace_id`\" pulumi-lang-yaml=\"`workspaceId`\" pulumi-lang-java=\"`workspaceId`\"\u003e`workspace_id`\u003c/span\u003e - (Required) Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.\n\n## Migration from other mount resources\n\nMigration from the specific mount resource is straightforward:\n\n* rename \u003cspan pulumi-lang-nodejs=\"`mountName`\" pulumi-lang-dotnet=\"`MountName`\" pulumi-lang-go=\"`mountName`\" pulumi-lang-python=\"`mount_name`\" pulumi-lang-yaml=\"`mountName`\" pulumi-lang-java=\"`mountName`\"\u003e`mount_name`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e\n* wrap storage-specific settings (\u003cspan pulumi-lang-nodejs=\"`containerName`\" pulumi-lang-dotnet=\"`ContainerName`\" pulumi-lang-go=\"`containerName`\" pulumi-lang-python=\"`container_name`\" pulumi-lang-yaml=\"`containerName`\" pulumi-lang-java=\"`containerName`\"\u003e`container_name`\u003c/span\u003e, ...) into corresponding block (\u003cspan pulumi-lang-nodejs=\"`adl`\" pulumi-lang-dotnet=\"`Adl`\" pulumi-lang-go=\"`adl`\" pulumi-lang-python=\"`adl`\" pulumi-lang-yaml=\"`adl`\" pulumi-lang-java=\"`adl`\"\u003e`adl`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`abfs`\" pulumi-lang-dotnet=\"`Abfs`\" pulumi-lang-go=\"`abfs`\" pulumi-lang-python=\"`abfs`\" pulumi-lang-yaml=\"`abfs`\" pulumi-lang-java=\"`abfs`\"\u003e`abfs`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`s3`\" pulumi-lang-dotnet=\"`S3`\" pulumi-lang-go=\"`s3`\" pulumi-lang-python=\"`s3`\" pulumi-lang-yaml=\"`s3`\" pulumi-lang-java=\"`s3`\"\u003e`s3`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`wasbs`\" pulumi-lang-dotnet=\"`Wasbs`\" pulumi-lang-go=\"`wasbs`\" pulumi-lang-python=\"`wasbs`\" pulumi-lang-yaml=\"`wasbs`\" pulumi-lang-java=\"`wasbs`\"\u003e`wasbs`\u003c/span\u003e)\n* for S3 mounts, rename \u003cspan pulumi-lang-nodejs=\"`s3BucketName`\" pulumi-lang-dotnet=\"`S3BucketName`\" pulumi-lang-go=\"`s3BucketName`\" pulumi-lang-python=\"`s3_bucket_name`\" pulumi-lang-yaml=\"`s3BucketName`\" pulumi-lang-java=\"`s3BucketName`\"\u003e`s3_bucket_name`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`bucketName`\" pulumi-lang-dotnet=\"`BucketName`\" pulumi-lang-go=\"`bucketName`\" pulumi-lang-python=\"`bucket_name`\" pulumi-lang-yaml=\"`bucketName`\" pulumi-lang-java=\"`bucketName`\"\u003e`bucket_name`\u003c/span\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsBucketPolicy \" pulumi-lang-dotnet=\" databricks.getAwsBucketPolicy \" pulumi-lang-go=\" getAwsBucketPolicy \" pulumi-lang-python=\" get_aws_bucket_policy \" pulumi-lang-yaml=\" databricks.getAwsBucketPolicy \" pulumi-lang-java=\" databricks.getAwsBucketPolicy \"\u003e databricks.getAwsBucketPolicy \u003c/span\u003edata to configure a simple access policy for 
AWS S3 buckets, so that Databricks can access data in them.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003edata to get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.getDbfsFilePaths \" pulumi-lang-dotnet=\" databricks.getDbfsFilePaths \" pulumi-lang-go=\" getDbfsFilePaths \" pulumi-lang-python=\" get_dbfs_file_paths \" pulumi-lang-yaml=\" databricks.getDbfsFilePaths \" pulumi-lang-java=\" databricks.getDbfsFilePaths \"\u003e databricks.getDbfsFilePaths \u003c/span\u003edata to get a list of file names from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003eto manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003ewith and access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n\n## Import\n\n!\u003e Importing this resource is not currently 
supported.\n\n","properties":{"abfs":{"$ref":"#/types/databricks:index/MountAbfs:MountAbfs"},"adl":{"$ref":"#/types/databricks:index/MountAdl:MountAdl"},"clusterId":{"type":"string"},"encryptionType":{"type":"string"},"extraConfigs":{"type":"object","additionalProperties":{"type":"string"}},"gs":{"$ref":"#/types/databricks:index/MountGs:MountGs"},"name":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/MountProviderConfig:MountProviderConfig"},"resourceId":{"type":"string"},"s3":{"$ref":"#/types/databricks:index/MountS3:MountS3"},"source":{"type":"string","description":"(String) HDFS-compatible url\n"},"uri":{"type":"string"},"wasb":{"$ref":"#/types/databricks:index/MountWasb:MountWasb"}},"required":["clusterId","name","source"],"inputProperties":{"abfs":{"$ref":"#/types/databricks:index/MountAbfs:MountAbfs","willReplaceOnChanges":true},"adl":{"$ref":"#/types/databricks:index/MountAdl:MountAdl","willReplaceOnChanges":true},"clusterId":{"type":"string","willReplaceOnChanges":true},"encryptionType":{"type":"string","willReplaceOnChanges":true},"extraConfigs":{"type":"object","additionalProperties":{"type":"string"},"willReplaceOnChanges":true},"gs":{"$ref":"#/types/databricks:index/MountGs:MountGs","willReplaceOnChanges":true},"name":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MountProviderConfig:MountProviderConfig","willReplaceOnChanges":true},"resourceId":{"type":"string","willReplaceOnChanges":true},"s3":{"$ref":"#/types/databricks:index/MountS3:MountS3","willReplaceOnChanges":true},"uri":{"type":"string","willReplaceOnChanges":true},"wasb":{"$ref":"#/types/databricks:index/MountWasb:MountWasb","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering Mount resources.\n","properties":{"abfs":{"$ref":"#/types/databricks:index/MountAbfs:MountAbfs","willReplaceOnChanges":true},"adl":{"$ref":"#/types/databricks:index/MountAdl:MountAdl","willReplaceOnChanges":true},"clusterId":{"type":"string","willReplaceOnChanges":true},"encryptionType":{"type":"string","willReplaceOnChanges":true},"extraConfigs":{"type":"object","additionalProperties":{"type":"string"},"willReplaceOnChanges":true},"gs":{"$ref":"#/types/databricks:index/MountGs:MountGs","willReplaceOnChanges":true},"name":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/MountProviderConfig:MountProviderConfig","willReplaceOnChanges":true},"resourceId":{"type":"string","willReplaceOnChanges":true},"s3":{"$ref":"#/types/databricks:index/MountS3:MountS3","willReplaceOnChanges":true},"source":{"type":"string","description":"(String) HDFS-compatible url\n"},"uri":{"type":"string","willReplaceOnChanges":true},"wasb":{"$ref":"#/types/databricks:index/MountWasb:MountWasb","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsCredentials:MwsCredentials":{"description":"This resource to configure the cross-account role for creation of new workspaces within AWS.\n\n\u003e This resource can only be used with an account-level provider!\n\nPlease follow this complete runnable example with new VPC and new workspace setup. 
Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host=\"https://accounts.cloud.databricks.com/\"`) and another for the workspace you've created with \u003cspan pulumi-lang-nodejs=\"`databricks.MwsWorkspaces`\" pulumi-lang-dotnet=\"`databricks.MwsWorkspaces`\" pulumi-lang-go=\"`MwsWorkspaces`\" pulumi-lang-python=\"`MwsWorkspaces`\" pulumi-lang-yaml=\"`databricks.MwsWorkspaces`\" pulumi-lang-java=\"`databricks.MwsWorkspaces`\"\u003e`databricks.MwsWorkspaces`\u003c/span\u003e resource. If you want both creation of workspaces \u0026 clusters within workspace within the same terraform module (essentially same directory), you should use the provider aliasing feature of Pulumi. We strongly recommend having one terraform module for creation of workspace + PAT token and the rest in different modules.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\n// Names of created resources will be prefixed with this value\nconst prefix = config.requireObject\u003cany\u003e(\"prefix\");\nconst _this = databricks.getAwsAssumeRolePolicy({\n    externalId: databricksAccountId,\n});\nconst crossAccountRole = new aws.index.IamRole(\"cross_account_role\", {\n    name: `${prefix}-crossaccount`,\n    assumeRolePolicy: _this.json,\n    tags: tags,\n});\nconst thisGetAwsCrossAccountPolicy = databricks.getAwsCrossAccountPolicy({});\nconst thisIamRolePolicy = new aws.index.IamRolePolicy(\"this\", {\n    name: `${prefix}-policy`,\n    role: crossAccountRole.id,\n    policy: thisGetAwsCrossAccountPolicy.json,\n});\nconst thisMwsCredentials = new databricks.MwsCredentials(\"this\", {\n    credentialsName: `${prefix}-creds`,\n    roleArn: crossAccountRole.arn,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\n# Names of created resources will be prefixed with this value\nprefix = config.require_object(\"prefix\")\nthis = databricks.get_aws_assume_role_policy(external_id=databricks_account_id)\ncross_account_role = aws.index.IamRole(\"cross_account_role\",\n    name=f{prefix}-crossaccount,\n    assume_role_policy=this.json,\n    tags=tags)\nthis_get_aws_cross_account_policy = databricks.get_aws_cross_account_policy()\nthis_iam_role_policy = aws.index.IamRolePolicy(\"this\",\n    name=f{prefix}-policy,\n    role=cross_account_role.id,\n    policy=this_get_aws_cross_account_policy.json)\nthis_mws_credentials = databricks.MwsCredentials(\"this\",\n    credentials_name=f\"{prefix}-creds\",\n    role_arn=cross_account_role[\"arn\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = 
config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    // Names of created resources will be prefixed with this value\n    var prefix = config.RequireObject\u003cdynamic\u003e(\"prefix\");\n    var @this = Databricks.GetAwsAssumeRolePolicy.Invoke(new()\n    {\n        ExternalId = databricksAccountId,\n    });\n\n    var crossAccountRole = new Aws.Index.IamRole(\"cross_account_role\", new()\n    {\n        Name = $\"{prefix}-crossaccount\",\n        AssumeRolePolicy = @this.Apply(getAwsAssumeRolePolicyResult =\u003e getAwsAssumeRolePolicyResult.Json),\n        Tags = tags,\n    });\n\n    var thisGetAwsCrossAccountPolicy = Databricks.GetAwsCrossAccountPolicy.Invoke();\n\n    var thisIamRolePolicy = new Aws.Index.IamRolePolicy(\"this\", new()\n    {\n        Name = $\"{prefix}-policy\",\n        Role = crossAccountRole.Id,\n        Policy = thisGetAwsCrossAccountPolicy.Apply(getAwsCrossAccountPolicyResult =\u003e getAwsCrossAccountPolicyResult.Json),\n    });\n\n    var thisMwsCredentials = new Databricks.MwsCredentials(\"this\", new()\n    {\n        CredentialsName = $\"{prefix}-creds\",\n        RoleArn = crossAccountRole.Arn,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\t// Names of created resources will be prefixed with this value\n\t\tprefix := cfg.RequireObject(\"prefix\")\n\t\tthis, err := databricks.GetAwsAssumeRolePolicy(ctx, \u0026databricks.GetAwsAssumeRolePolicyArgs{\n\t\t\tExternalId: databricksAccountId,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcrossAccountRole, err := aws.NewIamRole(ctx, \"cross_account_role\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-crossaccount\", prefix),\n\t\t\tAssumeRolePolicy: this.Json,\n\t\t\tTags:             tags,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsCrossAccountPolicy, err := databricks.GetAwsCrossAccountPolicy(ctx, \u0026databricks.GetAwsCrossAccountPolicyArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewIamRolePolicy(ctx, \"this\", \u0026aws.IamRolePolicyArgs{\n\t\t\tName:   fmt.Sprintf(\"%v-policy\", prefix),\n\t\t\tRole:   crossAccountRole.Id,\n\t\t\tPolicy: thisGetAwsCrossAccountPolicy.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsCredentials(ctx, \"this\", \u0026databricks.MwsCredentialsArgs{\n\t\t\tCredentialsName: pulumi.Sprintf(\"%v-creds\", prefix),\n\t\t\tRoleArn:         crossAccountRole.Arn,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsAssumeRolePolicyArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.databricks.inputs.GetAwsCrossAccountPolicyArgs;\nimport com.pulumi.aws.IamRolePolicy;\nimport com.pulumi.aws.IamRolePolicyArgs;\nimport 
com.pulumi.databricks.MwsCredentials;\nimport com.pulumi.databricks.MwsCredentialsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var prefix = config.get(\"prefix\");\n        final var this = DatabricksFunctions.getAwsAssumeRolePolicy(GetAwsAssumeRolePolicyArgs.builder()\n            .externalId(databricksAccountId)\n            .build());\n\n        var crossAccountRole = new IamRole(\"crossAccountRole\", IamRoleArgs.builder()\n            .name(String.format(\"%s-crossaccount\", prefix))\n            .assumeRolePolicy(this_.json())\n            .tags(tags)\n            .build());\n\n        final var thisGetAwsCrossAccountPolicy = DatabricksFunctions.getAwsCrossAccountPolicy(GetAwsCrossAccountPolicyArgs.builder()\n            .build());\n\n        var thisIamRolePolicy = new IamRolePolicy(\"thisIamRolePolicy\", IamRolePolicyArgs.builder()\n            .name(String.format(\"%s-policy\", prefix))\n            .role(crossAccountRole.id())\n            .policy(thisGetAwsCrossAccountPolicy.json())\n            .build());\n\n        var thisMwsCredentials = new MwsCredentials(\"thisMwsCredentials\", MwsCredentialsArgs.builder()\n            .credentialsName(String.format(\"%s-creds\", prefix))\n            .roleArn(crossAccountRole.arn())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\n  prefix:\n    type: dynamic\nresources:\n  crossAccountRole:\n    type: aws:IamRole\n    name: cross_account_role\n    properties:\n      name: ${prefix}-crossaccount\n      assumeRolePolicy: ${this.json}\n      tags: ${tags}\n  thisIamRolePolicy:\n    type: aws:IamRolePolicy\n    name: this\n    properties:\n      name: ${prefix}-policy\n      role: ${crossAccountRole.id}\n      policy: ${thisGetAwsCrossAccountPolicy.json}\n  thisMwsCredentials:\n    type: databricks:MwsCredentials\n    name: this\n    properties:\n      credentialsName: ${prefix}-creds\n      roleArn: ${crossAccountRole.arn}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsAssumeRolePolicy\n      arguments:\n        externalId: ${databricksAccountId}\n  thisGetAwsCrossAccountPolicy:\n    fn::invoke:\n      function: databricks:getAwsCrossAccountPolicy\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003eto configure KMS keys for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsLogDelivery \" pulumi-lang-dotnet=\" databricks.MwsLogDelivery \" pulumi-lang-go=\" MwsLogDelivery \" pulumi-lang-python=\" MwsLogDelivery \" pulumi-lang-yaml=\" databricks.MwsLogDelivery \" pulumi-lang-java=\" databricks.MwsLogDelivery 
\"\u003e databricks.MwsLogDelivery \u003c/span\u003eto configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003eto configure root bucket new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n","properties":{"accountId":{"type":"string","description":"**(Deprecated)** Maintained for backwards compatibility and will be removed in a later version. It should now be specified under a provider instance where `host = \"https://accounts.cloud.databricks.com\"`\n","deprecationMessage":"\u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e should be set as part of the Databricks Config, not in the resource."},"creationTime":{"type":"integer","description":"(Integer) time of credentials registration\n"},"credentialsId":{"type":"string","description":"(String) identifier of credentials\n"},"credentialsName":{"type":"string","description":"name of credentials to register\n"},"externalId":{"type":"string"},"roleArn":{"type":"string","description":"ARN of cross-account role\n"}},"required":["creationTime","credentialsId","credentialsName","externalId","roleArn"],"inputProperties":{"accountId":{"type":"string","description":"**(Deprecated)** Maintained for backwards compatibility and will be removed in a later version. 
It should now be specified under a provider instance where `host = \"https://accounts.cloud.databricks.com\"`\n","deprecationMessage":"\u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e should be set as part of the Databricks Config, not in the resource.","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) time of credentials registration\n"},"credentialsId":{"type":"string","description":"(String) identifier of credentials\n"},"credentialsName":{"type":"string","description":"name of credentials to register\n","willReplaceOnChanges":true},"externalId":{"type":"string"},"roleArn":{"type":"string","description":"ARN of cross-account role\n","willReplaceOnChanges":true}},"requiredInputs":["credentialsName","roleArn"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsCredentials resources.\n","properties":{"accountId":{"type":"string","description":"**(Deprecated)** Maintained for backwards compatibility and will be removed in a later version. It should now be specified under a provider instance where `host = \"https://accounts.cloud.databricks.com\"`\n","deprecationMessage":"\u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e should be set as part of the Databricks Config, not in the resource.","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) time of credentials registration\n"},"credentialsId":{"type":"string","description":"(String) identifier of credentials\n"},"credentialsName":{"type":"string","description":"name of credentials to register\n","willReplaceOnChanges":true},"externalId":{"type":"string"},"roleArn":{"type":"string","description":"ARN of cross-account role\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsCustomerManagedKeys:MwsCustomerManagedKeys":{"description":"This resource to configure KMS keys for new workspaces within AWS or GCP. This is to support the following features:\n\n* [Customer-managed keys for managed services](https://docs.databricks.com/security/keys/customer-managed-keys-managed-services-aws.html): Encrypt the workspace's managed services data in the control plane, including notebooks, secrets, Databricks SQL queries, and Databricks SQL query history  with a CMK.\n* [Customer-managed keys for workspace storage](https://docs.databricks.com/security/keys/customer-managed-keys-storage-aws.html): Encrypt the workspace's root S3 bucket and clusters' EBS volumes with a CMK.\n\n\u003e This resource can only be used with an account-level provider!\n\nPlease follow this complete runnable example with new VPC and new workspace setup. 
Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host=\"https://accounts.cloud.databricks.com/\"`) and another for the workspace you've created with\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource. If you want both creation of workspaces \u0026 clusters within workspace within the same terraform module (essentially same directory), you should use the provider aliasing feature of Pulumi. We strongly recommend having one Pulumi module for creation of workspace + PAT token and the rest in different modules.\n\n## Example Usage\n\n\u003e If you've used the resource before, please add \u003cspan pulumi-lang-nodejs=\"`useCases \" pulumi-lang-dotnet=\"`UseCases \" pulumi-lang-go=\"`useCases \" pulumi-lang-python=\"`use_cases \" pulumi-lang-yaml=\"`useCases \" pulumi-lang-java=\"`useCases \"\u003e`use_cases \u003c/span\u003e= [\"MANAGED_SERVICES\"]` to keep the previous behaviour.\n\n### Customer-managed key for managed services\n\nYou must configure this during workspace creation\n\n### For GCP\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\n// Id of a google_kms_crypto_key\nconst cmekResourceId = config.requireObject\u003cany\u003e(\"cmekResourceId\");\nconst managedServices = new databricks.MwsCustomerManagedKeys(\"managed_services\", {\n    accountId: databricksAccountId,\n    gcpKeyInfo: {\n        kmsKeyId: cmekResourceId,\n    },\n    useCases: [\"MANAGED_SERVICES\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\n# Id of a google_kms_crypto_key\ncmek_resource_id = config.require_object(\"cmekResourceId\")\nmanaged_services = databricks.MwsCustomerManagedKeys(\"managed_services\",\n    account_id=databricks_account_id,\n    gcp_key_info={\n        \"kms_key_id\": cmek_resource_id,\n    },\n    use_cases=[\"MANAGED_SERVICES\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    // Id of a google_kms_crypto_key\n    var cmekResourceId = config.RequireObject\u003cdynamic\u003e(\"cmekResourceId\");\n    var managedServices = new Databricks.MwsCustomerManagedKeys(\"managed_services\", new()\n    {\n        AccountId = databricksAccountId,\n        GcpKeyInfo = new Databricks.Inputs.MwsCustomerManagedKeysGcpKeyInfoArgs\n        {\n            KmsKeyId = cmekResourceId,\n        },\n        UseCases = 
new[]\n        {\n            \"MANAGED_SERVICES\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\t// Id of a google_kms_crypto_key\n\t\tcmekResourceId := cfg.RequireObject(\"cmekResourceId\")\n\t\t_, err := databricks.NewMwsCustomerManagedKeys(ctx, \"managed_services\", \u0026databricks.MwsCustomerManagedKeysArgs{\n\t\t\tAccountId: pulumi.Any(databricksAccountId),\n\t\t\tGcpKeyInfo: \u0026databricks.MwsCustomerManagedKeysGcpKeyInfoArgs{\n\t\t\t\tKmsKeyId: pulumi.Any(cmekResourceId),\n\t\t\t},\n\t\t\tUseCases: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"MANAGED_SERVICES\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsCustomerManagedKeys;\nimport com.pulumi.databricks.MwsCustomerManagedKeysArgs;\nimport com.pulumi.databricks.inputs.MwsCustomerManagedKeysGcpKeyInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var cmekResourceId = config.get(\"cmekResourceId\");\n        var managedServices = new MwsCustomerManagedKeys(\"managedServices\", MwsCustomerManagedKeysArgs.builder()\n            .accountId(databricksAccountId)\n            .gcpKeyInfo(MwsCustomerManagedKeysGcpKeyInfoArgs.builder()\n                .kmsKeyId(cmekResourceId)\n                .build())\n            .useCases(\"MANAGED_SERVICES\")\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\n  cmekResourceId:\n    type: dynamic\nresources:\n  managedServices:\n    type: databricks:MwsCustomerManagedKeys\n    name: managed_services\n    properties:\n      accountId: ${databricksAccountId}\n      gcpKeyInfo:\n        kmsKeyId: ${cmekResourceId}\n      useCases:\n        - MANAGED_SERVICES\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Customer-managed key for workspace storage\n\n### For GCP\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\n// Id of a google_kms_crypto_key\nconst cmekResourceId = config.requireObject\u003cany\u003e(\"cmekResourceId\");\nconst storage = new databricks.MwsCustomerManagedKeys(\"storage\", {\n    accountId: databricksAccountId,\n    gcpKeyInfo: {\n        kmsKeyId: cmekResourceId,\n    },\n    useCases: [\"STORAGE\"],\n});\n```\n```python\nimport 
pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\n# Id of a google_kms_crypto_key\ncmek_resource_id = config.require_object(\"cmekResourceId\")\nstorage = databricks.MwsCustomerManagedKeys(\"storage\",\n    account_id=databricks_account_id,\n    gcp_key_info={\n        \"kms_key_id\": cmek_resource_id,\n    },\n    use_cases=[\"STORAGE\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    // Id of a google_kms_crypto_key\n    var cmekResourceId = config.RequireObject\u003cdynamic\u003e(\"cmekResourceId\");\n    var storage = new Databricks.MwsCustomerManagedKeys(\"storage\", new()\n    {\n        AccountId = databricksAccountId,\n        GcpKeyInfo = new Databricks.Inputs.MwsCustomerManagedKeysGcpKeyInfoArgs\n        {\n            KmsKeyId = cmekResourceId,\n        },\n        UseCases = new[]\n        {\n            \"STORAGE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.gcp.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\t// Id of a google_kms_crypto_key\n\t\tcmekResourceId := cfg.RequireObject(\"cmekResourceId\")\n\t\t_, err := databricks.NewMwsCustomerManagedKeys(ctx, \"storage\", \u0026databricks.MwsCustomerManagedKeysArgs{\n\t\t\tAccountId: pulumi.Any(databricksAccountId),\n\t\t\tGcpKeyInfo: \u0026databricks.MwsCustomerManagedKeysGcpKeyInfoArgs{\n\t\t\t\tKmsKeyId: pulumi.Any(cmekResourceId),\n\t\t\t},\n\t\t\tUseCases: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"STORAGE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsCustomerManagedKeys;\nimport com.pulumi.databricks.MwsCustomerManagedKeysArgs;\nimport com.pulumi.databricks.inputs.MwsCustomerManagedKeysGcpKeyInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var cmekResourceId = config.get(\"cmekResourceId\");\n        var storage = new MwsCustomerManagedKeys(\"storage\", MwsCustomerManagedKeysArgs.builder()\n            .accountId(databricksAccountId)\n            .gcpKeyInfo(MwsCustomerManagedKeysGcpKeyInfoArgs.builder()\n                .kmsKeyId(cmekResourceId)\n       
         .build())\n            .useCases(\"STORAGE\")\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\n  cmekResourceId:\n    type: dynamic\nresources:\n  storage:\n    type: databricks:MwsCustomerManagedKeys\n    properties:\n      accountId: ${databricksAccountId}\n      gcpKeyInfo:\n        kmsKeyId: ${cmekResourceId}\n      useCases:\n        - STORAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003eto configure the cross-account role for creation of new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsLogDelivery \" pulumi-lang-dotnet=\" databricks.MwsLogDelivery \" pulumi-lang-go=\" MwsLogDelivery \" pulumi-lang-python=\" MwsLogDelivery \" pulumi-lang-yaml=\" databricks.MwsLogDelivery \" pulumi-lang-java=\" databricks.MwsLogDelivery \"\u003e databricks.MwsLogDelivery \u003c/span\u003eto configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003eto configure root bucket new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n"},"awsKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysAwsKeyInfo:MwsCustomerManagedKeysAwsKeyInfo","description":"This field is a block and is documented below. 
This conflicts with \u003cspan pulumi-lang-nodejs=\"`gcpKeyInfo`\" pulumi-lang-dotnet=\"`GcpKeyInfo`\" pulumi-lang-go=\"`gcpKeyInfo`\" pulumi-lang-python=\"`gcp_key_info`\" pulumi-lang-yaml=\"`gcpKeyInfo`\" pulumi-lang-java=\"`gcpKeyInfo`\"\u003e`gcp_key_info`\u003c/span\u003e\n"},"creationTime":{"type":"integer","description":"(Integer) Time in epoch milliseconds when the customer key was created.\n"},"customerManagedKeyId":{"type":"string","description":"(String) ID of the encryption key configuration object.\n"},"gcpKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysGcpKeyInfo:MwsCustomerManagedKeysGcpKeyInfo","description":"This field is a block and is documented below. This conflicts with \u003cspan pulumi-lang-nodejs=\"`awsKeyInfo`\" pulumi-lang-dotnet=\"`AwsKeyInfo`\" pulumi-lang-go=\"`awsKeyInfo`\" pulumi-lang-python=\"`aws_key_info`\" pulumi-lang-yaml=\"`awsKeyInfo`\" pulumi-lang-java=\"`awsKeyInfo`\"\u003e`aws_key_info`\u003c/span\u003e\n"},"useCases":{"type":"array","items":{"type":"string"},"description":"*(since v0.3.4)* List of use cases for which this key will be used. *If you've used the resource before, please add \u003cspan pulumi-lang-nodejs=\"`useCases \" pulumi-lang-dotnet=\"`UseCases \" pulumi-lang-go=\"`useCases \" pulumi-lang-python=\"`use_cases \" pulumi-lang-yaml=\"`useCases \" pulumi-lang-java=\"`useCases \"\u003e`use_cases \u003c/span\u003e= [\"MANAGED_SERVICES\"]` to keep the previous behaviour.* Possible values are:\n* `MANAGED_SERVICES` - for encryption of the workspace objects (notebooks, secrets) that are stored in the control plane\n* `STORAGE` - for encryption of the DBFS Storage \u0026 Cluster EBS Volumes\n"}},"required":["accountId","creationTime","customerManagedKeyId","useCases"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","willReplaceOnChanges":true},"awsKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysAwsKeyInfo:MwsCustomerManagedKeysAwsKeyInfo","description":"This field is a block and is documented below. This conflicts with \u003cspan pulumi-lang-nodejs=\"`gcpKeyInfo`\" pulumi-lang-dotnet=\"`GcpKeyInfo`\" pulumi-lang-go=\"`gcpKeyInfo`\" pulumi-lang-python=\"`gcp_key_info`\" pulumi-lang-yaml=\"`gcpKeyInfo`\" pulumi-lang-java=\"`gcpKeyInfo`\"\u003e`gcp_key_info`\u003c/span\u003e\n","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) Time in epoch milliseconds when the customer key was created.\n"},"customerManagedKeyId":{"type":"string","description":"(String) ID of the encryption key configuration object.\n"},"gcpKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysGcpKeyInfo:MwsCustomerManagedKeysGcpKeyInfo","description":"This field is a block and is documented below. This conflicts with \u003cspan pulumi-lang-nodejs=\"`awsKeyInfo`\" pulumi-lang-dotnet=\"`AwsKeyInfo`\" pulumi-lang-go=\"`awsKeyInfo`\" pulumi-lang-python=\"`aws_key_info`\" pulumi-lang-yaml=\"`awsKeyInfo`\" pulumi-lang-java=\"`awsKeyInfo`\"\u003e`aws_key_info`\u003c/span\u003e\n","willReplaceOnChanges":true},"useCases":{"type":"array","items":{"type":"string"},"description":"*(since v0.3.4)* List of use cases for which this key will be used. 
*If you've used the resource before, please add \u003cspan pulumi-lang-nodejs=\"`useCases \" pulumi-lang-dotnet=\"`UseCases \" pulumi-lang-go=\"`useCases \" pulumi-lang-python=\"`use_cases \" pulumi-lang-yaml=\"`useCases \" pulumi-lang-java=\"`useCases \"\u003e`use_cases \u003c/span\u003e= [\"MANAGED_SERVICES\"]` to keep the previous behaviour.* Possible values are:\n* `MANAGED_SERVICES` - for encryption of the workspace objects (notebooks, secrets) that are stored in the control plane\n* `STORAGE` - for encryption of the DBFS Storage \u0026 Cluster EBS Volumes\n","willReplaceOnChanges":true}},"requiredInputs":["accountId","useCases"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsCustomerManagedKeys resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","willReplaceOnChanges":true},"awsKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysAwsKeyInfo:MwsCustomerManagedKeysAwsKeyInfo","description":"This field is a block and is documented below. This conflicts with \u003cspan pulumi-lang-nodejs=\"`gcpKeyInfo`\" pulumi-lang-dotnet=\"`GcpKeyInfo`\" pulumi-lang-go=\"`gcpKeyInfo`\" pulumi-lang-python=\"`gcp_key_info`\" pulumi-lang-yaml=\"`gcpKeyInfo`\" pulumi-lang-java=\"`gcpKeyInfo`\"\u003e`gcp_key_info`\u003c/span\u003e\n","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) Time in epoch milliseconds when the customer key was created.\n"},"customerManagedKeyId":{"type":"string","description":"(String) ID of the encryption key configuration object.\n"},"gcpKeyInfo":{"$ref":"#/types/databricks:index/MwsCustomerManagedKeysGcpKeyInfo:MwsCustomerManagedKeysGcpKeyInfo","description":"This field is a block and is documented below. This conflicts with \u003cspan pulumi-lang-nodejs=\"`awsKeyInfo`\" pulumi-lang-dotnet=\"`AwsKeyInfo`\" pulumi-lang-go=\"`awsKeyInfo`\" pulumi-lang-python=\"`aws_key_info`\" pulumi-lang-yaml=\"`awsKeyInfo`\" pulumi-lang-java=\"`awsKeyInfo`\"\u003e`aws_key_info`\u003c/span\u003e\n","willReplaceOnChanges":true},"useCases":{"type":"array","items":{"type":"string"},"description":"*(since v0.3.4)* List of use cases for which this key will be used. *If you've used the resource before, please add \u003cspan pulumi-lang-nodejs=\"`useCases \" pulumi-lang-dotnet=\"`UseCases \" pulumi-lang-go=\"`useCases \" pulumi-lang-python=\"`use_cases \" pulumi-lang-yaml=\"`useCases \" pulumi-lang-java=\"`useCases \"\u003e`use_cases \u003c/span\u003e= [\"MANAGED_SERVICES\"]` to keep the previous behaviour.* Possible values are:\n* `MANAGED_SERVICES` - for encryption of the workspace objects (notebooks, secrets) that are stored in the control plane\n* `STORAGE` - for encryption of the DBFS Storage \u0026 Cluster EBS Volumes\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsLogDelivery:MwsLogDelivery":{"description":"This resource configures the delivery of the two supported log types from Databricks workspaces: [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n\n\u003e This resource can only be used with an account-level provider!\n\nYou cannot delete a log delivery configuration, but you can disable it when you no longer need it. 
This fact is important because there is a limit to the number of enabled log delivery configurations that you can create for an account. You can create a maximum of two enabled configurations that use the account level (no workspace filter) and two enabled configurations for every specific workspace (a workspaceId can occur in the workspace filter for two configurations). You can re-enable a disabled configuration, but the request fails if it violates the limits previously described.\n\n## Example Usage\n\nEnd-to-end example of usage and audit log delivery:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\nimport * as time from \"@pulumiverse/time\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst logdeliveryS3Bucket = new aws.index.S3Bucket(\"logdelivery\", {\n    bucket: `${prefix}-logdelivery`,\n    acl: \"private\",\n    forceDestroy: true,\n    tags: std.merge({\n        input: [\n            tags,\n            {\n                name: `${prefix}-logdelivery`,\n            },\n        ],\n    }).result,\n});\nconst logdeliveryS3BucketPublicAccessBlock = new aws.index.S3BucketPublicAccessBlock(\"logdelivery\", {\n    bucket: logdeliveryS3Bucket.id,\n    ignorePublicAcls: true,\n});\nconst logdelivery = databricks.getAwsAssumeRolePolicy({\n    externalId: databricksAccountId,\n    forLogDelivery: true,\n});\nconst logdeliveryVersioning = new aws.index.S3BucketVersioning(\"logdelivery_versioning\", {\n    bucket: logdeliveryS3Bucket.id,\n    versioningConfiguration: [{\n        status: \"Disabled\",\n    }],\n});\nconst logdeliveryIamRole = new aws.index.IamRole(\"logdelivery\", {\n    name: `${prefix}-logdelivery`,\n    description: `(${prefix}) UsageDelivery role`,\n    assumeRolePolicy: logdelivery.json,\n    tags: tags,\n});\nconst logdeliveryGetAwsBucketPolicy = databricks.getAwsBucketPolicy({\n    fullAccessRole: logdeliveryIamRole.arn,\n    bucket: logdeliveryS3Bucket.bucket,\n});\nconst logdeliveryS3BucketPolicy = new aws.index.S3BucketPolicy(\"logdelivery\", {\n    bucket: logdeliveryS3Bucket.id,\n    policy: logdeliveryGetAwsBucketPolicy.json,\n});\nconst wait = new time.Sleep(\"wait\", {createDuration: \"10s\"}, {\n    dependsOn: [logdeliveryIamRole],\n});\nconst logWriter = new databricks.MwsCredentials(\"log_writer\", {\n    accountId: databricksAccountId,\n    credentialsName: \"Usage Delivery\",\n    roleArn: logdeliveryIamRole.arn,\n}, {\n    dependsOn: [wait],\n});\nconst logBucket = new databricks.MwsStorageConfigurations(\"log_bucket\", {\n    accountId: databricksAccountId,\n    storageConfigurationName: \"Usage Logs\",\n    bucketName: logdeliveryS3Bucket.bucket,\n});\nconst usageLogs = new databricks.MwsLogDelivery(\"usage_logs\", {\n    accountId: databricksAccountId,\n    credentialsId: logWriter.credentialsId,\n    storageConfigurationId: logBucket.storageConfigurationId,\n    deliveryPathPrefix: \"billable-usage\",\n    configName: \"Usage Logs\",\n    logType: \"BILLABLE_USAGE\",\n    outputFormat: \"CSV\",\n});\nconst auditLogs = new databricks.MwsLogDelivery(\"audit_logs\", {\n    accountId: databricksAccountId,\n    credentialsId: logWriter.credentialsId,\n    storageConfigurationId: 
logBucket.storageConfigurationId,\n    deliveryPathPrefix: \"audit-logs\",\n    configName: \"Audit Logs\",\n    logType: \"AUDIT_LOGS\",\n    outputFormat: \"JSON\",\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\nimport pulumi_std as std\nimport pulumiverse_time as time\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\nlogdelivery_s3_bucket = aws.index.S3Bucket(\"logdelivery\",\n    bucket=f{prefix}-logdelivery,\n    acl=private,\n    force_destroy=True,\n    tags=std.merge(input=[\n        tags,\n        {\n            name: f{prefix}-logdelivery,\n        },\n    ]).result)\nlogdelivery_s3_bucket_public_access_block = aws.index.S3BucketPublicAccessBlock(\"logdelivery\",\n    bucket=logdelivery_s3_bucket.id,\n    ignore_public_acls=True)\nlogdelivery = databricks.get_aws_assume_role_policy(external_id=databricks_account_id,\n    for_log_delivery=True)\nlogdelivery_versioning = aws.index.S3BucketVersioning(\"logdelivery_versioning\",\n    bucket=logdelivery_s3_bucket.id,\n    versioning_configuration=[{\n        status: Disabled,\n    }])\nlogdelivery_iam_role = aws.index.IamRole(\"logdelivery\",\n    name=f{prefix}-logdelivery,\n    description=f({prefix}) UsageDelivery role,\n    assume_role_policy=logdelivery.json,\n    tags=tags)\nlogdelivery_get_aws_bucket_policy = databricks.get_aws_bucket_policy(full_access_role=logdelivery_iam_role[\"arn\"],\n    bucket=logdelivery_s3_bucket[\"bucket\"])\nlogdelivery_s3_bucket_policy = aws.index.S3BucketPolicy(\"logdelivery\",\n    bucket=logdelivery_s3_bucket.id,\n    policy=logdelivery_get_aws_bucket_policy.json)\nwait = time.Sleep(\"wait\", create_duration=\"10s\",\nopts = pulumi.ResourceOptions(depends_on=[logdelivery_iam_role]))\nlog_writer = databricks.MwsCredentials(\"log_writer\",\n    account_id=databricks_account_id,\n    credentials_name=\"Usage Delivery\",\n    role_arn=logdelivery_iam_role[\"arn\"],\n    opts = pulumi.ResourceOptions(depends_on=[wait]))\nlog_bucket = databricks.MwsStorageConfigurations(\"log_bucket\",\n    account_id=databricks_account_id,\n    storage_configuration_name=\"Usage Logs\",\n    bucket_name=logdelivery_s3_bucket[\"bucket\"])\nusage_logs = databricks.MwsLogDelivery(\"usage_logs\",\n    account_id=databricks_account_id,\n    credentials_id=log_writer.credentials_id,\n    storage_configuration_id=log_bucket.storage_configuration_id,\n    delivery_path_prefix=\"billable-usage\",\n    config_name=\"Usage Logs\",\n    log_type=\"BILLABLE_USAGE\",\n    output_format=\"CSV\")\naudit_logs = databricks.MwsLogDelivery(\"audit_logs\",\n    account_id=databricks_account_id,\n    credentials_id=log_writer.credentials_id,\n    storage_configuration_id=log_bucket.storage_configuration_id,\n    delivery_path_prefix=\"audit-logs\",\n    config_name=\"Audit Logs\",\n    log_type=\"AUDIT_LOGS\",\n    output_format=\"JSON\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\nusing Time = Pulumiverse.Time;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var 
logdeliveryS3Bucket = new Aws.Index.S3Bucket(\"logdelivery\", new()\n    {\n        Bucket = $\"{prefix}-logdelivery\",\n        Acl = \"private\",\n        ForceDestroy = true,\n        Tags = Std.Merge.Invoke(new()\n        {\n            Input = new[]\n            {\n                tags,\n                \n                {\n                    { \"name\", $\"{prefix}-logdelivery\" },\n                },\n            },\n        }).Result,\n    });\n\n    var logdeliveryS3BucketPublicAccessBlock = new Aws.Index.S3BucketPublicAccessBlock(\"logdelivery\", new()\n    {\n        Bucket = logdeliveryS3Bucket.Id,\n        IgnorePublicAcls = true,\n    });\n\n    var logdelivery = Databricks.GetAwsAssumeRolePolicy.Invoke(new()\n    {\n        ExternalId = databricksAccountId,\n        ForLogDelivery = true,\n    });\n\n    var logdeliveryVersioning = new Aws.Index.S3BucketVersioning(\"logdelivery_versioning\", new()\n    {\n        Bucket = logdeliveryS3Bucket.Id,\n        VersioningConfiguration = new[]\n        {\n            \n            {\n                { \"status\", \"Disabled\" },\n            },\n        },\n    });\n\n    var logdeliveryIamRole = new Aws.Index.IamRole(\"logdelivery\", new()\n    {\n        Name = $\"{prefix}-logdelivery\",\n        Description = $\"({prefix}) UsageDelivery role\",\n        AssumeRolePolicy = logdelivery.Apply(getAwsAssumeRolePolicyResult =\u003e getAwsAssumeRolePolicyResult.Json),\n        Tags = tags,\n    });\n\n    var logdeliveryGetAwsBucketPolicy = Databricks.GetAwsBucketPolicy.Invoke(new()\n    {\n        FullAccessRole = logdeliveryIamRole.Arn,\n        Bucket = logdeliveryS3Bucket.Bucket,\n    });\n\n    var logdeliveryS3BucketPolicy = new Aws.Index.S3BucketPolicy(\"logdelivery\", new()\n    {\n        Bucket = logdeliveryS3Bucket.Id,\n        Policy = logdeliveryGetAwsBucketPolicy.Apply(getAwsBucketPolicyResult =\u003e getAwsBucketPolicyResult.Json),\n    });\n\n    var wait = new Time.Sleep(\"wait\", new()\n    {\n        CreateDuration = \"10s\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            logdeliveryIamRole,\n        },\n    });\n\n    var logWriter = new Databricks.MwsCredentials(\"log_writer\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsName = \"Usage Delivery\",\n        RoleArn = logdeliveryIamRole.Arn,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            wait,\n        },\n    });\n\n    var logBucket = new Databricks.MwsStorageConfigurations(\"log_bucket\", new()\n    {\n        AccountId = databricksAccountId,\n        StorageConfigurationName = \"Usage Logs\",\n        BucketName = logdeliveryS3Bucket.Bucket,\n    });\n\n    var usageLogs = new Databricks.MwsLogDelivery(\"usage_logs\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsId = logWriter.CredentialsId,\n        StorageConfigurationId = logBucket.StorageConfigurationId,\n        DeliveryPathPrefix = \"billable-usage\",\n        ConfigName = \"Usage Logs\",\n        LogType = \"BILLABLE_USAGE\",\n        OutputFormat = \"CSV\",\n    });\n\n    var auditLogs = new Databricks.MwsLogDelivery(\"audit_logs\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsId = logWriter.CredentialsId,\n        StorageConfigurationId = logBucket.StorageConfigurationId,\n        DeliveryPathPrefix = \"audit-logs\",\n        ConfigName = \"Audit Logs\",\n        LogType = \"AUDIT_LOGS\",\n        OutputFormat = \"JSON\",\n    
});\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n\t\"github.com/pulumiverse/pulumi-time/sdk/go/time\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tlogdeliveryS3Bucket, err := aws.NewS3Bucket(ctx, \"logdelivery\", \u0026aws.S3BucketArgs{\n\t\t\tBucket:       fmt.Sprintf(\"%v-logdelivery\", prefix),\n\t\t\tAcl:          \"private\",\n\t\t\tForceDestroy: true,\n\t\t\tTags: std.Merge(ctx, \u0026std.MergeArgs{\n\t\t\t\tInput: []interface{}{\n\t\t\t\t\ttags,\n\t\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\t\"name\": fmt.Sprintf(\"%v-logdelivery\", prefix),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t}, nil).Result,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketPublicAccessBlock(ctx, \"logdelivery\", \u0026aws.S3BucketPublicAccessBlockArgs{\n\t\t\tBucket:           logdeliveryS3Bucket.Id,\n\t\t\tIgnorePublicAcls: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlogdelivery, err := databricks.GetAwsAssumeRolePolicy(ctx, \u0026databricks.GetAwsAssumeRolePolicyArgs{\n\t\t\tExternalId:     databricksAccountId,\n\t\t\tForLogDelivery: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketVersioning(ctx, \"logdelivery_versioning\", \u0026aws.S3BucketVersioningArgs{\n\t\t\tBucket: logdeliveryS3Bucket.Id,\n\t\t\tVersioningConfiguration: []map[string]interface{}{\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"status\": \"Disabled\",\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlogdeliveryIamRole, err := aws.NewIamRole(ctx, \"logdelivery\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-logdelivery\", prefix),\n\t\t\tDescription:      fmt.Sprintf(\"(%v) UsageDelivery role\", prefix),\n\t\t\tAssumeRolePolicy: logdelivery.Json,\n\t\t\tTags:             tags,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlogdeliveryGetAwsBucketPolicy, err := databricks.GetAwsBucketPolicy(ctx, \u0026databricks.GetAwsBucketPolicyArgs{\n\t\t\tFullAccessRole: pulumi.StringRef(logdeliveryIamRole.Arn),\n\t\t\tBucket:         logdeliveryS3Bucket.Bucket,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketPolicy(ctx, \"logdelivery\", \u0026aws.S3BucketPolicyArgs{\n\t\t\tBucket: logdeliveryS3Bucket.Id,\n\t\t\tPolicy: logdeliveryGetAwsBucketPolicy.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\twait, err := time.NewSleep(ctx, \"wait\", \u0026time.SleepArgs{\n\t\t\tCreateDuration: pulumi.String(\"10s\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tlogdeliveryIamRole,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlogWriter, err := databricks.NewMwsCredentials(ctx, \"log_writer\", \u0026databricks.MwsCredentialsArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tCredentialsName: pulumi.String(\"Usage Delivery\"),\n\t\t\tRoleArn:         logdeliveryIamRole.Arn,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\twait,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\tlogBucket, err := databricks.NewMwsStorageConfigurations(ctx, \"log_bucket\", \u0026databricks.MwsStorageConfigurationsArgs{\n\t\t\tAccountId:                pulumi.Any(databricksAccountId),\n\t\t\tStorageConfigurationName: pulumi.String(\"Usage Logs\"),\n\t\t\tBucketName:               logdeliveryS3Bucket.Bucket,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsLogDelivery(ctx, \"usage_logs\", \u0026databricks.MwsLogDeliveryArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tCredentialsId:          logWriter.CredentialsId,\n\t\t\tStorageConfigurationId: logBucket.StorageConfigurationId,\n\t\t\tDeliveryPathPrefix:     pulumi.String(\"billable-usage\"),\n\t\t\tConfigName:             pulumi.String(\"Usage Logs\"),\n\t\t\tLogType:                pulumi.String(\"BILLABLE_USAGE\"),\n\t\t\tOutputFormat:           pulumi.String(\"CSV\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsLogDelivery(ctx, \"audit_logs\", \u0026databricks.MwsLogDeliveryArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tCredentialsId:          logWriter.CredentialsId,\n\t\t\tStorageConfigurationId: logBucket.StorageConfigurationId,\n\t\t\tDeliveryPathPrefix:     pulumi.String(\"audit-logs\"),\n\t\t\tConfigName:             pulumi.String(\"Audit Logs\"),\n\t\t\tLogType:                pulumi.String(\"AUDIT_LOGS\"),\n\t\t\tOutputFormat:           pulumi.String(\"JSON\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.S3Bucket;\nimport com.pulumi.aws.S3BucketArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.MergeArgs;\nimport com.pulumi.aws.S3BucketPublicAccessBlock;\nimport com.pulumi.aws.S3BucketPublicAccessBlockArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsAssumeRolePolicyArgs;\nimport com.pulumi.aws.S3BucketVersioning;\nimport com.pulumi.aws.S3BucketVersioningArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.databricks.inputs.GetAwsBucketPolicyArgs;\nimport com.pulumi.aws.S3BucketPolicy;\nimport com.pulumi.aws.S3BucketPolicyArgs;\nimport com.pulumiverse.time.Sleep;\nimport com.pulumiverse.time.SleepArgs;\nimport com.pulumi.databricks.MwsCredentials;\nimport com.pulumi.databricks.MwsCredentialsArgs;\nimport com.pulumi.databricks.MwsStorageConfigurations;\nimport com.pulumi.databricks.MwsStorageConfigurationsArgs;\nimport com.pulumi.databricks.MwsLogDelivery;\nimport com.pulumi.databricks.MwsLogDeliveryArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        var logdeliveryS3Bucket = new S3Bucket(\"logdeliveryS3Bucket\", S3BucketArgs.builder()\n            .bucket(String.format(\"%s-logdelivery\", prefix))\n            .acl(\"private\")\n            .forceDestroy(true)\n            .tags(StdFunctions.merge(MergeArgs.builder()\n                
.input(                \n                    tags,\n                    Map.of(\"name\", String.format(\"%s-logdelivery\", prefix)))\n                .build()).result())\n            .build());\n\n        var logdeliveryS3BucketPublicAccessBlock = new S3BucketPublicAccessBlock(\"logdeliveryS3BucketPublicAccessBlock\", S3BucketPublicAccessBlockArgs.builder()\n            .bucket(logdeliveryS3Bucket.id())\n            .ignorePublicAcls(true)\n            .build());\n\n        final var logdelivery = DatabricksFunctions.getAwsAssumeRolePolicy(GetAwsAssumeRolePolicyArgs.builder()\n            .externalId(databricksAccountId)\n            .forLogDelivery(true)\n            .build());\n\n        var logdeliveryVersioning = new S3BucketVersioning(\"logdeliveryVersioning\", S3BucketVersioningArgs.builder()\n            .bucket(logdeliveryS3Bucket.id())\n            .versioningConfiguration(List.of(Map.of(\"status\", \"Disabled\")))\n            .build());\n\n        var logdeliveryIamRole = new IamRole(\"logdeliveryIamRole\", IamRoleArgs.builder()\n            .name(String.format(\"%s-logdelivery\", prefix))\n            .description(String.format(\"(%s) UsageDelivery role\", prefix))\n            .assumeRolePolicy(logdelivery.json())\n            .tags(tags)\n            .build());\n\n        final var logdeliveryGetAwsBucketPolicy = DatabricksFunctions.getAwsBucketPolicy(GetAwsBucketPolicyArgs.builder()\n            .fullAccessRole(logdeliveryIamRole.arn())\n            .bucket(logdeliveryS3Bucket.bucket())\n            .build());\n\n        var logdeliveryS3BucketPolicy = new S3BucketPolicy(\"logdeliveryS3BucketPolicy\", S3BucketPolicyArgs.builder()\n            .bucket(logdeliveryS3Bucket.id())\n            .policy(logdeliveryGetAwsBucketPolicy.json())\n            .build());\n\n        var wait = new Sleep(\"wait\", SleepArgs.builder()\n            .createDuration(\"10s\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(logdeliveryIamRole)\n                .build());\n\n        var logWriter = new MwsCredentials(\"logWriter\", MwsCredentialsArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsName(\"Usage Delivery\")\n            .roleArn(logdeliveryIamRole.arn())\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(wait)\n                .build());\n\n        var logBucket = new MwsStorageConfigurations(\"logBucket\", MwsStorageConfigurationsArgs.builder()\n            .accountId(databricksAccountId)\n            .storageConfigurationName(\"Usage Logs\")\n            .bucketName(logdeliveryS3Bucket.bucket())\n            .build());\n\n        var usageLogs = new MwsLogDelivery(\"usageLogs\", MwsLogDeliveryArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsId(logWriter.credentialsId())\n            .storageConfigurationId(logBucket.storageConfigurationId())\n            .deliveryPathPrefix(\"billable-usage\")\n            .configName(\"Usage Logs\")\n            .logType(\"BILLABLE_USAGE\")\n            .outputFormat(\"CSV\")\n            .build());\n\n        var auditLogs = new MwsLogDelivery(\"auditLogs\", MwsLogDeliveryArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsId(logWriter.credentialsId())\n            .storageConfigurationId(logBucket.storageConfigurationId())\n            .deliveryPathPrefix(\"audit-logs\")\n            .configName(\"Audit Logs\")\n            .logType(\"AUDIT_LOGS\")\n            
.outputFormat(\"JSON\")\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  logdeliveryS3Bucket:\n    type: aws:S3Bucket\n    name: logdelivery\n    properties:\n      bucket: ${prefix}-logdelivery\n      acl: private\n      forceDestroy: true\n      tags:\n        fn::invoke:\n          function: std:merge\n          arguments:\n            input:\n              - ${tags}\n              - name: ${prefix}-logdelivery\n          return: result\n  logdeliveryS3BucketPublicAccessBlock:\n    type: aws:S3BucketPublicAccessBlock\n    name: logdelivery\n    properties:\n      bucket: ${logdeliveryS3Bucket.id}\n      ignorePublicAcls: true\n  logdeliveryVersioning:\n    type: aws:S3BucketVersioning\n    name: logdelivery_versioning\n    properties:\n      bucket: ${logdeliveryS3Bucket.id}\n      versioningConfiguration:\n        - status: Disabled\n  logdeliveryIamRole:\n    type: aws:IamRole\n    name: logdelivery\n    properties:\n      name: ${prefix}-logdelivery\n      description: (${prefix}) UsageDelivery role\n      assumeRolePolicy: ${logdelivery.json}\n      tags: ${tags}\n  logdeliveryS3BucketPolicy:\n    type: aws:S3BucketPolicy\n    name: logdelivery\n    properties:\n      bucket: ${logdeliveryS3Bucket.id}\n      policy: ${logdeliveryGetAwsBucketPolicy.json}\n  wait:\n    type: time:Sleep\n    properties:\n      createDuration: 10s\n    options:\n      dependsOn:\n        - ${logdeliveryIamRole}\n  logWriter:\n    type: databricks:MwsCredentials\n    name: log_writer\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsName: Usage Delivery\n      roleArn: ${logdeliveryIamRole.arn}\n    options:\n      dependsOn:\n        - ${wait}\n  logBucket:\n    type: databricks:MwsStorageConfigurations\n    name: log_bucket\n    properties:\n      accountId: ${databricksAccountId}\n      storageConfigurationName: Usage Logs\n      bucketName: ${logdeliveryS3Bucket.bucket}\n  usageLogs:\n    type: databricks:MwsLogDelivery\n    name: usage_logs\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsId: ${logWriter.credentialsId}\n      storageConfigurationId: ${logBucket.storageConfigurationId}\n      deliveryPathPrefix: billable-usage\n      configName: Usage Logs\n      logType: BILLABLE_USAGE\n      outputFormat: CSV\n  auditLogs:\n    type: databricks:MwsLogDelivery\n    name: audit_logs\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsId: ${logWriter.credentialsId}\n      storageConfigurationId: ${logBucket.storageConfigurationId}\n      deliveryPathPrefix: audit-logs\n      configName: Audit Logs\n      logType: AUDIT_LOGS\n      outputFormat: JSON\nvariables:\n  logdelivery:\n    fn::invoke:\n      function: databricks:getAwsAssumeRolePolicy\n      arguments:\n        externalId: ${databricksAccountId}\n        forLogDelivery: true\n  logdeliveryGetAwsBucketPolicy:\n    fn::invoke:\n      function: databricks:getAwsBucketPolicy\n      arguments:\n        fullAccessRole: ${logdeliveryIamRole.arn}\n        bucket: ${logdeliveryS3Bucket.bucket}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Billable Usage\n\nCSV files are delivered to `\u003cdelivery_path_prefix\u003e/billable-usage/csv/` and are named `workspaceId=\u003cworkspace-id\u003e-usageMonth=\u003cmonth\u003e.csv`, which are delivered daily by overwriting the month's CSV file for each workspace. 
The format of the CSV file, as well as some usage examples, can be found [here](https://docs.databricks.com/administration-guide/account-settings/usage.html#download-usage-as-a-csv-file).\n\nA common processing scenario is to apply [cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html), which can be enforced by setting\u003cspan pulumi-lang-nodejs=\" customTags \" pulumi-lang-dotnet=\" CustomTags \" pulumi-lang-go=\" customTags \" pulumi-lang-python=\" custom_tags \" pulumi-lang-yaml=\" customTags \" pulumi-lang-java=\" customTags \"\u003e custom_tags \u003c/span\u003eon a cluster or through a cluster policy. The report contains a `clusterId` field, which can be joined with data from AWS [cost and usage reports](https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html) via the `user:ClusterId` tag.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst usageLogs = new databricks.MwsLogDelivery(\"usage_logs\", {\n    accountId: databricksAccountId,\n    credentialsId: logWriter.credentialsId,\n    storageConfigurationId: logBucket.storageConfigurationId,\n    deliveryPathPrefix: \"billable-usage\",\n    configName: \"Usage Logs\",\n    logType: \"BILLABLE_USAGE\",\n    outputFormat: \"CSV\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nusage_logs = databricks.MwsLogDelivery(\"usage_logs\",\n    account_id=databricks_account_id,\n    credentials_id=log_writer[\"credentialsId\"],\n    storage_configuration_id=log_bucket[\"storageConfigurationId\"],\n    delivery_path_prefix=\"billable-usage\",\n    config_name=\"Usage Logs\",\n    log_type=\"BILLABLE_USAGE\",\n    output_format=\"CSV\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var usageLogs = new Databricks.MwsLogDelivery(\"usage_logs\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsId = logWriter.CredentialsId,\n        StorageConfigurationId = logBucket.StorageConfigurationId,\n        DeliveryPathPrefix = \"billable-usage\",\n        ConfigName = \"Usage Logs\",\n        LogType = \"BILLABLE_USAGE\",\n        OutputFormat = \"CSV\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsLogDelivery(ctx, \"usage_logs\", \u0026databricks.MwsLogDeliveryArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tCredentialsId:          pulumi.Any(logWriter.CredentialsId),\n\t\t\tStorageConfigurationId: pulumi.Any(logBucket.StorageConfigurationId),\n\t\t\tDeliveryPathPrefix:     pulumi.String(\"billable-usage\"),\n\t\t\tConfigName:             pulumi.String(\"Usage Logs\"),\n\t\t\tLogType:                pulumi.String(\"BILLABLE_USAGE\"),\n\t\t\tOutputFormat:           pulumi.String(\"CSV\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsLogDelivery;\nimport com.pulumi.databricks.MwsLogDeliveryArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var usageLogs = new MwsLogDelivery(\"usageLogs\", MwsLogDeliveryArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsId(logWriter.credentialsId())\n            .storageConfigurationId(logBucket.storageConfigurationId())\n            .deliveryPathPrefix(\"billable-usage\")\n            .configName(\"Usage Logs\")\n            .logType(\"BILLABLE_USAGE\")\n            .outputFormat(\"CSV\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  usageLogs:\n    type: databricks:MwsLogDelivery\n    name: usage_logs\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsId: ${logWriter.credentialsId}\n      storageConfigurationId: ${logBucket.storageConfigurationId}\n      deliveryPathPrefix: billable-usage\n      configName: Usage Logs\n      logType: BILLABLE_USAGE\n      outputFormat: CSV\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Audit Logs\n\nJSON files with [static schema](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html#audit-log-schema) are delivered to `\u003cdelivery_path_prefix\u003e/workspaceId=\u003cworkspaceId\u003e/date=\u003cyyyy-mm-dd\u003e/auditlogs_\u003cinternal-id\u003e.json`. Logs are available within 15 minutes of activation for audit logs. New JSON files are delivered every few minutes, potentially overwriting existing files for each workspace. Sometimes data may arrive later than 15 minutes. Databricks can overwrite the delivered log files in your bucket at any time. If a file is overwritten, the existing content remains, but there may be additional lines for more auditable events. 
Overwriting ensures exactly-once semantics without requiring read or delete access to your account.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auditLogs = new databricks.MwsLogDelivery(\"audit_logs\", {\n    accountId: databricksAccountId,\n    credentialsId: logWriter.credentialsId,\n    storageConfigurationId: logBucket.storageConfigurationId,\n    deliveryPathPrefix: \"audit-logs\",\n    configName: \"Audit Logs\",\n    logType: \"AUDIT_LOGS\",\n    outputFormat: \"JSON\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naudit_logs = databricks.MwsLogDelivery(\"audit_logs\",\n    account_id=databricks_account_id,\n    credentials_id=log_writer[\"credentialsId\"],\n    storage_configuration_id=log_bucket[\"storageConfigurationId\"],\n    delivery_path_prefix=\"audit-logs\",\n    config_name=\"Audit Logs\",\n    log_type=\"AUDIT_LOGS\",\n    output_format=\"JSON\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auditLogs = new Databricks.MwsLogDelivery(\"audit_logs\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsId = logWriter.CredentialsId,\n        StorageConfigurationId = logBucket.StorageConfigurationId,\n        DeliveryPathPrefix = \"audit-logs\",\n        ConfigName = \"Audit Logs\",\n        LogType = \"AUDIT_LOGS\",\n        OutputFormat = \"JSON\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsLogDelivery(ctx, \"audit_logs\", \u0026databricks.MwsLogDeliveryArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tCredentialsId:          pulumi.Any(logWriter.CredentialsId),\n\t\t\tStorageConfigurationId: pulumi.Any(logBucket.StorageConfigurationId),\n\t\t\tDeliveryPathPrefix:     pulumi.String(\"audit-logs\"),\n\t\t\tConfigName:             pulumi.String(\"Audit Logs\"),\n\t\t\tLogType:                pulumi.String(\"AUDIT_LOGS\"),\n\t\t\tOutputFormat:           pulumi.String(\"JSON\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsLogDelivery;\nimport com.pulumi.databricks.MwsLogDeliveryArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auditLogs = new MwsLogDelivery(\"auditLogs\", MwsLogDeliveryArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsId(logWriter.credentialsId())\n            .storageConfigurationId(logBucket.storageConfigurationId())\n            .deliveryPathPrefix(\"audit-logs\")\n            .configName(\"Audit Logs\")\n            .logType(\"AUDIT_LOGS\")\n            .outputFormat(\"JSON\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auditLogs:\n    type: databricks:MwsLogDelivery\n    name: 
audit_logs\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsId: ${logWriter.credentialsId}\n      storageConfigurationId: ${logBucket.storageConfigurationId}\n      deliveryPathPrefix: audit-logs\n      configName: Audit Logs\n      logType: AUDIT_LOGS\n      outputFormat: JSON\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003eto configure the cross-account role for creation of new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003eto configure KMS keys for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003eto configure root bucket new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n"},"configId":{"type":"string","description":"Databricks log delivery configuration ID.\n"},"configName":{"type":"string","description":"The optional human-readable name of the log delivery configuration. 
Defaults to empty.\n"},"credentialsId":{"type":"string","description":"The ID for a Databricks credential configuration that represents the AWS IAM role with policy and trust relationship as described in the main billable usage documentation page.\n"},"deliveryPathPrefix":{"type":"string","description":"Defaults to empty, which means that logs are delivered to the root of the bucket. The value must be a valid S3 object key. It must not start or end with a slash character.\n"},"deliveryStartTime":{"type":"string","description":"The optional start month and year for delivery, specified in YYYY-MM format. Defaults to current year and month. Usage is not available before 2019-03.\n"},"logType":{"type":"string","description":"The type of log delivery. `BILLABLE_USAGE` and `AUDIT_LOGS` are supported.\n"},"outputFormat":{"type":"string","description":"The file type of log delivery. Currently `CSV` (for `BILLABLE_USAGE`) and `JSON` (for `AUDIT_LOGS`) are supported.\n"},"status":{"type":"string","description":"Status of log delivery configuration. Set to ENABLED or DISABLED. Defaults to ENABLED. This is the only field you can update.\n"},"storageConfigurationId":{"type":"string","description":"The ID for a Databricks storage configuration that represents the S3 bucket with bucket policy as described in the main billable usage documentation page.\n"},"workspaceIdsFilters":{"type":"array","items":{"type":"integer"},"description":"By default, this log configuration applies to all workspaces associated with your account ID. If your account is on the multitenant version of the platform or on a select custom plan that allows multiple workspaces per account, you may have multiple workspaces associated with your account ID. You can optionally set the field as mentioned earlier to an array of workspace IDs. If you plan to use different log delivery configurations for several workspaces, set this explicitly rather than leaving it blank. If you leave this blank and your account ID gets additional workspaces in the future, this configuration will also apply to the new workspaces.\n"}},"required":["accountId","configId","credentialsId","deliveryStartTime","logType","outputFormat","status","storageConfigurationId"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","willReplaceOnChanges":true},"configId":{"type":"string","description":"Databricks log delivery configuration ID.\n","willReplaceOnChanges":true},"configName":{"type":"string","description":"The optional human-readable name of the log delivery configuration. Defaults to empty.\n","willReplaceOnChanges":true},"credentialsId":{"type":"string","description":"The ID for a Databricks credential configuration that represents the AWS IAM role with policy and trust relationship as described in the main billable usage documentation page.\n","willReplaceOnChanges":true},"deliveryPathPrefix":{"type":"string","description":"Defaults to empty, which means that logs are delivered to the root of the bucket. The value must be a valid S3 object key. It must not start or end with a slash character.\n","willReplaceOnChanges":true},"deliveryStartTime":{"type":"string","description":"The optional start month and year for delivery, specified in YYYY-MM format. Defaults to current year and month. Usage is not available before 2019-03.\n","willReplaceOnChanges":true},"logType":{"type":"string","description":"The type of log delivery. 
`BILLABLE_USAGE` and `AUDIT_LOGS` are supported.\n","willReplaceOnChanges":true},"outputFormat":{"type":"string","description":"The file type of log delivery. Currently `CSV` (for `BILLABLE_USAGE`) and `JSON` (for `AUDIT_LOGS`) are supported.\n","willReplaceOnChanges":true},"status":{"type":"string","description":"Status of log delivery configuration. Set to ENABLED or DISABLED. Defaults to ENABLED. This is the only field you can update.\n"},"storageConfigurationId":{"type":"string","description":"The ID for a Databricks storage configuration that represents the S3 bucket with bucket policy as described in the main billable usage documentation page.\n","willReplaceOnChanges":true},"workspaceIdsFilters":{"type":"array","items":{"type":"integer"},"description":"By default, this log configuration applies to all workspaces associated with your account ID. If your account is on the multitenant version of the platform or on a select custom plan that allows multiple workspaces per account, you may have multiple workspaces associated with your account ID. You can optionally set the field as mentioned earlier to an array of workspace IDs. If you plan to use different log delivery configurations for several workspaces, set this explicitly rather than leaving it blank. If you leave this blank and your account ID gets additional workspaces in the future, this configuration will also apply to the new workspaces.\n","willReplaceOnChanges":true}},"requiredInputs":["accountId","credentialsId","logType","outputFormat","storageConfigurationId"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsLogDelivery resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","willReplaceOnChanges":true},"configId":{"type":"string","description":"Databricks log delivery configuration ID.\n","willReplaceOnChanges":true},"configName":{"type":"string","description":"The optional human-readable name of the log delivery configuration. Defaults to empty.\n","willReplaceOnChanges":true},"credentialsId":{"type":"string","description":"The ID for a Databricks credential configuration that represents the AWS IAM role with policy and trust relationship as described in the main billable usage documentation page.\n","willReplaceOnChanges":true},"deliveryPathPrefix":{"type":"string","description":"Defaults to empty, which means that logs are delivered to the root of the bucket. The value must be a valid S3 object key. It must not start or end with a slash character.\n","willReplaceOnChanges":true},"deliveryStartTime":{"type":"string","description":"The optional start month and year for delivery, specified in YYYY-MM format. Defaults to current year and month. Usage is not available before 2019-03.\n","willReplaceOnChanges":true},"logType":{"type":"string","description":"The type of log delivery. `BILLABLE_USAGE` and `AUDIT_LOGS` are supported.\n","willReplaceOnChanges":true},"outputFormat":{"type":"string","description":"The file type of log delivery. Currently `CSV` (for `BILLABLE_USAGE`) and `JSON` (for `AUDIT_LOGS`) are supported.\n","willReplaceOnChanges":true},"status":{"type":"string","description":"Status of log delivery configuration. Set to ENABLED or DISABLED. Defaults to ENABLED. 
This is the only field you can update.\n"},"storageConfigurationId":{"type":"string","description":"The ID for a Databricks storage configuration that represents the S3 bucket with bucket policy as described in the main billable usage documentation page.\n","willReplaceOnChanges":true},"workspaceIdsFilters":{"type":"array","items":{"type":"integer"},"description":"By default, this log configuration applies to all workspaces associated with your account ID. If your account is on the multitenant version of the platform or on a select custom plan that allows multiple workspaces per account, you may have multiple workspaces associated with your account ID. You can optionally set the field as mentioned earlier to an array of workspace IDs. If you plan to use different log delivery configurations for several workspaces, set this explicitly rather than leaving it blank. If you leave this blank and your account ID gets additional workspaces in the future, this configuration will also apply to the new workspaces.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsNccBinding:MwsNccBinding":{"description":"Allows you to attach a Network Connectivity Config object to a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages serverless network connectivity configs](https://learn.microsoft.com/en-us/azure/databricks/sql/admin/serverless-firewall).\n\n\u003e This resource can only be used with an account-level provider!\n\n\u003e This feature is available for AWS \u0026 Azure only, and is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html) in AWS.\n\nThe NCC and workspace must be in the same region.\n\n\u003e A workspace can only be bound to a single NCC. Binding a different NCC to the same workspace will overwrite the previous one. 
If you need multiple private endpoint rules, add them to a single NCC using \u003cspan pulumi-lang-nodejs=\"`databricks.MwsNccPrivateEndpointRule`\" pulumi-lang-dotnet=\"`databricks.MwsNccPrivateEndpointRule`\" pulumi-lang-go=\"`MwsNccPrivateEndpointRule`\" pulumi-lang-python=\"`MwsNccPrivateEndpointRule`\" pulumi-lang-yaml=\"`databricks.MwsNccPrivateEndpointRule`\" pulumi-lang-java=\"`databricks.MwsNccPrivateEndpointRule`\"\u003e`databricks.MwsNccPrivateEndpointRule`\u003c/span\u003e.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\nconst region = config.requireObject\u003cany\u003e(\"region\");\nconst prefix = config.requireObject\u003cany\u003e(\"prefix\");\nconst ncc = new databricks.MwsNetworkConnectivityConfig(\"ncc\", {\n    name: `ncc-for-${prefix}`,\n    region: region,\n});\nconst nccBinding = new databricks.MwsNccBinding(\"ncc_binding\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    workspaceId: databricksWorkspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\nregion = config.require_object(\"region\")\nprefix = config.require_object(\"prefix\")\nncc = databricks.MwsNetworkConnectivityConfig(\"ncc\",\n    name=f\"ncc-for-{prefix}\",\n    region=region)\nncc_binding = databricks.MwsNccBinding(\"ncc_binding\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    workspace_id=databricks_workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    var region = config.RequireObject\u003cdynamic\u003e(\"region\");\n    var prefix = config.RequireObject\u003cdynamic\u003e(\"prefix\");\n    var ncc = new Databricks.MwsNetworkConnectivityConfig(\"ncc\", new()\n    {\n        Name = $\"ncc-for-{prefix}\",\n        Region = region,\n    });\n\n    var nccBinding = new Databricks.MwsNccBinding(\"ncc_binding\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        WorkspaceId = databricksWorkspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\tregion := cfg.RequireObject(\"region\")\n\t\tprefix := cfg.RequireObject(\"prefix\")\n\t\tncc, err := databricks.NewMwsNetworkConnectivityConfig(ctx, \"ncc\", \u0026databricks.MwsNetworkConnectivityConfigArgs{\n\t\t\tName:   pulumi.Sprintf(\"ncc-for-%v\", prefix),\n\t\t\tRegion: pulumi.Any(region),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccBinding(ctx, \"ncc_binding\", \u0026databricks.MwsNccBindingArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tWorkspaceId:                 pulumi.Any(databricksWorkspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfig;\nimport 
com.pulumi.databricks.MwsNetworkConnectivityConfigArgs;\nimport com.pulumi.databricks.MwsNccBinding;\nimport com.pulumi.databricks.MwsNccBindingArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var region = config.get(\"region\");\n        final var prefix = config.get(\"prefix\");\n        var ncc = new MwsNetworkConnectivityConfig(\"ncc\", MwsNetworkConnectivityConfigArgs.builder()\n            .name(String.format(\"ncc-for-%s\", prefix))\n            .region(region)\n            .build());\n\n        var nccBinding = new MwsNccBinding(\"nccBinding\", MwsNccBindingArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .workspaceId(databricksWorkspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  region:\n    type: dynamic\n  prefix:\n    type: dynamic\nresources:\n  ncc:\n    type: databricks:MwsNetworkConnectivityConfig\n    properties:\n      name: ncc-for-${prefix}\n      region: ${region}\n  nccBinding:\n    type: databricks:MwsNccBinding\n    name: ncc_binding\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      workspaceId: ${databricksWorkspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up Databricks workspaces.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003eto create Network Connectivity Config objects.\n","properties":{"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account.\n"},"workspaceId":{"type":"string","description":"Identifier of the workspace to attach the NCC to. Change forces creation of a new resource.\n"}},"required":["networkConnectivityConfigId","workspaceId"],"inputProperties":{"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account.\n"},"workspaceId":{"type":"string","description":"Identifier of the workspace to attach the NCC to. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"requiredInputs":["networkConnectivityConfigId","workspaceId"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsNccBinding resources.\n","properties":{"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account.\n"},"workspaceId":{"type":"string","description":"Identifier of the workspace to attach the NCC to. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsNccPrivateEndpointRule:MwsNccPrivateEndpointRule":{"description":"Allows you to create a private endpoint in a Network Connectivity Config that can be used to [configure private connectivity from serverless compute](https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-private-link).\n\n\u003e This resource can only be used with an account-level provider!\n\n\u003e This feature is available on Azure, and in Public Preview on AWS.\n\n## Example Usage\n\nCreate private endpoints to an Azure storage account and an Azure standard load balancer.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\nconst region = config.requireObject\u003cany\u003e(\"region\");\nconst prefix = config.requireObject\u003cany\u003e(\"prefix\");\nconst ncc = new databricks.MwsNetworkConnectivityConfig(\"ncc\", {\n    name: `ncc-for-${prefix}`,\n    region: region,\n});\nconst storage = new databricks.MwsNccPrivateEndpointRule(\"storage\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    resourceId: \"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\",\n    groupId: \"blob\",\n});\nconst slb = new databricks.MwsNccPrivateEndpointRule(\"slb\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    resourceId: \"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\",\n    domainNames: [\"my-example.exampledomain.com\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\nregion = config.require_object(\"region\")\nprefix = config.require_object(\"prefix\")\nncc = databricks.MwsNetworkConnectivityConfig(\"ncc\",\n    name=f\"ncc-for-{prefix}\",\n    region=region)\nstorage = databricks.MwsNccPrivateEndpointRule(\"storage\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    resource_id=\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\",\n    group_id=\"blob\")\nslb = databricks.MwsNccPrivateEndpointRule(\"slb\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    resource_id=\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\",\n    domain_names=[\"my-example.exampledomain.com\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    
var config = new Config();\n    var region = config.RequireObject\u003cdynamic\u003e(\"region\");\n    var prefix = config.RequireObject\u003cdynamic\u003e(\"prefix\");\n    var ncc = new Databricks.MwsNetworkConnectivityConfig(\"ncc\", new()\n    {\n        Name = $\"ncc-for-{prefix}\",\n        Region = region,\n    });\n\n    var storage = new Databricks.MwsNccPrivateEndpointRule(\"storage\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        ResourceId = \"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\",\n        GroupId = \"blob\",\n    });\n\n    var slb = new Databricks.MwsNccPrivateEndpointRule(\"slb\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        ResourceId = \"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\",\n        DomainNames = new[]\n        {\n            \"my-example.exampledomain.com\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\tregion := cfg.RequireObject(\"region\")\n\t\tprefix := cfg.RequireObject(\"prefix\")\n\t\tncc, err := databricks.NewMwsNetworkConnectivityConfig(ctx, \"ncc\", \u0026databricks.MwsNetworkConnectivityConfigArgs{\n\t\t\tName:   pulumi.Sprintf(\"ncc-for-%v\", prefix),\n\t\t\tRegion: pulumi.Any(region),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccPrivateEndpointRule(ctx, \"storage\", \u0026databricks.MwsNccPrivateEndpointRuleArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tResourceId:                  pulumi.String(\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\"),\n\t\t\tGroupId:                     pulumi.String(\"blob\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccPrivateEndpointRule(ctx, \"slb\", \u0026databricks.MwsNccPrivateEndpointRuleArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tResourceId:                  pulumi.String(\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\"),\n\t\t\tDomainNames: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"my-example.exampledomain.com\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfig;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfigArgs;\nimport com.pulumi.databricks.MwsNccPrivateEndpointRule;\nimport com.pulumi.databricks.MwsNccPrivateEndpointRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    
}\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var region = config.get(\"region\");\n        final var prefix = config.get(\"prefix\");\n        var ncc = new MwsNetworkConnectivityConfig(\"ncc\", MwsNetworkConnectivityConfigArgs.builder()\n            .name(String.format(\"ncc-for-%s\", prefix))\n            .region(region)\n            .build());\n\n        var storage = new MwsNccPrivateEndpointRule(\"storage\", MwsNccPrivateEndpointRuleArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .resourceId(\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\")\n            .groupId(\"blob\")\n            .build());\n\n        var slb = new MwsNccPrivateEndpointRule(\"slb\", MwsNccPrivateEndpointRuleArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .resourceId(\"/subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\")\n            .domainNames(\"my-example.exampledomain.com\")\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  region:\n    type: dynamic\n  prefix:\n    type: dynamic\nresources:\n  ncc:\n    type: databricks:MwsNetworkConnectivityConfig\n    properties:\n      name: ncc-for-${prefix}\n      region: ${region}\n  storage:\n    type: databricks:MwsNccPrivateEndpointRule\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      resourceId: /subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/examplesa\n      groupId: blob\n  slb:\n    type: databricks:MwsNccPrivateEndpointRule\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      resourceId: /subscriptions/653bb673-1234-abcd-a90b-d064d5d53ca4/resourcegroups/example-resource-group/providers/Microsoft.Network/privatelinkServices/example-private-link-service\n      domainNames:\n        - my-example.exampledomain.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreate a private endpoint rule to an AWS VPC endpoint and to an S3 bucket.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\nconst region = config.requireObject\u003cany\u003e(\"region\");\nconst prefix = config.requireObject\u003cany\u003e(\"prefix\");\nconst ncc = new databricks.MwsNetworkConnectivityConfig(\"ncc\", {\n    name: `ncc-for-${prefix}`,\n    region: region,\n});\nconst storage = new databricks.MwsNccPrivateEndpointRule(\"storage\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    endpointService: \"com.amazonaws.us-east-1.s3\",\n    resourceNames: [\"bucket\"],\n});\nconst vpce = new databricks.MwsNccPrivateEndpointRule(\"vpce\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    endpointService: \"com.amazonaws.vpce.us-west-2.vpce-svc-xyz\",\n    domainNames: [\"subdomain.internal.net\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\nregion = config.require_object(\"region\")\nprefix = config.require_object(\"prefix\")\nncc = 
databricks.MwsNetworkConnectivityConfig(\"ncc\",\n    name=f\"ncc-for-{prefix}\",\n    region=region)\nstorage = databricks.MwsNccPrivateEndpointRule(\"storage\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    endpoint_service=\"com.amazonaws.us-east-1.s3\",\n    resource_names=[\"bucket\"])\nvpce = databricks.MwsNccPrivateEndpointRule(\"vpce\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    endpoint_service=\"com.amazonaws.vpce.us-west-2.vpce-svc-xyz\",\n    domain_names=[\"subdomain.internal.net\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    var region = config.RequireObject\u003cdynamic\u003e(\"region\");\n    var prefix = config.RequireObject\u003cdynamic\u003e(\"prefix\");\n    var ncc = new Databricks.MwsNetworkConnectivityConfig(\"ncc\", new()\n    {\n        Name = $\"ncc-for-{prefix}\",\n        Region = region,\n    });\n\n    var storage = new Databricks.MwsNccPrivateEndpointRule(\"storage\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        EndpointService = \"com.amazonaws.us-east-1.s3\",\n        ResourceNames = new[]\n        {\n            \"bucket\",\n        },\n    });\n\n    var vpce = new Databricks.MwsNccPrivateEndpointRule(\"vpce\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        EndpointService = \"com.amazonaws.vpce.us-west-2.vpce-svc-xyz\",\n        DomainNames = new[]\n        {\n            \"subdomain.internal.net\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\tregion := cfg.RequireObject(\"region\")\n\t\tprefix := cfg.RequireObject(\"prefix\")\n\t\tncc, err := databricks.NewMwsNetworkConnectivityConfig(ctx, \"ncc\", \u0026databricks.MwsNetworkConnectivityConfigArgs{\n\t\t\tName:   pulumi.Sprintf(\"ncc-for-%v\", prefix),\n\t\t\tRegion: pulumi.Any(region),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccPrivateEndpointRule(ctx, \"storage\", \u0026databricks.MwsNccPrivateEndpointRuleArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tEndpointService:             pulumi.String(\"com.amazonaws.us-east-1.s3\"),\n\t\t\tResourceNames: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"bucket\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccPrivateEndpointRule(ctx, \"vpce\", \u0026databricks.MwsNccPrivateEndpointRuleArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tEndpointService:             pulumi.String(\"com.amazonaws.vpce.us-west-2.vpce-svc-xyz\"),\n\t\t\tDomainNames: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"subdomain.internal.net\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfig;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfigArgs;\nimport 
com.pulumi.databricks.MwsNccPrivateEndpointRule;\nimport com.pulumi.databricks.MwsNccPrivateEndpointRuleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var region = config.get(\"region\");\n        final var prefix = config.get(\"prefix\");\n        var ncc = new MwsNetworkConnectivityConfig(\"ncc\", MwsNetworkConnectivityConfigArgs.builder()\n            .name(String.format(\"ncc-for-%s\", prefix))\n            .region(region)\n            .build());\n\n        var storage = new MwsNccPrivateEndpointRule(\"storage\", MwsNccPrivateEndpointRuleArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .endpointService(\"com.amazonaws.us-east-1.s3\")\n            .resourceNames(\"bucket\")\n            .build());\n\n        var vpce = new MwsNccPrivateEndpointRule(\"vpce\", MwsNccPrivateEndpointRuleArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .endpointService(\"com.amazonaws.vpce.us-west-2.vpce-svc-xyz\")\n            .domainNames(\"subdomain.internal.net\")\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  region:\n    type: dynamic\n  prefix:\n    type: dynamic\nresources:\n  ncc:\n    type: databricks:MwsNetworkConnectivityConfig\n    properties:\n      name: ncc-for-${prefix}\n      region: ${region}\n  storage:\n    type: databricks:MwsNccPrivateEndpointRule\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      endpointService: com.amazonaws.us-east-1.s3\n      resourceNames:\n        - bucket\n  vpce:\n    type: databricks:MwsNccPrivateEndpointRule\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      endpointService: com.amazonaws.vpce.us-west-2.vpce-svc-xyz\n      domainNames:\n        - subdomain.internal.net\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003eto create Network Connectivity Config objects.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNccBinding \" pulumi-lang-dotnet=\" databricks.MwsNccBinding \" pulumi-lang-go=\" MwsNccBinding \" pulumi-lang-python=\" MwsNccBinding \" pulumi-lang-yaml=\" databricks.MwsNccBinding \" pulumi-lang-java=\" databricks.MwsNccBinding \"\u003e databricks.MwsNccBinding \u003c/span\u003eto attach an NCC to a workspace.\n\n","properties":{"accountId":{"type":"string"},"connectionState":{"type":"string","description":"The current status of this private endpoint. The private endpoint rules are effective only if the connection state is `ESTABLISHED`. 
Remember that you must approve new endpoints on your resources in the Azure portal before they take effect.\nThe possible values are:\n* `PENDING`: The endpoint has been created and pending approval.\n* `ESTABLISHED`: The endpoint has been approved and is ready to be used in your serverless compute resources.\n* `REJECTED`: Connection was rejected by the private link resource owner.\n* `DISCONNECTED`: Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for clean-up.\n* `EXPIRED`: If the endpoint was created but not approved in 14 days, it will be EXPIRED.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean","description":"Whether this private endpoint is deactivated.\n"},"deactivatedAt":{"type":"integer","description":"Time in epoch milliseconds when this object was deactivated.\n"},"domainNames":{"type":"array","items":{"type":"string"},"description":"* On Azure: List of domain names of target private link service. Only used by private endpoints to customer-managed private endpoint services. Conflicts with \u003cspan pulumi-lang-nodejs=\"`groupId`\" pulumi-lang-dotnet=\"`GroupId`\" pulumi-lang-go=\"`groupId`\" pulumi-lang-python=\"`group_id`\" pulumi-lang-yaml=\"`groupId`\" pulumi-lang-java=\"`groupId`\"\u003e`group_id`\u003c/span\u003e.\n* On AWS: List of target resource FQDNs accessible via the VPC endpoint service. Only used by private endpoints towards a VPC endpoint service behind a customer-managed VPC endpoint service. Conflicts with \u003cspan pulumi-lang-nodejs=\"`resourceNames`\" pulumi-lang-dotnet=\"`ResourceNames`\" pulumi-lang-go=\"`resourceNames`\" pulumi-lang-python=\"`resource_names`\" pulumi-lang-yaml=\"`resourceNames`\" pulumi-lang-java=\"`resourceNames`\"\u003e`resource_names`\u003c/span\u003e.\n"},"enabled":{"type":"boolean","description":"Activation status. Only used by private endpoints towards an AWS S3 service. Update this field to activate/deactivate this private endpoint to allow egress access from serverless compute resources. Can only be updated after a private endpoint rule towards an AWS S3 service is successfully created.\n"},"endpointName":{"type":"string","description":"The name of the Azure private endpoint resource, e.g. \"databricks-088781b3-77fa-4132-b429-1af0d91bc593-pe-3cb31234\"\n"},"endpointService":{"type":"string","description":"Example `com.amazonaws.vpce.us-east-1.vpce-svc-123abcc1298abc123`. The full target AWS endpoint service name that connects to the destination resources of the private endpoint. Change forces creation of a new resource.\n"},"errorMessage":{"type":"string"},"groupId":{"type":"string","description":"Not used by customer-managed private endpoint services. The sub-resource type (group ID) of the target resource. Must be one of supported resource types (i.e., \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e, `sqlServer` , etc. Consult the [Azure documentation](https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#private-link-resource) for full list of supported resources). 
Note that to connect to workspace root storage (root DBFS), you need two endpoints, one for \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e and one for \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e. Change forces creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n"},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account. Change forces creation of a new resource.\n"},"resourceId":{"type":"string","description":"The Azure resource ID of the target resource. Change forces creation of a new resource.\n"},"resourceNames":{"type":"array","items":{"type":"string"},"description":"Only used by private endpoints towards AWS S3 service. List of globally unique S3 bucket names that will be accessed via the VPC endpoint. The bucket names must be in the same region as the NCC/endpoint service. Conflict with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n"},"ruleId":{"type":"string","description":"the ID of a private endpoint rule.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when this object was updated.\n"},"vpcEndpointId":{"type":"string","description":"The AWS VPC endpoint ID. You can use this ID to identify the VPC endpoint created by Databricks.\n"}},"required":["connectionState","creationTime","enabled","endpointName","networkConnectivityConfigId","ruleId","updatedTime","vpcEndpointId"],"inputProperties":{"accountId":{"type":"string"},"connectionState":{"type":"string","description":"The current status of this private endpoint. The private endpoint rules are effective only if the connection state is `ESTABLISHED`. Remember that you must approve new endpoints on your resources in the Azure portal before they take effect.\nThe possible values are:\n* `PENDING`: The endpoint has been created and pending approval.\n* `ESTABLISHED`: The endpoint has been approved and is ready to be used in your serverless compute resources.\n* `REJECTED`: Connection was rejected by the private link resource owner.\n* `DISCONNECTED`: Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for clean-up.\n* `EXPIRED`: If the endpoint was created but not approved in 14 days, it will be EXPIRED.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean","description":"Whether this private endpoint is deactivated.\n"},"deactivatedAt":{"type":"integer","description":"Time in epoch milliseconds when this object was deactivated.\n"},"domainNames":{"type":"array","items":{"type":"string"},"description":"* On Azure: List of domain names of target private link service. 
Only used by private endpoints to customer-managed private endpoint services. Conflicts with \u003cspan pulumi-lang-nodejs=\"`groupId`\" pulumi-lang-dotnet=\"`GroupId`\" pulumi-lang-go=\"`groupId`\" pulumi-lang-python=\"`group_id`\" pulumi-lang-yaml=\"`groupId`\" pulumi-lang-java=\"`groupId`\"\u003e`group_id`\u003c/span\u003e.\n* On AWS: List of target resource FQDNs accessible via the VPC endpoint service. Only used by private endpoints towards a VPC endpoint service behind a customer-managed VPC endpoint service. Conflicts with \u003cspan pulumi-lang-nodejs=\"`resourceNames`\" pulumi-lang-dotnet=\"`ResourceNames`\" pulumi-lang-go=\"`resourceNames`\" pulumi-lang-python=\"`resource_names`\" pulumi-lang-yaml=\"`resourceNames`\" pulumi-lang-java=\"`resourceNames`\"\u003e`resource_names`\u003c/span\u003e.\n"},"enabled":{"type":"boolean","description":"Activation status. Only used by private endpoints towards an AWS S3 service. Update this field to activate/deactivate this private endpoint to allow egress access from serverless compute resources. Can only be updated after a private endpoint rule towards an AWS S3 service is successfully created.\n"},"endpointName":{"type":"string","description":"The name of the Azure private endpoint resource, e.g. \"databricks-088781b3-77fa-4132-b429-1af0d91bc593-pe-3cb31234\"\n"},"endpointService":{"type":"string","description":"Example `com.amazonaws.vpce.us-east-1.vpce-svc-123abcc1298abc123`. The full target AWS endpoint service name that connects to the destination resources of the private endpoint. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"errorMessage":{"type":"string"},"groupId":{"type":"string","description":"Not used by customer-managed private endpoint services. The sub-resource type (group ID) of the target resource. Must be one of supported resource types (i.e., \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e, `sqlServer` , etc. Consult the [Azure documentation](https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#private-link-resource) for full list of supported resources). Note that to connect to workspace root storage (root DBFS), you need two endpoints, one for \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e and one for \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e. Change forces creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n","willReplaceOnChanges":true},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"resourceId":{"type":"string","description":"The Azure resource ID of the target resource. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"resourceNames":{"type":"array","items":{"type":"string"},"description":"Only used by private endpoints towards AWS S3 service. List of globally unique S3 bucket names that will be accessed via the VPC endpoint. The bucket names must be in the same region as the NCC/endpoint service. Conflict with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n"},"ruleId":{"type":"string","description":"the ID of a private endpoint rule.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when this object was updated.\n"},"vpcEndpointId":{"type":"string","description":"The AWS VPC endpoint ID. You can use this ID to identify the VPC endpoint created by Databricks.\n"}},"requiredInputs":["networkConnectivityConfigId"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsNccPrivateEndpointRule resources.\n","properties":{"accountId":{"type":"string"},"connectionState":{"type":"string","description":"The current status of this private endpoint. The private endpoint rules are effective only if the connection state is `ESTABLISHED`. Remember that you must approve new endpoints on your resources in the Azure portal before they take effect.\nThe possible values are:\n* `PENDING`: The endpoint has been created and pending approval.\n* `ESTABLISHED`: The endpoint has been approved and is ready to be used in your serverless compute resources.\n* `REJECTED`: Connection was rejected by the private link resource owner.\n* `DISCONNECTED`: Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for clean-up.\n* `EXPIRED`: If the endpoint was created but not approved in 14 days, it will be EXPIRED.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"deactivated":{"type":"boolean","description":"Whether this private endpoint is deactivated.\n"},"deactivatedAt":{"type":"integer","description":"Time in epoch milliseconds when this object was deactivated.\n"},"domainNames":{"type":"array","items":{"type":"string"},"description":"* On Azure: List of domain names of target private link service. Only used by private endpoints to customer-managed private endpoint services. Conflicts with \u003cspan pulumi-lang-nodejs=\"`groupId`\" pulumi-lang-dotnet=\"`GroupId`\" pulumi-lang-go=\"`groupId`\" pulumi-lang-python=\"`group_id`\" pulumi-lang-yaml=\"`groupId`\" pulumi-lang-java=\"`groupId`\"\u003e`group_id`\u003c/span\u003e.\n* On AWS: List of target resource FQDNs accessible via the VPC endpoint service. Only used by private endpoints towards a VPC endpoint service behind a customer-managed VPC endpoint service. Conflicts with \u003cspan pulumi-lang-nodejs=\"`resourceNames`\" pulumi-lang-dotnet=\"`ResourceNames`\" pulumi-lang-go=\"`resourceNames`\" pulumi-lang-python=\"`resource_names`\" pulumi-lang-yaml=\"`resourceNames`\" pulumi-lang-java=\"`resourceNames`\"\u003e`resource_names`\u003c/span\u003e.\n"},"enabled":{"type":"boolean","description":"Activation status. 
Only used by private endpoints towards an AWS S3 service. Update this field to activate/deactivate this private endpoint to allow egress access from serverless compute resources. Can only be updated after a private endpoint rule towards an AWS S3 service is successfully created.\n"},"endpointName":{"type":"string","description":"The name of the Azure private endpoint resource, e.g. \"databricks-088781b3-77fa-4132-b429-1af0d91bc593-pe-3cb31234\"\n"},"endpointService":{"type":"string","description":"Example `com.amazonaws.vpce.us-east-1.vpce-svc-123abcc1298abc123`. The full target AWS endpoint service name that connects to the destination resources of the private endpoint. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"errorMessage":{"type":"string"},"groupId":{"type":"string","description":"Not used by customer-managed private endpoint services. The sub-resource type (group ID) of the target resource. Must be one of supported resource types (i.e., \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e, `sqlServer` , etc. Consult the [Azure documentation](https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#private-link-resource) for full list of supported resources). Note that to connect to workspace root storage (root DBFS), you need two endpoints, one for \u003cspan pulumi-lang-nodejs=\"`blob`\" pulumi-lang-dotnet=\"`Blob`\" pulumi-lang-go=\"`blob`\" pulumi-lang-python=\"`blob`\" pulumi-lang-yaml=\"`blob`\" pulumi-lang-java=\"`blob`\"\u003e`blob`\u003c/span\u003e and one for \u003cspan pulumi-lang-nodejs=\"`dfs`\" pulumi-lang-dotnet=\"`Dfs`\" pulumi-lang-go=\"`dfs`\" pulumi-lang-python=\"`dfs`\" pulumi-lang-yaml=\"`dfs`\" pulumi-lang-java=\"`dfs`\"\u003e`dfs`\u003c/span\u003e. Change forces creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n","willReplaceOnChanges":true},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"resourceId":{"type":"string","description":"The Azure resource ID of the target resource. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"resourceNames":{"type":"array","items":{"type":"string"},"description":"Only used by private endpoints towards AWS S3 service. List of globally unique S3 bucket names that will be accessed via the VPC endpoint. The bucket names must be in the same region as the NCC/endpoint service. 
Conflict with \u003cspan pulumi-lang-nodejs=\"`domainNames`\" pulumi-lang-dotnet=\"`DomainNames`\" pulumi-lang-go=\"`domainNames`\" pulumi-lang-python=\"`domain_names`\" pulumi-lang-yaml=\"`domainNames`\" pulumi-lang-java=\"`domainNames`\"\u003e`domain_names`\u003c/span\u003e.\n"},"ruleId":{"type":"string","description":"the ID of a private endpoint rule.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when this object was updated.\n"},"vpcEndpointId":{"type":"string","description":"The AWS VPC endpoint ID. You can use this ID to identify the VPC endpoint created by Databricks.\n"}},"type":"object"}},"databricks:index/mwsNetworkConnectivityConfig:MwsNetworkConnectivityConfig":{"description":"Allows you to create a Network Connectivity Config that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages serverless network connectivity configs](https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serverless-firewall).\n\n\u003e This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\nconst region = config.requireObject\u003cany\u003e(\"region\");\nconst prefix = config.requireObject\u003cany\u003e(\"prefix\");\nconst ncc = new databricks.MwsNetworkConnectivityConfig(\"ncc\", {\n    name: `ncc-for-${prefix}`,\n    region: region,\n});\nconst nccBinding = new databricks.MwsNccBinding(\"ncc_binding\", {\n    networkConnectivityConfigId: ncc.networkConnectivityConfigId,\n    workspaceId: databricksWorkspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\nregion = config.require_object(\"region\")\nprefix = config.require_object(\"prefix\")\nncc = databricks.MwsNetworkConnectivityConfig(\"ncc\",\n    name=f\"ncc-for-{prefix}\",\n    region=region)\nncc_binding = databricks.MwsNccBinding(\"ncc_binding\",\n    network_connectivity_config_id=ncc.network_connectivity_config_id,\n    workspace_id=databricks_workspace_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    var region = config.RequireObject\u003cdynamic\u003e(\"region\");\n    var prefix = config.RequireObject\u003cdynamic\u003e(\"prefix\");\n    var ncc = new Databricks.MwsNetworkConnectivityConfig(\"ncc\", new()\n    {\n        Name = $\"ncc-for-{prefix}\",\n        Region = region,\n    });\n\n    var nccBinding = new Databricks.MwsNccBinding(\"ncc_binding\", new()\n    {\n        NetworkConnectivityConfigId = ncc.NetworkConnectivityConfigId,\n        WorkspaceId = databricksWorkspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg 
:= config.New(ctx, \"\")\n\t\tregion := cfg.RequireObject(\"region\")\n\t\tprefix := cfg.RequireObject(\"prefix\")\n\t\tncc, err := databricks.NewMwsNetworkConnectivityConfig(ctx, \"ncc\", \u0026databricks.MwsNetworkConnectivityConfigArgs{\n\t\t\tName:   pulumi.Sprintf(\"ncc-for-%v\", prefix),\n\t\t\tRegion: pulumi.Any(region),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNccBinding(ctx, \"ncc_binding\", \u0026databricks.MwsNccBindingArgs{\n\t\t\tNetworkConnectivityConfigId: ncc.NetworkConnectivityConfigId,\n\t\t\tWorkspaceId:                 pulumi.Any(databricksWorkspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfig;\nimport com.pulumi.databricks.MwsNetworkConnectivityConfigArgs;\nimport com.pulumi.databricks.MwsNccBinding;\nimport com.pulumi.databricks.MwsNccBindingArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var region = config.get(\"region\");\n        final var prefix = config.get(\"prefix\");\n        var ncc = new MwsNetworkConnectivityConfig(\"ncc\", MwsNetworkConnectivityConfigArgs.builder()\n            .name(String.format(\"ncc-for-%s\", prefix))\n            .region(region)\n            .build());\n\n        var nccBinding = new MwsNccBinding(\"nccBinding\", MwsNccBindingArgs.builder()\n            .networkConnectivityConfigId(ncc.networkConnectivityConfigId())\n            .workspaceId(databricksWorkspaceId)\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  region:\n    type: dynamic\n  prefix:\n    type: dynamic\nresources:\n  ncc:\n    type: databricks:MwsNetworkConnectivityConfig\n    properties:\n      name: ncc-for-${prefix}\n      region: ${region}\n  nccBinding:\n    type: databricks:MwsNccBinding\n    name: ncc_binding\n    properties:\n      networkConnectivityConfigId: ${ncc.networkConnectivityConfigId}\n      workspaceId: ${databricksWorkspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up Databricks workspaces.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNccBinding \" pulumi-lang-dotnet=\" databricks.MwsNccBinding \" pulumi-lang-go=\" MwsNccBinding \" pulumi-lang-python=\" MwsNccBinding \" pulumi-lang-yaml=\" databricks.MwsNccBinding \" pulumi-lang-java=\" databricks.MwsNccBinding \"\u003e databricks.MwsNccBinding \u003c/span\u003eto attach an NCC to a workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNccPrivateEndpointRule \" pulumi-lang-dotnet=\" databricks.MwsNccPrivateEndpointRule \" pulumi-lang-go=\" MwsNccPrivateEndpointRule \" pulumi-lang-python=\" MwsNccPrivateEndpointRule \" pulumi-lang-yaml=\" 
databricks.MwsNccPrivateEndpointRule \" pulumi-lang-java=\" databricks.MwsNccPrivateEndpointRule \"\u003e databricks.MwsNccPrivateEndpointRule \u003c/span\u003eto create a private endpoint rule.\n\n","properties":{"accountId":{"type":"string"},"creationTime":{"type":"integer","description":"time in epoch milliseconds when this object was created.\n"},"egressConfig":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfig:MwsNetworkConnectivityConfigEgressConfig","description":"block containing information about network connectivity rules that apply to network traffic from your serverless compute resources. Consists of the following fields:\n"},"name":{"type":"string","description":"Name of the network connectivity configuration. The name can contain alphanumeric characters, hyphens, and underscores. The length must be between 3 and 30 characters. The name must match the regular expression `^[0-9a-zA-Z-_]{3,30}$`. Change forces creation of a new resource.\n"},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account\n"},"region":{"type":"string","description":"Region of the Network Connectivity Config. NCCs can only be referenced by your workspaces in the same region. Change forces creation of a new resource.\n"},"updatedTime":{"type":"integer","description":"time in epoch milliseconds when this object was updated.\n"}},"required":["accountId","creationTime","egressConfig","name","networkConnectivityConfigId","region","updatedTime"],"inputProperties":{"accountId":{"type":"string"},"creationTime":{"type":"integer","description":"time in epoch milliseconds when this object was created.\n"},"egressConfig":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfig:MwsNetworkConnectivityConfigEgressConfig","description":"block containing information about network connectivity rules that apply to network traffic from your serverless compute resources. Consists of the following fields:\n"},"name":{"type":"string","description":"Name of the network connectivity configuration. The name can contain alphanumeric characters, hyphens, and underscores. The length must be between 3 and 30 characters. The name must match the regular expression `^[0-9a-zA-Z-_]{3,30}$`. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account\n"},"region":{"type":"string","description":"Region of the Network Connectivity Config. NCCs can only be referenced by your workspaces in the same region. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"updatedTime":{"type":"integer","description":"time in epoch milliseconds when this object was updated.\n"}},"requiredInputs":["region"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsNetworkConnectivityConfig resources.\n","properties":{"accountId":{"type":"string"},"creationTime":{"type":"integer","description":"time in epoch milliseconds when this object was created.\n"},"egressConfig":{"$ref":"#/types/databricks:index/MwsNetworkConnectivityConfigEgressConfig:MwsNetworkConnectivityConfigEgressConfig","description":"block containing information about network connectivity rules that apply to network traffic from your serverless compute resources. 
Consists of the following fields:\n"},"name":{"type":"string","description":"Name of the network connectivity configuration. The name can contain alphanumeric characters, hyphens, and underscores. The length must be between 3 and 30 characters. The name must match the regular expression `^[0-9a-zA-Z-_]{3,30}$`. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"networkConnectivityConfigId":{"type":"string","description":"Canonical unique identifier of Network Connectivity Config in Databricks Account\n"},"region":{"type":"string","description":"Region of the Network Connectivity Config. NCCs can only be referenced by your workspaces in the same region. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"updatedTime":{"type":"integer","description":"time in epoch milliseconds when this object was updated.\n"}},"type":"object"}},"databricks:index/mwsNetworks:MwsNetworks":{"description":"## Databricks on AWS usage\n\nUse this resource to configure VPC \u0026 subnets for new workspaces within AWS and GCP.\n\n\u003e This resource can only be used with an account-level provider!\n\n\u003e The \u003cspan pulumi-lang-nodejs=\"`gkeClusterServiceIpRange`\" pulumi-lang-dotnet=\"`GkeClusterServiceIpRange`\" pulumi-lang-go=\"`gkeClusterServiceIpRange`\" pulumi-lang-python=\"`gke_cluster_service_ip_range`\" pulumi-lang-yaml=\"`gkeClusterServiceIpRange`\" pulumi-lang-java=\"`gkeClusterServiceIpRange`\"\u003e`gke_cluster_service_ip_range`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`gkePodServiceIpRange`\" pulumi-lang-dotnet=\"`GkePodServiceIpRange`\" pulumi-lang-go=\"`gkePodServiceIpRange`\" pulumi-lang-python=\"`gke_pod_service_ip_range`\" pulumi-lang-yaml=\"`gkePodServiceIpRange`\" pulumi-lang-java=\"`gkePodServiceIpRange`\"\u003e`gke_pod_service_ip_range`\u003c/span\u003e arguments in \u003cspan pulumi-lang-nodejs=\"`gcpManagedNetworkConfig`\" pulumi-lang-dotnet=\"`GcpManagedNetworkConfig`\" pulumi-lang-go=\"`gcpManagedNetworkConfig`\" pulumi-lang-python=\"`gcp_managed_network_config`\" pulumi-lang-yaml=\"`gcpManagedNetworkConfig`\" pulumi-lang-java=\"`gcpManagedNetworkConfig`\"\u003e`gcp_managed_network_config`\u003c/span\u003e are now deprecated and no longer supported. If you have already created a workspace using these fields, it is safe to remove them from your Pulumi template.\n\n* Databricks must have access to at least two subnets for each workspace, with each subnet in a different Availability Zone. You cannot specify more than one Databricks workspace subnet per Availability Zone in the Create network configuration API call. You can have more than one subnet per Availability Zone as part of your network setup, but you can choose only one subnet per Availability Zone for the Databricks workspace.\n* Databricks assigns two IP addresses per node, one for management traffic and one for Spark applications. 
The total number of instances for each subnet is equal to half of the available IP addresses.\n* Each subnet must have a netmask between /17 and /25.\n* Subnets must be private.\n* Subnets must have outbound access to the public network using a\u003cspan pulumi-lang-nodejs=\" awsNatGateway \" pulumi-lang-dotnet=\" AwsNatGateway \" pulumi-lang-go=\" awsNatGateway \" pulumi-lang-python=\" aws_nat_gateway \" pulumi-lang-yaml=\" awsNatGateway \" pulumi-lang-java=\" awsNatGateway \"\u003e aws_nat_gateway \u003c/span\u003eand aws_internet_gateway, or other similar customer-managed appliance infrastructure.\n* The NAT gateway must be set up in its subnet (public_subnets in the example below) that routes quad-zero (0.0.0.0/0) traffic to an internet gateway or other customer-managed appliance infrastructure.\n\n\u003e The NAT gateway needs only one IP address per AZ. Hence, the public subnet only needs two IP addresses. In order to limit the number of IP addresses in the public subnet, you can specify a secondary CIDR block (cidr_block_public) using the argument\u003cspan pulumi-lang-nodejs=\" secondaryCidrBlocks \" pulumi-lang-dotnet=\" SecondaryCidrBlocks \" pulumi-lang-go=\" secondaryCidrBlocks \" pulumi-lang-python=\" secondary_cidr_blocks \" pulumi-lang-yaml=\" secondaryCidrBlocks \" pulumi-lang-java=\" secondaryCidrBlocks \"\u003e secondary_cidr_blocks \u003c/span\u003ethen pass it to the\u003cspan pulumi-lang-nodejs=\" publicSubnets \" pulumi-lang-dotnet=\" PublicSubnets \" pulumi-lang-go=\" publicSubnets \" pulumi-lang-python=\" public_subnets \" pulumi-lang-yaml=\" publicSubnets \" pulumi-lang-java=\" publicSubnets \"\u003e public_subnets \u003c/span\u003eargument. Please review the [IPv4 CIDR block association restrictions](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) when choosing the secondary cidr block.\n\nPlease follow this complete runnable example with new VPC and new workspace setup. Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host=\"https://accounts.cloud.databricks.com/\"`) and another for the workspace you've created with \u003cspan pulumi-lang-nodejs=\"`databricks.MwsWorkspaces`\" pulumi-lang-dotnet=\"`databricks.MwsWorkspaces`\" pulumi-lang-go=\"`MwsWorkspaces`\" pulumi-lang-python=\"`MwsWorkspaces`\" pulumi-lang-yaml=\"`databricks.MwsWorkspaces`\" pulumi-lang-java=\"`databricks.MwsWorkspaces`\"\u003e`databricks.MwsWorkspaces`\u003c/span\u003e resource. If you want both creations of workspaces \u0026 clusters within the same Pulumi module (essentially the same directory), you should use the provider aliasing feature of Pulumi. We strongly recommend having one terraform module to create workspace + PAT token and the rest in different modules.\n\n## Databricks on GCP usage\n\n\u003e Initialize provider with `alias = \"mws\"`, `host  = \"https://accounts.gcp.databricks.com\"` and use `provider = databricks.mws`\n\nUse this resource to [configure VPC](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/customer-managed-vpc.html) \u0026 subnet for new workspaces within GCP. 
It is essential to understand that this will require you to configure your provider separately for the multiple workspaces resources.\n\n* Databricks must have access to a subnet in the same region as the workspace, of which IP range will be used to allocate your workspace's GCE cluster nodes.\n* The subnet must have a netmask between /29 and /9.\n* Subnet must have outbound access to the public network using a\u003cspan pulumi-lang-nodejs=\" gcpComputeRouterNat \" pulumi-lang-dotnet=\" GcpComputeRouterNat \" pulumi-lang-go=\" gcpComputeRouterNat \" pulumi-lang-python=\" gcp_compute_router_nat \" pulumi-lang-yaml=\" gcpComputeRouterNat \" pulumi-lang-java=\" gcpComputeRouterNat \"\u003e gcp_compute_router_nat \u003c/span\u003eor other similar customer-managed appliance infrastructure.\n\nPlease follow this complete runnable example with new VPC and new workspace setup. Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host=\"https://accounts.gcp.databricks.com/\"`) and another for the workspace you've created with \u003cspan pulumi-lang-nodejs=\"`databricks.MwsWorkspaces`\" pulumi-lang-dotnet=\"`databricks.MwsWorkspaces`\" pulumi-lang-go=\"`MwsWorkspaces`\" pulumi-lang-python=\"`MwsWorkspaces`\" pulumi-lang-yaml=\"`databricks.MwsWorkspaces`\" pulumi-lang-java=\"`databricks.MwsWorkspaces`\"\u003e`databricks.MwsWorkspaces`\u003c/span\u003e resource. If you want both creations of workspaces \u0026 clusters within the same Pulumi module (essentially the same directory), you should use the provider aliasing feature of Pulumi. We strongly recommend having one terraform module to create workspace + PAT token and the rest in different modules.\n\n## Example Usage\n\n### Creating a Databricks on GCP workspace\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as google from \"@pulumi/google\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst dbxPrivateVpc = new google.index.ComputeNetwork(\"dbx_private_vpc\", {\n    project: googleProject,\n    name: `tf-network-${suffix.result}`,\n    autoCreateSubnetworks: false,\n});\nconst network_with_private_secondary_ip_ranges = new google.index.ComputeSubnetwork(\"network-with-private-secondary-ip-ranges\", {\n    name: `test-dbx-${suffix.result}`,\n    ipCidrRange: \"10.0.0.0/16\",\n    region: \"us-central1\",\n    network: dbxPrivateVpc.id,\n    privateIpGoogleAccess: true,\n});\nconst router = new google.index.ComputeRouter(\"router\", {\n    name: `my-router-${suffix.result}`,\n    region: network_with_private_secondary_ip_ranges.region,\n    network: dbxPrivateVpc.id,\n});\nconst nat = new google.index.ComputeRouterNat(\"nat\", {\n    name: `my-router-nat-${suffix.result}`,\n    router: router.name,\n    region: router.region,\n    natIpAllocateOption: \"AUTO_ONLY\",\n    sourceSubnetworkIpRangesToNat: \"ALL_SUBNETWORKS_ALL_IP_RANGES\",\n});\nconst _this = new databricks.MwsNetworks(\"this\", {\n    accountId: databricksAccountId,\n    networkName: `test-demo-${suffix.result}`,\n    gcpNetworkInfo: {\n        networkProjectId: googleProject,\n        vpcId: dbxPrivateVpc.name,\n        subnetId: networkWithPrivateSecondaryIpRanges.name,\n        
subnetRegion: networkWithPrivateSecondaryIpRanges.region,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_google as google\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\ndbx_private_vpc = google.index.ComputeNetwork(\"dbx_private_vpc\",\n    project=google_project,\n    name=f\"tf-network-{suffix['result']}\",\n    auto_create_subnetworks=False)\nnetwork_with_private_secondary_ip_ranges = google.index.ComputeSubnetwork(\"network-with-private-secondary-ip-ranges\",\n    name=f\"test-dbx-{suffix['result']}\",\n    ip_cidr_range=\"10.0.0.0/16\",\n    region=\"us-central1\",\n    network=dbx_private_vpc.id,\n    private_ip_google_access=True)\nrouter = google.index.ComputeRouter(\"router\",\n    name=f\"my-router-{suffix['result']}\",\n    region=network_with_private_secondary_ip_ranges.region,\n    network=dbx_private_vpc.id)\nnat = google.index.ComputeRouterNat(\"nat\",\n    name=f\"my-router-nat-{suffix['result']}\",\n    router=router.name,\n    region=router.region,\n    nat_ip_allocate_option=\"AUTO_ONLY\",\n    source_subnetwork_ip_ranges_to_nat=\"ALL_SUBNETWORKS_ALL_IP_RANGES\")\nthis = databricks.MwsNetworks(\"this\",\n    account_id=databricks_account_id,\n    network_name=f\"test-demo-{suffix['result']}\",\n    gcp_network_info={\n        \"network_project_id\": google_project,\n        \"vpc_id\": dbx_private_vpc[\"name\"],\n        \"subnet_id\": network_with_private_secondary_ip_ranges[\"name\"],\n        \"subnet_region\": network_with_private_secondary_ip_ranges[\"region\"],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Google = Pulumi.Google;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var dbxPrivateVpc = new Google.Index.ComputeNetwork(\"dbx_private_vpc\", new()\n    {\n        Project = googleProject,\n        Name = $\"tf-network-{suffix.Result}\",\n        AutoCreateSubnetworks = false,\n    });\n\n    var network_with_private_secondary_ip_ranges = new Google.Index.ComputeSubnetwork(\"network-with-private-secondary-ip-ranges\", new()\n    {\n        Name = $\"test-dbx-{suffix.Result}\",\n        IpCidrRange = \"10.0.0.0/16\",\n        Region = \"us-central1\",\n        Network = dbxPrivateVpc.Id,\n        PrivateIpGoogleAccess = true,\n    });\n\n    var router = new Google.Index.ComputeRouter(\"router\", new()\n    {\n        Name = $\"my-router-{suffix.Result}\",\n        Region = network_with_private_secondary_ip_ranges.Region,\n        Network = dbxPrivateVpc.Id,\n    });\n\n    var nat = new Google.Index.ComputeRouterNat(\"nat\", new()\n    {\n        Name = $\"my-router-nat-{suffix.Result}\",\n        Router = router.Name,\n        Region = router.Region,\n        NatIpAllocateOption = \"AUTO_ONLY\",\n        SourceSubnetworkIpRangesToNat = \"ALL_SUBNETWORKS_ALL_IP_RANGES\",\n    });\n\n    var @this = new Databricks.MwsNetworks(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        NetworkName = $\"test-demo-{suffix.Result}\",\n        GcpNetworkInfo = new Databricks.Inputs.MwsNetworksGcpNetworkInfoArgs\n        {\n            NetworkProjectId = 
googleProject,\n            VpcId = dbxPrivateVpc.Name,\n            SubnetId = networkWithPrivateSecondaryIpRanges.Name,\n            SubnetRegion = networkWithPrivateSecondaryIpRanges.Region,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-google/sdk/go/google\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tdbxPrivateVpc, err := google.NewComputeNetwork(ctx, \"dbx_private_vpc\", \u0026google.ComputeNetworkArgs{\n\t\t\tProject:               googleProject,\n\t\t\tName:                  fmt.Sprintf(\"tf-network-%v\", suffix.Result),\n\t\t\tAutoCreateSubnetworks: false,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tnetwork_with_private_secondary_ip_ranges, err := google.NewComputeSubnetwork(ctx, \"network-with-private-secondary-ip-ranges\", \u0026google.ComputeSubnetworkArgs{\n\t\t\tName:                  fmt.Sprintf(\"test-dbx-%v\", suffix.Result),\n\t\t\tIpCidrRange:           \"10.0.0.0/16\",\n\t\t\tRegion:                \"us-central1\",\n\t\t\tNetwork:               dbxPrivateVpc.Id,\n\t\t\tPrivateIpGoogleAccess: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trouter, err := google.NewComputeRouter(ctx, \"router\", \u0026google.ComputeRouterArgs{\n\t\t\tName:    fmt.Sprintf(\"my-router-%v\", suffix.Result),\n\t\t\tRegion:  network_with_private_secondary_ip_ranges.Region,\n\t\t\tNetwork: dbxPrivateVpc.Id,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = google.NewComputeRouterNat(ctx, \"nat\", \u0026google.ComputeRouterNatArgs{\n\t\t\tName:                          fmt.Sprintf(\"my-router-nat-%v\", suffix.Result),\n\t\t\tRouter:                        router.Name,\n\t\t\tRegion:                        router.Region,\n\t\t\tNatIpAllocateOption:           \"AUTO_ONLY\",\n\t\t\tSourceSubnetworkIpRangesToNat: \"ALL_SUBNETWORKS_ALL_IP_RANGES\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsNetworks(ctx, \"this\", \u0026databricks.MwsNetworksArgs{\n\t\t\tAccountId:   pulumi.Any(databricksAccountId),\n\t\t\tNetworkName: pulumi.Sprintf(\"test-demo-%v\", suffix.Result),\n\t\t\tGcpNetworkInfo: \u0026databricks.MwsNetworksGcpNetworkInfoArgs{\n\t\t\t\tNetworkProjectId: pulumi.Any(googleProject),\n\t\t\t\tVpcId:            dbxPrivateVpc.Name,\n\t\t\t\tSubnetId:         pulumi.Any(networkWithPrivateSecondaryIpRanges.Name),\n\t\t\t\tSubnetRegion:     pulumi.Any(networkWithPrivateSecondaryIpRanges.Region),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.google.ComputeNetwork;\nimport com.pulumi.google.ComputeNetworkArgs;\nimport com.pulumi.google.ComputeSubnetwork;\nimport com.pulumi.google.ComputeSubnetworkArgs;\nimport com.pulumi.google.ComputeRouter;\nimport com.pulumi.google.ComputeRouterArgs;\nimport com.pulumi.google.ComputeRouterNat;\nimport com.pulumi.google.ComputeRouterNatArgs;\nimport com.pulumi.databricks.MwsNetworks;\nimport 
com.pulumi.databricks.MwsNetworksArgs;\nimport com.pulumi.databricks.inputs.MwsNetworksGcpNetworkInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        var dbxPrivateVpc = new ComputeNetwork(\"dbxPrivateVpc\", ComputeNetworkArgs.builder()\n            .project(googleProject)\n            .name(String.format(\"tf-network-%s\", suffix.result()))\n            .autoCreateSubnetworks(false)\n            .build());\n\n        var network_with_private_secondary_ip_ranges = new ComputeSubnetwork(\"network-with-private-secondary-ip-ranges\", ComputeSubnetworkArgs.builder()\n            .name(String.format(\"test-dbx-%s\", suffix.result()))\n            .ipCidrRange(\"10.0.0.0/16\")\n            .region(\"us-central1\")\n            .network(dbxPrivateVpc.id())\n            .privateIpGoogleAccess(true)\n            .build());\n\n        var router = new ComputeRouter(\"router\", ComputeRouterArgs.builder()\n            .name(String.format(\"my-router-%s\", suffix.result()))\n            .region(network_with_private_secondary_ip_ranges.region())\n            .network(dbxPrivateVpc.id())\n            .build());\n\n        var nat = new ComputeRouterNat(\"nat\", ComputeRouterNatArgs.builder()\n            .name(String.format(\"my-router-nat-%s\", suffix.result()))\n            .router(router.name())\n            .region(router.region())\n            .natIpAllocateOption(\"AUTO_ONLY\")\n            .sourceSubnetworkIpRangesToNat(\"ALL_SUBNETWORKS_ALL_IP_RANGES\")\n            .build());\n\n        var this_ = new MwsNetworks(\"this\", MwsNetworksArgs.builder()\n            .accountId(databricksAccountId)\n            .networkName(String.format(\"test-demo-%s\", suffix.result()))\n            .gcpNetworkInfo(MwsNetworksGcpNetworkInfoArgs.builder()\n                .networkProjectId(googleProject)\n                .vpcId(dbxPrivateVpc.name())\n                .subnetId(networkWithPrivateSecondaryIpRanges.name())\n                .subnetRegion(networkWithPrivateSecondaryIpRanges.region())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  dbxPrivateVpc:\n    type: google:ComputeNetwork\n    name: dbx_private_vpc\n    properties:\n      project: ${googleProject}\n      name: tf-network-${suffix.result}\n      autoCreateSubnetworks: false\n  network-with-private-secondary-ip-ranges:\n    type: google:ComputeSubnetwork\n    properties:\n      name: test-dbx-${suffix.result}\n      ipCidrRange: 10.0.0.0/16\n      region: us-central1\n      network: ${dbxPrivateVpc.id}\n      privateIpGoogleAccess: true\n  router:\n    type: google:ComputeRouter\n    properties:\n      name: my-router-${suffix.result}\n      region: ${[\"network-with-private-secondary-ip-ranges\"].region}\n      network: ${dbxPrivateVpc.id}\n  nat:\n    type: google:ComputeRouterNat\n    properties:\n      name: my-router-nat-${suffix.result}\n      router: ${router.name}\n      region: ${router.region}\n      natIpAllocateOption: AUTO_ONLY\n      sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES\n  this:\n    type: databricks:MwsNetworks\n    
properties:\n      accountId: ${databricksAccountId}\n      networkName: test-demo-${suffix.result}\n      gcpNetworkInfo:\n        networkProjectId: ${googleProject}\n        vpcId: ${dbxPrivateVpc.name}\n        subnetId: ${networkWithPrivateSecondaryIpRanges.name}\n        subnetRegion: ${networkWithPrivateSecondaryIpRanges.region}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn order to create a VPC [that leverages GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) you would need to add the \u003cspan pulumi-lang-nodejs=\"`vpcEndpointId`\" pulumi-lang-dotnet=\"`VpcEndpointId`\" pulumi-lang-go=\"`vpcEndpointId`\" pulumi-lang-python=\"`vpc_endpoint_id`\" pulumi-lang-yaml=\"`vpcEndpointId`\" pulumi-lang-java=\"`vpcEndpointId`\"\u003e`vpc_endpoint_id`\u003c/span\u003e Attributes from\u003cspan pulumi-lang-nodejs=\" mwsVpcEndpoint \" pulumi-lang-dotnet=\" MwsVpcEndpoint \" pulumi-lang-go=\" mwsVpcEndpoint \" pulumi-lang-python=\" mws_vpc_endpoint \" pulumi-lang-yaml=\" mwsVpcEndpoint \" pulumi-lang-java=\" mwsVpcEndpoint \"\u003e mws_vpc_endpoint \u003c/span\u003eresources into the\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eresource. For example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsNetworks(\"this\", {\n    accountId: databricksAccountId,\n    networkName: `test-demo-${suffix.result}`,\n    gcpNetworkInfo: {\n        networkProjectId: googleProject,\n        vpcId: dbxPrivateVpc.name,\n        subnetId: networkWithPrivateSecondaryIpRanges.name,\n        subnetRegion: networkWithPrivateSecondaryIpRanges.region,\n    },\n    vpcEndpoints: {\n        dataplaneRelays: [relay.vpcEndpointId],\n        restApis: [workspace.vpcEndpointId],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MwsNetworks(\"this\",\n    account_id=databricks_account_id,\n    network_name=f\"test-demo-{suffix['result']}\",\n    gcp_network_info={\n        \"network_project_id\": google_project,\n        \"vpc_id\": dbx_private_vpc[\"name\"],\n        \"subnet_id\": network_with_private_secondary_ip_ranges[\"name\"],\n        \"subnet_region\": network_with_private_secondary_ip_ranges[\"region\"],\n    },\n    vpc_endpoints={\n        \"dataplane_relays\": [relay[\"vpcEndpointId\"]],\n        \"rest_apis\": [workspace[\"vpcEndpointId\"]],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsNetworks(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        NetworkName = $\"test-demo-{suffix.Result}\",\n        GcpNetworkInfo = new Databricks.Inputs.MwsNetworksGcpNetworkInfoArgs\n        {\n            NetworkProjectId = googleProject,\n            VpcId = dbxPrivateVpc.Name,\n            SubnetId = networkWithPrivateSecondaryIpRanges.Name,\n            SubnetRegion = networkWithPrivateSecondaryIpRanges.Region,\n        },\n        VpcEndpoints = new 
Databricks.Inputs.MwsNetworksVpcEndpointsArgs\n        {\n            DataplaneRelays = new[]\n            {\n                relay.VpcEndpointId,\n            },\n            RestApis = new[]\n            {\n                workspace.VpcEndpointId,\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsNetworks(ctx, \"this\", \u0026databricks.MwsNetworksArgs{\n\t\t\tAccountId:   pulumi.Any(databricksAccountId),\n\t\t\tNetworkName: pulumi.Sprintf(\"test-demo-%v\", suffix.Result),\n\t\t\tGcpNetworkInfo: \u0026databricks.MwsNetworksGcpNetworkInfoArgs{\n\t\t\t\tNetworkProjectId: pulumi.Any(googleProject),\n\t\t\t\tVpcId:            pulumi.Any(dbxPrivateVpc.Name),\n\t\t\t\tSubnetId:         pulumi.Any(networkWithPrivateSecondaryIpRanges.Name),\n\t\t\t\tSubnetRegion:     pulumi.Any(networkWithPrivateSecondaryIpRanges.Region),\n\t\t\t},\n\t\t\tVpcEndpoints: \u0026databricks.MwsNetworksVpcEndpointsArgs{\n\t\t\t\tDataplaneRelays: pulumi.StringArray{\n\t\t\t\t\trelay.VpcEndpointId,\n\t\t\t\t},\n\t\t\t\tRestApis: pulumi.StringArray{\n\t\t\t\t\tworkspace.VpcEndpointId,\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworks;\nimport com.pulumi.databricks.MwsNetworksArgs;\nimport com.pulumi.databricks.inputs.MwsNetworksGcpNetworkInfoArgs;\nimport com.pulumi.databricks.inputs.MwsNetworksVpcEndpointsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsNetworks(\"this\", MwsNetworksArgs.builder()\n            .accountId(databricksAccountId)\n            .networkName(String.format(\"test-demo-%s\", suffix.result()))\n            .gcpNetworkInfo(MwsNetworksGcpNetworkInfoArgs.builder()\n                .networkProjectId(googleProject)\n                .vpcId(dbxPrivateVpc.name())\n                .subnetId(networkWithPrivateSecondaryIpRanges.name())\n                .subnetRegion(networkWithPrivateSecondaryIpRanges.region())\n                .build())\n            .vpcEndpoints(MwsNetworksVpcEndpointsArgs.builder()\n                .dataplaneRelays(relay.vpcEndpointId())\n                .restApis(workspace.vpcEndpointId())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MwsNetworks\n    properties:\n      accountId: ${databricksAccountId}\n      networkName: test-demo-${suffix.result}\n      gcpNetworkInfo:\n        networkProjectId: ${googleProject}\n        vpcId: ${dbxPrivateVpc.name}\n        subnetId: ${networkWithPrivateSecondaryIpRanges.name}\n        subnetRegion: ${networkWithPrivateSecondaryIpRanges.region}\n      vpcEndpoints:\n        dataplaneRelays:\n          - ${relay.vpcEndpointId}\n        restApis:\n          - ${workspace.vpcEndpointId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Modifying networks on running workspaces (AWS only)\n\nDue to specifics of platform APIs, 
changing any attribute of the network configuration causes \u003cspan pulumi-lang-nodejs=\"`databricks.MwsNetworks`\" pulumi-lang-dotnet=\"`databricks.MwsNetworks`\" pulumi-lang-go=\"`MwsNetworks`\" pulumi-lang-python=\"`MwsNetworks`\" pulumi-lang-yaml=\"`databricks.MwsNetworks`\" pulumi-lang-java=\"`databricks.MwsNetworks`\"\u003e`databricks.MwsNetworks`\u003c/span\u003e to be re-created (deleted \u0026 added again), with a special case for running workspaces. Once a network configuration is attached to a running databricks_mws_workspaces, you cannot delete it, and `pulumi up` would fail with an `INVALID_STATE: Unable to delete, Network is being used by active workspace X` error. To modify any attribute of a network, you have to perform three separate `pulumi up` steps:\n\n1. Create a new \u003cspan pulumi-lang-nodejs=\"`databricks.MwsNetworks`\" pulumi-lang-dotnet=\"`databricks.MwsNetworks`\" pulumi-lang-go=\"`MwsNetworks`\" pulumi-lang-python=\"`MwsNetworks`\" pulumi-lang-yaml=\"`databricks.MwsNetworks`\" pulumi-lang-java=\"`databricks.MwsNetworks`\"\u003e`databricks.MwsNetworks`\u003c/span\u003e resource.\n2. Update the \u003cspan pulumi-lang-nodejs=\"`databricks.MwsWorkspaces`\" pulumi-lang-dotnet=\"`databricks.MwsWorkspaces`\" pulumi-lang-go=\"`MwsWorkspaces`\" pulumi-lang-python=\"`MwsWorkspaces`\" pulumi-lang-yaml=\"`databricks.MwsWorkspaces`\" pulumi-lang-java=\"`databricks.MwsWorkspaces`\"\u003e`databricks.MwsWorkspaces`\u003c/span\u003e to point to the new \u003cspan pulumi-lang-nodejs=\"`networkId`\" pulumi-lang-dotnet=\"`NetworkId`\" pulumi-lang-go=\"`networkId`\" pulumi-lang-python=\"`network_id`\" pulumi-lang-yaml=\"`networkId`\" pulumi-lang-java=\"`networkId`\"\u003e`network_id`\u003c/span\u003e.\n3. Delete the old \u003cspan pulumi-lang-nodejs=\"`databricks.MwsNetworks`\" pulumi-lang-dotnet=\"`databricks.MwsNetworks`\" pulumi-lang-go=\"`MwsNetworks`\" pulumi-lang-python=\"`MwsNetworks`\" pulumi-lang-yaml=\"`databricks.MwsNetworks`\" pulumi-lang-java=\"`databricks.MwsNetworks`\"\u003e`databricks.MwsNetworks`\u003c/span\u003e resource.\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n* Provisioning Databricks on AWS with Private Link guide.\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide.\n* Provisioning Databricks on GCP guide.\n* Provisioning Databricks workspaces on GCP with Private Service Connect guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003eto register\u003cspan pulumi-lang-nodejs=\" awsVpcEndpoint \" pulumi-lang-dotnet=\" AwsVpcEndpoint \" pulumi-lang-go=\" awsVpcEndpoint \" pulumi-lang-python=\" aws_vpc_endpoint \" pulumi-lang-yaml=\" awsVpcEndpoint \" pulumi-lang-java=\" awsVpcEndpoint \"\u003e aws_vpc_endpoint \u003c/span\u003eresources with Databricks such that they can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003econfiguration.\n*\u003cspan 
pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eto create a Private Access Setting that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) or [GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true},"creationTime":{"type":"integer"},"errorMessages":{"type":"array","items":{"$ref":"#/types/databricks:index/MwsNetworksErrorMessage:MwsNetworksErrorMessage"}},"gcpNetworkInfo":{"$ref":"#/types/databricks:index/MwsNetworksGcpNetworkInfo:MwsNetworksGcpNetworkInfo","description":"a block consists of Google Cloud specific information for this network, for example the VPC ID, subnet ID, and secondary IP ranges. 
It has the following fields:\n"},"networkId":{"type":"string","description":"(String) id of network to be used for\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource.\n"},"networkName":{"type":"string","description":"name under which this network is registered\n"},"securityGroupIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_security_group\n"},"subnetIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_subnet\n"},"vpcEndpoints":{"$ref":"#/types/databricks:index/MwsNetworksVpcEndpoints:MwsNetworksVpcEndpoints","description":"mapping of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003efor PrivateLink or Private Service Connect connections\n"},"vpcId":{"type":"string","description":"aws_vpc id\n"},"vpcStatus":{"type":"string","description":"(String) VPC attachment status\n"},"workspaceId":{"type":"string","description":"(Integer) id of associated workspace\n"}},"required":["accountId","creationTime","errorMessages","networkId","networkName","vpcEndpoints","vpcStatus","workspaceId"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true,"willReplaceOnChanges":true},"creationTime":{"type":"integer"},"errorMessages":{"type":"array","items":{"$ref":"#/types/databricks:index/MwsNetworksErrorMessage:MwsNetworksErrorMessage"}},"gcpNetworkInfo":{"$ref":"#/types/databricks:index/MwsNetworksGcpNetworkInfo:MwsNetworksGcpNetworkInfo","description":"a block consists of Google Cloud specific information for this network, for example the VPC ID, subnet ID, and secondary IP ranges. 
It has the following fields:\n"},"networkId":{"type":"string","description":"(String) id of network to be used for\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource.\n"},"networkName":{"type":"string","description":"name under which this network is registered\n","willReplaceOnChanges":true},"securityGroupIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_security_group\n","willReplaceOnChanges":true},"subnetIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_subnet\n","willReplaceOnChanges":true},"vpcEndpoints":{"$ref":"#/types/databricks:index/MwsNetworksVpcEndpoints:MwsNetworksVpcEndpoints","description":"mapping of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003efor PrivateLink or Private Service Connect connections\n","willReplaceOnChanges":true},"vpcId":{"type":"string","description":"aws_vpc id\n","willReplaceOnChanges":true},"vpcStatus":{"type":"string","description":"(String) VPC attachment status\n"},"workspaceId":{"type":"string","description":"(Integer) id of associated workspace\n"}},"requiredInputs":["accountId","networkName"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsNetworks resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true,"willReplaceOnChanges":true},"creationTime":{"type":"integer"},"errorMessages":{"type":"array","items":{"$ref":"#/types/databricks:index/MwsNetworksErrorMessage:MwsNetworksErrorMessage"}},"gcpNetworkInfo":{"$ref":"#/types/databricks:index/MwsNetworksGcpNetworkInfo:MwsNetworksGcpNetworkInfo","description":"a block consists of Google Cloud specific information for this network, for example the VPC ID, subnet ID, and secondary IP ranges. 
It has the following fields:\n"},"networkId":{"type":"string","description":"(String) id of network to be used for\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource.\n"},"networkName":{"type":"string","description":"name under which this network is registered\n","willReplaceOnChanges":true},"securityGroupIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_security_group\n","willReplaceOnChanges":true},"subnetIds":{"type":"array","items":{"type":"string"},"description":"ids of aws_subnet\n","willReplaceOnChanges":true},"vpcEndpoints":{"$ref":"#/types/databricks:index/MwsNetworksVpcEndpoints:MwsNetworksVpcEndpoints","description":"mapping of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003efor PrivateLink or Private Service Connect connections\n","willReplaceOnChanges":true},"vpcId":{"type":"string","description":"aws_vpc id\n","willReplaceOnChanges":true},"vpcStatus":{"type":"string","description":"(String) VPC attachment status\n"},"workspaceId":{"type":"string","description":"(Integer) id of associated workspace\n"}},"type":"object"}},"databricks:index/mwsPermissionAssignment:MwsPermissionAssignment":{"description":"This resource is used to assign account-level users, service principals and groups to a Databricks workspace. 
To configure additional entitlements such as cluster creation, please use databricks_entitlements.\n\n\u003e This resource can only be used with an account-level provider!\n\n## Example Usage\n\nIn account context, adding account-level group to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst dataEng = new databricks.Group(\"data_eng\", {displayName: \"Data Engineering\"});\nconst addAdminGroup = new databricks.MwsPermissionAssignment(\"add_admin_group\", {\n    workspaceId: _this.workspaceId,\n    principalId: dataEng.id,\n    permissions: [\"ADMIN\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ndata_eng = databricks.Group(\"data_eng\", display_name=\"Data Engineering\")\nadd_admin_group = databricks.MwsPermissionAssignment(\"add_admin_group\",\n    workspace_id=this[\"workspaceId\"],\n    principal_id=data_eng.id,\n    permissions=[\"ADMIN\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var dataEng = new Databricks.Group(\"data_eng\", new()\n    {\n        DisplayName = \"Data Engineering\",\n    });\n\n    var addAdminGroup = new Databricks.MwsPermissionAssignment(\"add_admin_group\", new()\n    {\n        WorkspaceId = @this.WorkspaceId,\n        PrincipalId = dataEng.Id,\n        Permissions = new[]\n        {\n            \"ADMIN\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tdataEng, err := databricks.NewGroup(ctx, \"data_eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsPermissionAssignment(ctx, \"add_admin_group\", \u0026databricks.MwsPermissionAssignmentArgs{\n\t\t\tWorkspaceId: pulumi.Any(this.WorkspaceId),\n\t\t\tPrincipalId: dataEng.ID(),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"ADMIN\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.MwsPermissionAssignment;\nimport com.pulumi.databricks.MwsPermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var dataEng = new Group(\"dataEng\", GroupArgs.builder()\n            .displayName(\"Data Engineering\")\n            .build());\n\n        var addAdminGroup = new MwsPermissionAssignment(\"addAdminGroup\", MwsPermissionAssignmentArgs.builder()\n            .workspaceId(this_.workspaceId())\n            .principalId(dataEng.id())\n            .permissions(\"ADMIN\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  dataEng:\n    type: databricks:Group\n    name: data_eng\n    properties:\n      displayName: Data 
Engineering\n  addAdminGroup:\n    type: databricks:MwsPermissionAssignment\n    name: add_admin_group\n    properties:\n      workspaceId: ${this.workspaceId}\n      principalId: ${dataEng.id}\n      permissions:\n        - ADMIN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn account context, adding account-level user to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = new databricks.User(\"me\", {userName: \"me@example.com\"});\nconst addUser = new databricks.MwsPermissionAssignment(\"add_user\", {\n    workspaceId: _this.workspaceId,\n    principalId: me.id,\n    permissions: [\"USER\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.User(\"me\", user_name=\"me@example.com\")\nadd_user = databricks.MwsPermissionAssignment(\"add_user\",\n    workspace_id=this[\"workspaceId\"],\n    principal_id=me.id,\n    permissions=[\"USER\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = new Databricks.User(\"me\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var addUser = new Databricks.MwsPermissionAssignment(\"add_user\", new()\n    {\n        WorkspaceId = @this.WorkspaceId,\n        PrincipalId = me.Id,\n        Permissions = new[]\n        {\n            \"USER\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.NewUser(ctx, \"me\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsPermissionAssignment(ctx, \"add_user\", \u0026databricks.MwsPermissionAssignmentArgs{\n\t\t\tWorkspaceId: pulumi.Any(this.WorkspaceId),\n\t\t\tPrincipalId: me.ID(),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USER\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.MwsPermissionAssignment;\nimport com.pulumi.databricks.MwsPermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var me = new User(\"me\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var addUser = new MwsPermissionAssignment(\"addUser\", MwsPermissionAssignmentArgs.builder()\n            .workspaceId(this_.workspaceId())\n            .principalId(me.id())\n            .permissions(\"USER\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  me:\n    type: databricks:User\n    properties:\n      userName: me@example.com\n  addUser:\n    type: databricks:MwsPermissionAssignment\n    name: add_user\n    properties:\n      workspaceId: 
${this.workspaceId}\n      principalId: ${me.id}\n      permissions:\n        - USER\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn account context, adding account-level service principal to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sp = new databricks.ServicePrincipal(\"sp\", {displayName: \"Automation-only SP\"});\nconst addAdminSpn = new databricks.MwsPermissionAssignment(\"add_admin_spn\", {\n    workspaceId: _this.workspaceId,\n    principalId: sp.id,\n    permissions: [\"ADMIN\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsp = databricks.ServicePrincipal(\"sp\", display_name=\"Automation-only SP\")\nadd_admin_spn = databricks.MwsPermissionAssignment(\"add_admin_spn\",\n    workspace_id=this[\"workspaceId\"],\n    principal_id=sp.id,\n    permissions=[\"ADMIN\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        DisplayName = \"Automation-only SP\",\n    });\n\n    var addAdminSpn = new Databricks.MwsPermissionAssignment(\"add_admin_spn\", new()\n    {\n        WorkspaceId = @this.WorkspaceId,\n        PrincipalId = sp.Id,\n        Permissions = new[]\n        {\n            \"ADMIN\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsp, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation-only SP\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsPermissionAssignment(ctx, \"add_admin_spn\", \u0026databricks.MwsPermissionAssignmentArgs{\n\t\t\tWorkspaceId: pulumi.Any(this.WorkspaceId),\n\t\t\tPrincipalId: sp.ID(),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"ADMIN\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.MwsPermissionAssignment;\nimport com.pulumi.databricks.MwsPermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .displayName(\"Automation-only SP\")\n            .build());\n\n        var addAdminSpn = new MwsPermissionAssignment(\"addAdminSpn\", MwsPermissionAssignmentArgs.builder()\n            .workspaceId(this_.workspaceId())\n            .principalId(sp.id())\n            .permissions(\"ADMIN\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Automation-only SP\n  addAdminSpn:\n    type: 
databricks:MwsPermissionAssignment\n    name: add_admin_spn\n    properties:\n      workspaceId: ${this.workspaceId}\n      principalId: ${sp.id}\n      permissions:\n        - ADMIN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.PermissionAssignment \" pulumi-lang-dotnet=\" databricks.PermissionAssignment \" pulumi-lang-go=\" PermissionAssignment \" pulumi-lang-python=\" PermissionAssignment \" pulumi-lang-yaml=\" databricks.PermissionAssignment \" pulumi-lang-java=\" databricks.PermissionAssignment \"\u003e databricks.PermissionAssignment \u003c/span\u003eto manage permission assignment from a workspace context\n\n","properties":{"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n"},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. 
The principal ID can be retrieved using the SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources.\n"},"workspaceId":{"type":"string","description":"Databricks workspace ID.\n"}},"required":["permissions","principalId","workspaceId"],"inputProperties":{"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n","willReplaceOnChanges":true},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. The principal ID can be retrieved using the SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"Databricks workspace ID.\n","willReplaceOnChanges":true}},"requiredInputs":["permissions","principalId","workspaceId"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsPermissionAssignment resources.\n","properties":{"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. 
This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n","willReplaceOnChanges":true},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. The principal ID can be retrieved using the SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"Databricks workspace ID.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsPrivateAccessSettings:MwsPrivateAccessSettings":{"description":"Allows you to create a Private Access Setting resource that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) or [GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html)\n\n\u003e This resource can only be used with an account-level provider!\n\nIt is strongly recommended that customers read the [Enable AWS Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) [Enable GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) documentation before trying to leverage this resource.\n\n## Databricks on AWS usage\n\n\u003e Initialize provider with `alias = \"mws\"`, `host  = \"https://accounts.cloud.databricks.com\"` and use `provider = databricks.mws`\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst pas = new databricks.MwsPrivateAccessSettings(\"pas\", {\n    accountId: databricksAccountId,\n    privateAccessSettingsName: `Private Access Settings for ${prefix}`,\n    region: region,\n    publicAccessEnabled: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npas = databricks.MwsPrivateAccessSettings(\"pas\",\n    account_id=databricks_account_id,\n    private_access_settings_name=f\"Private Access Settings for {prefix}\",\n    region=region,\n    
public_access_enabled=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var pas = new Databricks.MwsPrivateAccessSettings(\"pas\", new()\n    {\n        AccountId = databricksAccountId,\n        PrivateAccessSettingsName = $\"Private Access Settings for {prefix}\",\n        Region = region,\n        PublicAccessEnabled = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsPrivateAccessSettings(ctx, \"pas\", \u0026databricks.MwsPrivateAccessSettingsArgs{\n\t\t\tAccountId:                 pulumi.Any(databricksAccountId),\n\t\t\tPrivateAccessSettingsName: pulumi.Sprintf(\"Private Access Settings for %v\", prefix),\n\t\t\tRegion:                    pulumi.Any(region),\n\t\t\tPublicAccessEnabled:       pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsPrivateAccessSettings;\nimport com.pulumi.databricks.MwsPrivateAccessSettingsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var pas = new MwsPrivateAccessSettings(\"pas\", MwsPrivateAccessSettingsArgs.builder()\n            .accountId(databricksAccountId)\n            .privateAccessSettingsName(String.format(\"Private Access Settings for %s\", prefix))\n            .region(region)\n            .publicAccessEnabled(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  pas:\n    type: databricks:MwsPrivateAccessSettings\n    properties:\n      accountId: ${databricksAccountId}\n      privateAccessSettingsName: Private Access Settings for ${prefix}\n      region: ${region}\n      publicAccessEnabled: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThe `databricks_mws_private_access_settings.pas.private_access_settings_id` can then be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsWorkspaces(\"this\", {\n    awsRegion: region,\n    workspaceName: prefix,\n    credentialsId: thisDatabricksMwsCredentials.credentialsId,\n    storageConfigurationId: thisDatabricksMwsStorageConfigurations.storageConfigurationId,\n    networkId: thisDatabricksMwsNetworks.networkId,\n    privateAccessSettingsId: pas.privateAccessSettingsId,\n    pricingTier: \"ENTERPRISE\",\n}, {\n    dependsOn: [thisDatabricksMwsNetworks],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = 
databricks.MwsWorkspaces(\"this\",\n    aws_region=region,\n    workspace_name=prefix,\n    credentials_id=this_databricks_mws_credentials[\"credentialsId\"],\n    storage_configuration_id=this_databricks_mws_storage_configurations[\"storageConfigurationId\"],\n    network_id=this_databricks_mws_networks[\"networkId\"],\n    private_access_settings_id=pas[\"privateAccessSettingsId\"],\n    pricing_tier=\"ENTERPRISE\",\n    opts = pulumi.ResourceOptions(depends_on=[this_databricks_mws_networks]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AwsRegion = region,\n        WorkspaceName = prefix,\n        CredentialsId = thisDatabricksMwsCredentials.CredentialsId,\n        StorageConfigurationId = thisDatabricksMwsStorageConfigurations.StorageConfigurationId,\n        NetworkId = thisDatabricksMwsNetworks.NetworkId,\n        PrivateAccessSettingsId = pas.PrivateAccessSettingsId,\n        PricingTier = \"ENTERPRISE\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            thisDatabricksMwsNetworks,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAwsRegion:               pulumi.Any(region),\n\t\t\tWorkspaceName:           pulumi.Any(prefix),\n\t\t\tCredentialsId:           pulumi.Any(thisDatabricksMwsCredentials.CredentialsId),\n\t\t\tStorageConfigurationId:  pulumi.Any(thisDatabricksMwsStorageConfigurations.StorageConfigurationId),\n\t\t\tNetworkId:               pulumi.Any(thisDatabricksMwsNetworks.NetworkId),\n\t\t\tPrivateAccessSettingsId: pulumi.Any(pas.PrivateAccessSettingsId),\n\t\t\tPricingTier:             pulumi.String(\"ENTERPRISE\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthisDatabricksMwsNetworks,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsWorkspaces(\"this\", MwsWorkspacesArgs.builder()\n            .awsRegion(region)\n            .workspaceName(prefix)\n            .credentialsId(thisDatabricksMwsCredentials.credentialsId())\n            .storageConfigurationId(thisDatabricksMwsStorageConfigurations.storageConfigurationId())\n            .networkId(thisDatabricksMwsNetworks.networkId())\n            .privateAccessSettingsId(pas.privateAccessSettingsId())\n            .pricingTier(\"ENTERPRISE\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(thisDatabricksMwsNetworks)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: 
databricks:MwsWorkspaces\n    properties:\n      awsRegion: ${region}\n      workspaceName: ${prefix}\n      credentialsId: ${thisDatabricksMwsCredentials.credentialsId}\n      storageConfigurationId: ${thisDatabricksMwsStorageConfigurations.storageConfigurationId}\n      networkId: ${thisDatabricksMwsNetworks.networkId}\n      privateAccessSettingsId: ${pas.privateAccessSettingsId}\n      pricingTier: ENTERPRISE\n    options:\n      dependsOn:\n        - ${thisDatabricksMwsNetworks}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Databricks on GCP usage\n\n\u003e Initialize provider with `alias = \"mws\"`, `host  = \"https://accounts.gcp.databricks.com\"` and use `provider = databricks.mws`\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsWorkspaces(\"this\", {\n    workspaceName: \"gcp-workspace\",\n    location: subnetRegion,\n    cloudResourceContainer: {\n        gcp: {\n            projectId: googleProject,\n        },\n    },\n    networkId: thisDatabricksMwsNetworks.networkId,\n    privateAccessSettingsId: pas.privateAccessSettingsId,\n    pricingTier: \"PREMIUM\",\n}, {\n    dependsOn: [thisDatabricksMwsNetworks],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MwsWorkspaces(\"this\",\n    workspace_name=\"gcp-workspace\",\n    location=subnet_region,\n    cloud_resource_container={\n        \"gcp\": {\n            \"project_id\": google_project,\n        },\n    },\n    network_id=this_databricks_mws_networks[\"networkId\"],\n    private_access_settings_id=pas[\"privateAccessSettingsId\"],\n    pricing_tier=\"PREMIUM\",\n    opts = pulumi.ResourceOptions(depends_on=[this_databricks_mws_networks]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        WorkspaceName = \"gcp-workspace\",\n        Location = subnetRegion,\n        CloudResourceContainer = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerArgs\n        {\n            Gcp = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerGcpArgs\n            {\n                ProjectId = googleProject,\n            },\n        },\n        NetworkId = thisDatabricksMwsNetworks.NetworkId,\n        PrivateAccessSettingsId = pas.PrivateAccessSettingsId,\n        PricingTier = \"PREMIUM\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            thisDatabricksMwsNetworks,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tWorkspaceName: pulumi.String(\"gcp-workspace\"),\n\t\t\tLocation:      pulumi.Any(subnetRegion),\n\t\t\tCloudResourceContainer: \u0026databricks.MwsWorkspacesCloudResourceContainerArgs{\n\t\t\t\tGcp: \u0026databricks.MwsWorkspacesCloudResourceContainerGcpArgs{\n\t\t\t\t\tProjectId: pulumi.Any(googleProject),\n\t\t\t\t},\n\t\t\t},\n\t\t\tNetworkId:               pulumi.Any(thisDatabricksMwsNetworks.NetworkId),\n\t\t\tPrivateAccessSettingsId: pulumi.Any(pas.PrivateAccessSettingsId),\n\t\t\tPricingTier: 
            pulumi.String(\"PREMIUM\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthisDatabricksMwsNetworks,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerGcpArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsWorkspaces(\"this\", MwsWorkspacesArgs.builder()\n            .workspaceName(\"gcp-workspace\")\n            .location(subnetRegion)\n            .cloudResourceContainer(MwsWorkspacesCloudResourceContainerArgs.builder()\n                .gcp(MwsWorkspacesCloudResourceContainerGcpArgs.builder()\n                    .projectId(googleProject)\n                    .build())\n                .build())\n            .networkId(thisDatabricksMwsNetworks.networkId())\n            .privateAccessSettingsId(pas.privateAccessSettingsId())\n            .pricingTier(\"PREMIUM\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(thisDatabricksMwsNetworks)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MwsWorkspaces\n    properties:\n      workspaceName: gcp-workspace\n      location: ${subnetRegion}\n      cloudResourceContainer:\n        gcp:\n          projectId: ${googleProject}\n      networkId: ${thisDatabricksMwsNetworks.networkId}\n      privateAccessSettingsId: ${pas.privateAccessSettingsId}\n      pricingTier: PREMIUM\n    options:\n      dependsOn:\n        - ${thisDatabricksMwsNetworks}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n* Provisioning Databricks on AWS with Private Link guide.\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide.\n* Provisioning Databricks workspaces on GCP with Private Service Connect guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003eto register\u003cspan pulumi-lang-nodejs=\" awsVpcEndpoint \" pulumi-lang-dotnet=\" AwsVpcEndpoint \" pulumi-lang-go=\" awsVpcEndpoint \" pulumi-lang-python=\" aws_vpc_endpoint \" pulumi-lang-yaml=\" awsVpcEndpoint \" pulumi-lang-java=\" awsVpcEndpoint \"\u003e aws_vpc_endpoint \u003c/span\u003eresources with Databricks such that they can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e 
databricks.MwsNetworks \u003c/span\u003econfiguration.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n","properties":{"accountId":{"type":"string","deprecationMessage":"Configuring \u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e at the resource-level is deprecated; please specify it in the `provider {}` configuration block instead"},"allowedVpcEndpointIds":{"type":"array","items":{"type":"string"},"description":"An array of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003e\u003cspan pulumi-lang-nodejs=\"`vpcEndpointId`\" pulumi-lang-dotnet=\"`VpcEndpointId`\" pulumi-lang-go=\"`vpcEndpointId`\" pulumi-lang-python=\"`vpc_endpoint_id`\" pulumi-lang-yaml=\"`vpcEndpointId`\" pulumi-lang-java=\"`vpcEndpointId`\"\u003e`vpc_endpoint_id`\u003c/span\u003e (not \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e). Only used when \u003cspan pulumi-lang-nodejs=\"`privateAccessLevel`\" pulumi-lang-dotnet=\"`PrivateAccessLevel`\" pulumi-lang-go=\"`privateAccessLevel`\" pulumi-lang-python=\"`private_access_level`\" pulumi-lang-yaml=\"`privateAccessLevel`\" pulumi-lang-java=\"`privateAccessLevel`\"\u003e`private_access_level`\u003c/span\u003e is set to `ENDPOINT`. This is an allow list of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat in your account that can connect to your\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eover AWS PrivateLink. 
If hybrid access to your workspace is enabled by setting \u003cspan pulumi-lang-nodejs=\"`publicAccessEnabled`\" pulumi-lang-dotnet=\"`PublicAccessEnabled`\" pulumi-lang-go=\"`publicAccessEnabled`\" pulumi-lang-python=\"`public_access_enabled`\" pulumi-lang-yaml=\"`publicAccessEnabled`\" pulumi-lang-java=\"`publicAccessEnabled`\"\u003e`public_access_enabled`\u003c/span\u003e to true, then this control only works for PrivateLink connections. To control how your workspace is accessed via public internet, see the article for databricks_ip_access_list.\n"},"privateAccessLevel":{"type":"string","description":"The private access level controls which VPC endpoints can connect to the UI or API of any workspace that attaches this private access settings object. `ACCOUNT` level access _(default)_ lets only\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat are registered in your Databricks account connect to your databricks_mws_workspaces. `ENDPOINT` level access lets only specified\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003econnect to your workspace. Please see the \u003cspan pulumi-lang-nodejs=\"`allowedVpcEndpointIds`\" pulumi-lang-dotnet=\"`AllowedVpcEndpointIds`\" pulumi-lang-go=\"`allowedVpcEndpointIds`\" pulumi-lang-python=\"`allowed_vpc_endpoint_ids`\" pulumi-lang-yaml=\"`allowedVpcEndpointIds`\" pulumi-lang-java=\"`allowedVpcEndpointIds`\"\u003e`allowed_vpc_endpoint_ids`\u003c/span\u003e documentation for more details.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of Private Access Settings in Databricks Account\n"},"privateAccessSettingsName":{"type":"string","description":"Name of Private Access Settings in Databricks Account\n"},"publicAccessEnabled":{"type":"boolean","description":"If \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, the\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003ecan be accessed over the\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003eas well as over the public network. 
In such a case, you could also configure an\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003efor the workspace, to restrict the source networks that could be used to access it over the public network. If \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e, the workspace can be accessed only over VPC endpoints, and not over the public network. Once explicitly set, this field becomes mandatory.\n"},"region":{"type":"string","description":"Region of AWS VPC or the Google Cloud VPC network\n"}},"required":["accountId","privateAccessSettingsId","privateAccessSettingsName","region"],"inputProperties":{"accountId":{"type":"string","deprecationMessage":"Configuring \u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e at the resource-level is deprecated; please specify it in the `provider {}` configuration block instead"},"allowedVpcEndpointIds":{"type":"array","items":{"type":"string"},"description":"An array of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003e\u003cspan pulumi-lang-nodejs=\"`vpcEndpointId`\" pulumi-lang-dotnet=\"`VpcEndpointId`\" pulumi-lang-go=\"`vpcEndpointId`\" pulumi-lang-python=\"`vpc_endpoint_id`\" pulumi-lang-yaml=\"`vpcEndpointId`\" pulumi-lang-java=\"`vpcEndpointId`\"\u003e`vpc_endpoint_id`\u003c/span\u003e (not \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e). Only used when \u003cspan pulumi-lang-nodejs=\"`privateAccessLevel`\" pulumi-lang-dotnet=\"`PrivateAccessLevel`\" pulumi-lang-go=\"`privateAccessLevel`\" pulumi-lang-python=\"`private_access_level`\" pulumi-lang-yaml=\"`privateAccessLevel`\" pulumi-lang-java=\"`privateAccessLevel`\"\u003e`private_access_level`\u003c/span\u003e is set to `ENDPOINT`. This is an allow list of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat in your account that can connect to your\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eover AWS PrivateLink. 
If hybrid access to your workspace is enabled by setting \u003cspan pulumi-lang-nodejs=\"`publicAccessEnabled`\" pulumi-lang-dotnet=\"`PublicAccessEnabled`\" pulumi-lang-go=\"`publicAccessEnabled`\" pulumi-lang-python=\"`public_access_enabled`\" pulumi-lang-yaml=\"`publicAccessEnabled`\" pulumi-lang-java=\"`publicAccessEnabled`\"\u003e`public_access_enabled`\u003c/span\u003e to true, then this control only works for PrivateLink connections. To control how your workspace is accessed via public internet, see the article for databricks_ip_access_list.\n"},"privateAccessLevel":{"type":"string","description":"The private access level controls which VPC endpoints can connect to the UI or API of any workspace that attaches this private access settings object. `ACCOUNT` level access _(default)_ lets only\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat are registered in your Databricks account connect to your databricks_mws_workspaces. `ENDPOINT` level access lets only specified\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003econnect to your workspace. Please see the \u003cspan pulumi-lang-nodejs=\"`allowedVpcEndpointIds`\" pulumi-lang-dotnet=\"`AllowedVpcEndpointIds`\" pulumi-lang-go=\"`allowedVpcEndpointIds`\" pulumi-lang-python=\"`allowed_vpc_endpoint_ids`\" pulumi-lang-yaml=\"`allowedVpcEndpointIds`\" pulumi-lang-java=\"`allowedVpcEndpointIds`\"\u003e`allowed_vpc_endpoint_ids`\u003c/span\u003e documentation for more details.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of Private Access Settings in Databricks Account\n"},"privateAccessSettingsName":{"type":"string","description":"Name of Private Access Settings in Databricks Account\n"},"publicAccessEnabled":{"type":"boolean","description":"If \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, the\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003ecan be accessed over the\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003eas well as over the public network. 
In such a case, you could also configure an\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003efor the workspace, to restrict the source networks that could be used to access it over the public network. If \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e, the workspace can be accessed only over VPC endpoints, and not over the public network. Once explicitly set, this field becomes mandatory.\n"},"region":{"type":"string","description":"Region of AWS VPC or the Google Cloud VPC network\n"}},"requiredInputs":["privateAccessSettingsName","region"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsPrivateAccessSettings resources.\n","properties":{"accountId":{"type":"string","deprecationMessage":"Configuring \u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e at the resource-level is deprecated; please specify it in the `provider {}` configuration block instead"},"allowedVpcEndpointIds":{"type":"array","items":{"type":"string"},"description":"An array of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003e\u003cspan pulumi-lang-nodejs=\"`vpcEndpointId`\" pulumi-lang-dotnet=\"`VpcEndpointId`\" pulumi-lang-go=\"`vpcEndpointId`\" pulumi-lang-python=\"`vpc_endpoint_id`\" pulumi-lang-yaml=\"`vpcEndpointId`\" pulumi-lang-java=\"`vpcEndpointId`\"\u003e`vpc_endpoint_id`\u003c/span\u003e (not \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e). Only used when \u003cspan pulumi-lang-nodejs=\"`privateAccessLevel`\" pulumi-lang-dotnet=\"`PrivateAccessLevel`\" pulumi-lang-go=\"`privateAccessLevel`\" pulumi-lang-python=\"`private_access_level`\" pulumi-lang-yaml=\"`privateAccessLevel`\" pulumi-lang-java=\"`privateAccessLevel`\"\u003e`private_access_level`\u003c/span\u003e is set to `ENDPOINT`. 
This is an allow list of\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat in your account that can connect to your\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eover AWS PrivateLink. If hybrid access to your workspace is enabled by setting \u003cspan pulumi-lang-nodejs=\"`publicAccessEnabled`\" pulumi-lang-dotnet=\"`PublicAccessEnabled`\" pulumi-lang-go=\"`publicAccessEnabled`\" pulumi-lang-python=\"`public_access_enabled`\" pulumi-lang-yaml=\"`publicAccessEnabled`\" pulumi-lang-java=\"`publicAccessEnabled`\"\u003e`public_access_enabled`\u003c/span\u003e to true, then this control only works for PrivateLink connections. To control how your workspace is accessed via public internet, see the article for databricks_ip_access_list.\n"},"privateAccessLevel":{"type":"string","description":"The private access level controls which VPC endpoints can connect to the UI or API of any workspace that attaches this private access settings object. `ACCOUNT` level access _(default)_ lets only\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003ethat are registered in your Databricks account connect to your databricks_mws_workspaces. `ENDPOINT` level access lets only specified\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003econnect to your workspace. 
Please see the \u003cspan pulumi-lang-nodejs=\"`allowedVpcEndpointIds`\" pulumi-lang-dotnet=\"`AllowedVpcEndpointIds`\" pulumi-lang-go=\"`allowedVpcEndpointIds`\" pulumi-lang-python=\"`allowed_vpc_endpoint_ids`\" pulumi-lang-yaml=\"`allowedVpcEndpointIds`\" pulumi-lang-java=\"`allowedVpcEndpointIds`\"\u003e`allowed_vpc_endpoint_ids`\u003c/span\u003e documentation for more details.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of Private Access Settings in Databricks Account\n"},"privateAccessSettingsName":{"type":"string","description":"Name of Private Access Settings in Databricks Account\n"},"publicAccessEnabled":{"type":"boolean","description":"If \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, the\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003ecan be accessed over the\u003cspan pulumi-lang-nodejs=\" databricks.MwsVpcEndpoint \" pulumi-lang-dotnet=\" databricks.MwsVpcEndpoint \" pulumi-lang-go=\" MwsVpcEndpoint \" pulumi-lang-python=\" MwsVpcEndpoint \" pulumi-lang-yaml=\" databricks.MwsVpcEndpoint \" pulumi-lang-java=\" databricks.MwsVpcEndpoint \"\u003e databricks.MwsVpcEndpoint \u003c/span\u003eas well as over the public network. In such a case, you could also configure an\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003efor the workspace, to restrict the source networks that could be used to access it over the public network. If \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e, the workspace can be accessed only over VPC endpoints, and not over the public network. Once explicitly set, this field becomes mandatory.\n"},"region":{"type":"string","description":"Region of AWS VPC or the Google Cloud VPC network\n"}},"type":"object"}},"databricks:index/mwsStorageConfigurations:MwsStorageConfigurations":{"description":"This resource configures the root bucket for new workspaces within AWS.\n\n\u003e This resource can only be used with an account-level provider!\n\nIt is important to understand that this will require you to configure your provider separately for the multiple workspaces resources. This will point to \u003chttps://accounts.cloud.databricks.com\u003e for the HOST and it will use basic auth as that is the only authentication method available for the multiple workspaces API.\n\nPlease follow this complete runnable example with new VPC and new workspace setup. 
Please pay special attention to the fact that you have two different instances of the databricks provider there - one for deploying workspaces (with `host=\"https://accounts.cloud.databricks.com/\"`) and another for the workspace you've created with the\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource. If you want to create both workspaces \u0026 clusters within a workspace in the same Pulumi project (essentially the same directory), you should use the provider aliasing feature of Pulumi. We strongly recommend having one Pulumi project for creation of the workspace + PAT token and keeping the rest in separate projects.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst rootStorageBucket = new aws.index.S3Bucket(\"root_storage_bucket\", {\n    bucket: `${prefix}-rootbucket`,\n    acl: \"private\",\n});\nconst rootVersioning = new aws.index.S3BucketVersioning(\"root_versioning\", {\n    bucket: rootStorageBucket.id,\n    versioningConfiguration: [{\n        status: \"Disabled\",\n    }],\n});\nconst _this = new databricks.MwsStorageConfigurations(\"this\", {\n    accountId: databricksAccountId,\n    storageConfigurationName: `${prefix}-storage`,\n    bucketName: rootStorageBucket.bucket,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\nroot_storage_bucket = aws.index.S3Bucket(\"root_storage_bucket\",\n    bucket=f\"{prefix}-rootbucket\",\n    acl=\"private\")\nroot_versioning = aws.index.S3BucketVersioning(\"root_versioning\",\n    bucket=root_storage_bucket.id,\n    versioning_configuration=[{\n        \"status\": \"Disabled\",\n    }])\nthis = databricks.MwsStorageConfigurations(\"this\",\n    account_id=databricks_account_id,\n    storage_configuration_name=f\"{prefix}-storage\",\n    bucket_name=root_storage_bucket[\"bucket\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var rootStorageBucket = new Aws.Index.S3Bucket(\"root_storage_bucket\", new()\n    {\n        Bucket = $\"{prefix}-rootbucket\",\n        Acl = \"private\",\n    });\n\n    var rootVersioning = new Aws.Index.S3BucketVersioning(\"root_versioning\", new()\n    {\n        Bucket = rootStorageBucket.Id,\n        VersioningConfiguration = new[]\n        {\n            \n            {\n                { \"status\", \"Disabled\" },\n         
   },\n        },\n    });\n\n    var @this = new Databricks.MwsStorageConfigurations(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        StorageConfigurationName = $\"{prefix}-storage\",\n        BucketName = rootStorageBucket.Bucket,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\trootStorageBucket, err := aws.NewS3Bucket(ctx, \"root_storage_bucket\", \u0026aws.S3BucketArgs{\n\t\t\tBucket: fmt.Sprintf(\"%v-rootbucket\", prefix),\n\t\t\tAcl:    \"private\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketVersioning(ctx, \"root_versioning\", \u0026aws.S3BucketVersioningArgs{\n\t\t\tBucket: rootStorageBucket.Id,\n\t\t\tVersioningConfiguration: []map[string]interface{}{\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"status\": \"Disabled\",\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsStorageConfigurations(ctx, \"this\", \u0026databricks.MwsStorageConfigurationsArgs{\n\t\t\tAccountId:                pulumi.Any(databricksAccountId),\n\t\t\tStorageConfigurationName: pulumi.Sprintf(\"%v-storage\", prefix),\n\t\t\tBucketName:               rootStorageBucket.Bucket,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.S3Bucket;\nimport com.pulumi.aws.S3BucketArgs;\nimport com.pulumi.aws.S3BucketVersioning;\nimport com.pulumi.aws.S3BucketVersioningArgs;\nimport com.pulumi.databricks.MwsStorageConfigurations;\nimport com.pulumi.databricks.MwsStorageConfigurationsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        var rootStorageBucket = new S3Bucket(\"rootStorageBucket\", S3BucketArgs.builder()\n            .bucket(String.format(\"%s-rootbucket\", prefix))\n            .acl(\"private\")\n            .build());\n\n        var rootVersioning = new S3BucketVersioning(\"rootVersioning\", S3BucketVersioningArgs.builder()\n            .bucket(rootStorageBucket.id())\n            .versioningConfiguration(List.of(Map.of(\"status\", \"Disabled\")))\n            .build());\n\n        var this_ = new MwsStorageConfigurations(\"this\", MwsStorageConfigurationsArgs.builder()\n            .accountId(databricksAccountId)\n            .storageConfigurationName(String.format(\"%s-storage\", prefix))\n            .bucketName(rootStorageBucket.bucket())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  rootStorageBucket:\n    type: aws:S3Bucket\n    name: 
root_storage_bucket\n    properties:\n      bucket: ${prefix}-rootbucket\n      acl: private\n  rootVersioning:\n    type: aws:S3BucketVersioning\n    name: root_versioning\n    properties:\n      bucket: ${rootStorageBucket.id}\n      versioningConfiguration:\n        - status: Disabled\n  this:\n    type: databricks:MwsStorageConfigurations\n    properties:\n      accountId: ${databricksAccountId}\n      storageConfigurationName: ${prefix}-storage\n      bucketName: ${rootStorageBucket.bucket}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Example Usage with Role ARN\n\nWhen sharing an S3 bucket between root storage and a Unity Catalog metastore, you can specify a role ARN:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsStorageConfigurations(\"this\", {\n    accountId: databricksAccountId,\n    storageConfigurationName: `${prefix}-storage`,\n    bucketName: rootStorageBucket.bucket,\n    roleArn: unityCatalogRole.arn,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MwsStorageConfigurations(\"this\",\n    account_id=databricks_account_id,\n    storage_configuration_name=f\"{prefix}-storage\",\n    bucket_name=root_storage_bucket[\"bucket\"],\n    role_arn=unity_catalog_role[\"arn\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsStorageConfigurations(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        StorageConfigurationName = $\"{prefix}-storage\",\n        BucketName = rootStorageBucket.Bucket,\n        RoleArn = unityCatalogRole.Arn,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsStorageConfigurations(ctx, \"this\", \u0026databricks.MwsStorageConfigurationsArgs{\n\t\t\tAccountId:                pulumi.Any(databricksAccountId),\n\t\t\tStorageConfigurationName: pulumi.Sprintf(\"%v-storage\", prefix),\n\t\t\tBucketName:               pulumi.Any(rootStorageBucket.Bucket),\n\t\t\tRoleArn:                  pulumi.Any(unityCatalogRole.Arn),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsStorageConfigurations;\nimport com.pulumi.databricks.MwsStorageConfigurationsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsStorageConfigurations(\"this\", MwsStorageConfigurationsArgs.builder()\n            .accountId(databricksAccountId)\n            .storageConfigurationName(String.format(\"%s-storage\", prefix))\n            .bucketName(rootStorageBucket.bucket())\n            .roleArn(unityCatalogRole.arn())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MwsStorageConfigurations\n  
  properties:\n      accountId: ${databricksAccountId}\n      storageConfigurationName: ${prefix}-storage\n      bucketName: ${rootStorageBucket.bucket}\n      roleArn: ${unityCatalogRole.arn}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n* Provisioning Databricks on AWS with Private Link guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003eto configure the cross-account role for creation of new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003eto configure KMS keys for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsLogDelivery \" pulumi-lang-dotnet=\" databricks.MwsLogDelivery \" pulumi-lang-go=\" MwsLogDelivery \" pulumi-lang-python=\" MwsLogDelivery \" pulumi-lang-yaml=\" databricks.MwsLogDelivery \" pulumi-lang-java=\" databricks.MwsLogDelivery \"\u003e databricks.MwsLogDelivery \u003c/span\u003eto configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true},"bucketName":{"type":"string","description":"name of AWS S3 bucket\n"},"creationTime":{"type":"integer"},"roleArn":{"type":"string","description":"The ARN of the IAM role that Databricks will assume to access the S3 bucket. This allows sharing an S3 bucket between root storage and the default catalog for a workspace. 
See the [Databricks API documentation](https://docs.databricks.com/api/account/storage/create) for more details.\n"},"storageConfigurationId":{"type":"string","description":"(String) id of storage config to be used for \u003cspan pulumi-lang-nodejs=\"`databricksMwsWorkspace`\" pulumi-lang-dotnet=\"`DatabricksMwsWorkspace`\" pulumi-lang-go=\"`databricksMwsWorkspace`\" pulumi-lang-python=\"`databricks_mws_workspace`\" pulumi-lang-yaml=\"`databricksMwsWorkspace`\" pulumi-lang-java=\"`databricksMwsWorkspace`\"\u003e`databricks_mws_workspace`\u003c/span\u003e resource.\n"},"storageConfigurationName":{"type":"string","description":"name under which this storage configuration is stored\n\nThe following arguments are optional:\n"}},"required":["accountId","bucketName","creationTime","storageConfigurationId","storageConfigurationName"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true,"willReplaceOnChanges":true},"bucketName":{"type":"string","description":"name of AWS S3 bucket\n","willReplaceOnChanges":true},"roleArn":{"type":"string","description":"The ARN of the IAM role that Databricks will assume to access the S3 bucket. This allows sharing an S3 bucket between root storage and the default catalog for a workspace. See the [Databricks API documentation](https://docs.databricks.com/api/account/storage/create) for more details.\n","willReplaceOnChanges":true},"storageConfigurationName":{"type":"string","description":"name under which this storage configuration is stored\n\nThe following arguments are optional:\n","willReplaceOnChanges":true}},"requiredInputs":["accountId","bucketName","storageConfigurationName"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsStorageConfigurations resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/)\n","secret":true,"willReplaceOnChanges":true},"bucketName":{"type":"string","description":"name of AWS S3 bucket\n","willReplaceOnChanges":true},"creationTime":{"type":"integer"},"roleArn":{"type":"string","description":"The ARN of the IAM role that Databricks will assume to access the S3 bucket. This allows sharing an S3 bucket between root storage and the default catalog for a workspace. 
See the [Databricks API documentation](https://docs.databricks.com/api/account/storage/create) for more details.\n","willReplaceOnChanges":true},"storageConfigurationId":{"type":"string","description":"(String) id of storage config to be used for \u003cspan pulumi-lang-nodejs=\"`databricksMwsWorkspace`\" pulumi-lang-dotnet=\"`DatabricksMwsWorkspace`\" pulumi-lang-go=\"`databricksMwsWorkspace`\" pulumi-lang-python=\"`databricks_mws_workspace`\" pulumi-lang-yaml=\"`databricksMwsWorkspace`\" pulumi-lang-java=\"`databricksMwsWorkspace`\"\u003e`databricks_mws_workspace`\u003c/span\u003e resource.\n"},"storageConfigurationName":{"type":"string","description":"name under which this storage configuration is stored\n\nThe following arguments are optional:\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsVpcEndpoint:MwsVpcEndpoint":{"description":"Enables you to register\u003cspan pulumi-lang-nodejs=\" awsVpcEndpoint \" pulumi-lang-dotnet=\" AwsVpcEndpoint \" pulumi-lang-go=\" awsVpcEndpoint \" pulumi-lang-python=\" aws_vpc_endpoint \" pulumi-lang-yaml=\" awsVpcEndpoint \" pulumi-lang-java=\" awsVpcEndpoint \"\u003e aws_vpc_endpoint \u003c/span\u003eresources or gcp\u003cspan pulumi-lang-nodejs=\" vpcEndpoint \" pulumi-lang-dotnet=\" VpcEndpoint \" pulumi-lang-go=\" vpcEndpoint \" pulumi-lang-python=\" vpc_endpoint \" pulumi-lang-yaml=\" vpcEndpoint \" pulumi-lang-java=\" vpcEndpoint \"\u003e vpc_endpoint \u003c/span\u003eresources with Databricks such that they can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003econfiguration.\n\n\u003e This resource can only be used with an account-level provider!\n\nIt is strongly recommended that customers read the [Enable AWS Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) or the [Enable GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) documentation before trying to leverage this resource.\n\n## Example Usage\n\n### Databricks on AWS usage\n\nBefore using this resource, you will need to create the necessary VPC Endpoints as per your [VPC endpoint requirements](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#vpc-endpoint-requirements). 
You can use the\u003cspan pulumi-lang-nodejs=\" awsVpcEndpoint \" pulumi-lang-dotnet=\" AwsVpcEndpoint \" pulumi-lang-go=\" awsVpcEndpoint \" pulumi-lang-python=\" aws_vpc_endpoint \" pulumi-lang-yaml=\" awsVpcEndpoint \" pulumi-lang-java=\" awsVpcEndpoint \"\u003e aws_vpc_endpoint \u003c/span\u003eresource for this, for example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\n\nconst workspace = new aws.index.VpcEndpoint(\"workspace\", {\n    vpcId: vpc.vpcId,\n    serviceName: privateLink.workspaceService,\n    vpcEndpointType: \"Interface\",\n    securityGroupIds: [vpc.defaultSecurityGroupId],\n    subnetIds: [plSubnet.id],\n    privateDnsEnabled: true,\n}, {\n    dependsOn: [plSubnet],\n});\nconst relay = new aws.index.VpcEndpoint(\"relay\", {\n    vpcId: vpc.vpcId,\n    serviceName: privateLink.relayService,\n    vpcEndpointType: \"Interface\",\n    securityGroupIds: [vpc.defaultSecurityGroupId],\n    subnetIds: [plSubnet.id],\n    privateDnsEnabled: true,\n}, {\n    dependsOn: [plSubnet],\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\n\nworkspace = aws.index.VpcEndpoint(\"workspace\",\n    vpc_id=vpc.vpc_id,\n    service_name=private_link.workspace_service,\n    vpc_endpoint_type=Interface,\n    security_group_ids=[vpc.default_security_group_id],\n    subnet_ids=[pl_subnet.id],\n    private_dns_enabled=True,\n    opts = pulumi.ResourceOptions(depends_on=[pl_subnet]))\nrelay = aws.index.VpcEndpoint(\"relay\",\n    vpc_id=vpc.vpc_id,\n    service_name=private_link.relay_service,\n    vpc_endpoint_type=Interface,\n    security_group_ids=[vpc.default_security_group_id],\n    subnet_ids=[pl_subnet.id],\n    private_dns_enabled=True,\n    opts = pulumi.ResourceOptions(depends_on=[pl_subnet]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var workspace = new Aws.Index.VpcEndpoint(\"workspace\", new()\n    {\n        VpcId = vpc.VpcId,\n        ServiceName = privateLink.WorkspaceService,\n        VpcEndpointType = \"Interface\",\n        SecurityGroupIds = new[]\n        {\n            vpc.DefaultSecurityGroupId,\n        },\n        SubnetIds = new[]\n        {\n            plSubnet.Id,\n        },\n        PrivateDnsEnabled = true,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            plSubnet,\n        },\n    });\n\n    var relay = new Aws.Index.VpcEndpoint(\"relay\", new()\n    {\n        VpcId = vpc.VpcId,\n        ServiceName = privateLink.RelayService,\n        VpcEndpointType = \"Interface\",\n        SecurityGroupIds = new[]\n        {\n            vpc.DefaultSecurityGroupId,\n        },\n        SubnetIds = new[]\n        {\n            plSubnet.Id,\n        },\n        PrivateDnsEnabled = true,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            plSubnet,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := aws.NewVpcEndpoint(ctx, \"workspace\", \u0026aws.VpcEndpointArgs{\n\t\t\tVpcId:           vpc.VpcId,\n\t\t\tServiceName:     privateLink.WorkspaceService,\n\t\t\tVpcEndpointType: \"Interface\",\n\t\t\tSecurityGroupIds: 
[]interface{}{\n\t\t\t\tvpc.DefaultSecurityGroupId,\n\t\t\t},\n\t\t\tSubnetIds: []interface{}{\n\t\t\t\tplSubnet.Id,\n\t\t\t},\n\t\t\tPrivateDnsEnabled: true,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tplSubnet,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewVpcEndpoint(ctx, \"relay\", \u0026aws.VpcEndpointArgs{\n\t\t\tVpcId:           vpc.VpcId,\n\t\t\tServiceName:     privateLink.RelayService,\n\t\t\tVpcEndpointType: \"Interface\",\n\t\t\tSecurityGroupIds: []interface{}{\n\t\t\t\tvpc.DefaultSecurityGroupId,\n\t\t\t},\n\t\t\tSubnetIds: []interface{}{\n\t\t\t\tplSubnet.Id,\n\t\t\t},\n\t\t\tPrivateDnsEnabled: true,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tplSubnet,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.VpcEndpoint;\nimport com.pulumi.aws.VpcEndpointArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var workspace = new VpcEndpoint(\"workspace\", VpcEndpointArgs.builder()\n            .vpcId(vpc.vpcId())\n            .serviceName(privateLink.workspaceService())\n            .vpcEndpointType(\"Interface\")\n            .securityGroupIds(List.of(vpc.defaultSecurityGroupId()))\n            .subnetIds(List.of(plSubnet.id()))\n            .privateDnsEnabled(true)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(plSubnet))\n                .build());\n\n        var relay = new VpcEndpoint(\"relay\", VpcEndpointArgs.builder()\n            .vpcId(vpc.vpcId())\n            .serviceName(privateLink.relayService())\n            .vpcEndpointType(\"Interface\")\n            .securityGroupIds(List.of(vpc.defaultSecurityGroupId()))\n            .subnetIds(List.of(plSubnet.id()))\n            .privateDnsEnabled(true)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(plSubnet))\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  workspace:\n    type: aws:VpcEndpoint\n    properties:\n      vpcId: ${vpc.vpcId}\n      serviceName: ${privateLink.workspaceService}\n      vpcEndpointType: Interface\n      securityGroupIds:\n        - ${vpc.defaultSecurityGroupId}\n      subnetIds:\n        - ${plSubnet.id}\n      privateDnsEnabled: true\n    options:\n      dependsOn:\n        - ${plSubnet}\n  relay:\n    type: aws:VpcEndpoint\n    properties:\n      vpcId: ${vpc.vpcId}\n      serviceName: ${privateLink.relayService}\n      vpcEndpointType: Interface\n      securityGroupIds:\n        - ${vpc.defaultSecurityGroupId}\n      subnetIds:\n        - ${plSubnet.id}\n      privateDnsEnabled: true\n    options:\n      dependsOn:\n        - ${plSubnet}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nDepending on your use case, you may need or choose to add VPC Endpoints for the AWS Services Databricks uses. See [Add VPC endpoints for other AWS services (recommended but optional)\n](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#step-9-add-vpc-endpoints-for-other-aws-services-recommended-but-optional) for more information. 
For example:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\n\nconst s3 = new aws.index.VpcEndpoint(\"s3\", {\n    vpcId: vpc.vpcId,\n    routeTableIds: vpc.privateRouteTableIds,\n    serviceName: `com.amazonaws.${region}.s3`,\n}, {\n    dependsOn: [vpc],\n});\nconst sts = new aws.index.VpcEndpoint(\"sts\", {\n    vpcId: vpc.vpcId,\n    serviceName: `com.amazonaws.${region}.sts`,\n    vpcEndpointType: \"Interface\",\n    subnetIds: vpc.privateSubnets,\n    securityGroupIds: [vpc.defaultSecurityGroupId],\n    privateDnsEnabled: true,\n}, {\n    dependsOn: [vpc],\n});\nconst kinesis_streams = new aws.index.VpcEndpoint(\"kinesis-streams\", {\n    vpcId: vpc.vpcId,\n    serviceName: `com.amazonaws.${region}.kinesis-streams`,\n    vpcEndpointType: \"Interface\",\n    subnetIds: vpc.privateSubnets,\n    securityGroupIds: [vpc.defaultSecurityGroupId],\n}, {\n    dependsOn: [vpc],\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\n\ns3 = aws.index.VpcEndpoint(\"s3\",\n    vpc_id=vpc.vpc_id,\n    route_table_ids=vpc.private_route_table_ids,\n    service_name=fcom.amazonaws.{region}.s3,\n    opts = pulumi.ResourceOptions(depends_on=[vpc]))\nsts = aws.index.VpcEndpoint(\"sts\",\n    vpc_id=vpc.vpc_id,\n    service_name=fcom.amazonaws.{region}.sts,\n    vpc_endpoint_type=Interface,\n    subnet_ids=vpc.private_subnets,\n    security_group_ids=[vpc.default_security_group_id],\n    private_dns_enabled=True,\n    opts = pulumi.ResourceOptions(depends_on=[vpc]))\nkinesis_streams = aws.index.VpcEndpoint(\"kinesis-streams\",\n    vpc_id=vpc.vpc_id,\n    service_name=fcom.amazonaws.{region}.kinesis-streams,\n    vpc_endpoint_type=Interface,\n    subnet_ids=vpc.private_subnets,\n    security_group_ids=[vpc.default_security_group_id],\n    opts = pulumi.ResourceOptions(depends_on=[vpc]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var s3 = new Aws.Index.VpcEndpoint(\"s3\", new()\n    {\n        VpcId = vpc.VpcId,\n        RouteTableIds = vpc.PrivateRouteTableIds,\n        ServiceName = $\"com.amazonaws.{region}.s3\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            vpc,\n        },\n    });\n\n    var sts = new Aws.Index.VpcEndpoint(\"sts\", new()\n    {\n        VpcId = vpc.VpcId,\n        ServiceName = $\"com.amazonaws.{region}.sts\",\n        VpcEndpointType = \"Interface\",\n        SubnetIds = vpc.PrivateSubnets,\n        SecurityGroupIds = new[]\n        {\n            vpc.DefaultSecurityGroupId,\n        },\n        PrivateDnsEnabled = true,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            vpc,\n        },\n    });\n\n    var kinesis_streams = new Aws.Index.VpcEndpoint(\"kinesis-streams\", new()\n    {\n        VpcId = vpc.VpcId,\n        ServiceName = $\"com.amazonaws.{region}.kinesis-streams\",\n        VpcEndpointType = \"Interface\",\n        SubnetIds = vpc.PrivateSubnets,\n        SecurityGroupIds = new[]\n        {\n            vpc.DefaultSecurityGroupId,\n        },\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            vpc,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error 
{\n\t\t_, err := aws.NewVpcEndpoint(ctx, \"s3\", \u0026aws.VpcEndpointArgs{\n\t\t\tVpcId:         vpc.VpcId,\n\t\t\tRouteTableIds: vpc.PrivateRouteTableIds,\n\t\t\tServiceName:   fmt.Sprintf(\"com.amazonaws.%v.s3\", region),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tvpc,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewVpcEndpoint(ctx, \"sts\", \u0026aws.VpcEndpointArgs{\n\t\t\tVpcId:           vpc.VpcId,\n\t\t\tServiceName:     fmt.Sprintf(\"com.amazonaws.%v.sts\", region),\n\t\t\tVpcEndpointType: \"Interface\",\n\t\t\tSubnetIds:       vpc.PrivateSubnets,\n\t\t\tSecurityGroupIds: []interface{}{\n\t\t\t\tvpc.DefaultSecurityGroupId,\n\t\t\t},\n\t\t\tPrivateDnsEnabled: true,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tvpc,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewVpcEndpoint(ctx, \"kinesis-streams\", \u0026aws.VpcEndpointArgs{\n\t\t\tVpcId:           vpc.VpcId,\n\t\t\tServiceName:     fmt.Sprintf(\"com.amazonaws.%v.kinesis-streams\", region),\n\t\t\tVpcEndpointType: \"Interface\",\n\t\t\tSubnetIds:       vpc.PrivateSubnets,\n\t\t\tSecurityGroupIds: []interface{}{\n\t\t\t\tvpc.DefaultSecurityGroupId,\n\t\t\t},\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tvpc,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.VpcEndpoint;\nimport com.pulumi.aws.VpcEndpointArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var s3 = new VpcEndpoint(\"s3\", VpcEndpointArgs.builder()\n            .vpcId(vpc.vpcId())\n            .routeTableIds(vpc.privateRouteTableIds())\n            .serviceName(String.format(\"com.amazonaws.%s.s3\", region))\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(vpc))\n                .build());\n\n        var sts = new VpcEndpoint(\"sts\", VpcEndpointArgs.builder()\n            .vpcId(vpc.vpcId())\n            .serviceName(String.format(\"com.amazonaws.%s.sts\", region))\n            .vpcEndpointType(\"Interface\")\n            .subnetIds(vpc.privateSubnets())\n            .securityGroupIds(List.of(vpc.defaultSecurityGroupId()))\n            .privateDnsEnabled(true)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(vpc))\n                .build());\n\n        var kinesis_streams = new VpcEndpoint(\"kinesis-streams\", VpcEndpointArgs.builder()\n            .vpcId(vpc.vpcId())\n            .serviceName(String.format(\"com.amazonaws.%s.kinesis-streams\", region))\n            .vpcEndpointType(\"Interface\")\n            .subnetIds(vpc.privateSubnets())\n            .securityGroupIds(List.of(vpc.defaultSecurityGroupId()))\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(vpc))\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  s3:\n    type: aws:VpcEndpoint\n    properties:\n      vpcId: ${vpc.vpcId}\n      routeTableIds: ${vpc.privateRouteTableIds}\n      serviceName: com.amazonaws.${region}.s3\n    options:\n      dependsOn:\n        - ${vpc}\n  sts:\n    type: 
aws:VpcEndpoint\n    properties:\n      vpcId: ${vpc.vpcId}\n      serviceName: com.amazonaws.${region}.sts\n      vpcEndpointType: Interface\n      subnetIds: ${vpc.privateSubnets}\n      securityGroupIds:\n        - ${vpc.defaultSecurityGroupId}\n      privateDnsEnabled: true\n    options:\n      dependsOn:\n        - ${vpc}\n  kinesis-streams:\n    type: aws:VpcEndpoint\n    properties:\n      vpcId: ${vpc.vpcId}\n      serviceName: com.amazonaws.${region}.kinesis-streams\n      vpcEndpointType: Interface\n      subnetIds: ${vpc.privateSubnets}\n      securityGroupIds:\n        - ${vpc.defaultSecurityGroupId}\n    options:\n      dependsOn:\n        - ${vpc}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nOnce you have created the necessary endpoints, you need to register each of them via *this* Pulumi resource, which calls out to the [Databricks Account API](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#step-3-register-your-vpc-endpoint-ids-with-the-account-api)):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst workspace = new databricks.MwsVpcEndpoint(\"workspace\", {\n    accountId: databricksAccountId,\n    awsVpcEndpointId: workspaceAwsVpcEndpoint.id,\n    vpcEndpointName: `VPC Relay for ${vpc.vpcId}`,\n    region: region,\n}, {\n    dependsOn: [workspaceAwsVpcEndpoint],\n});\nconst relay = new databricks.MwsVpcEndpoint(\"relay\", {\n    accountId: databricksAccountId,\n    awsVpcEndpointId: relayAwsVpcEndpoint.id,\n    vpcEndpointName: `VPC Relay for ${vpc.vpcId}`,\n    region: region,\n}, {\n    dependsOn: [relayAwsVpcEndpoint],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nworkspace = databricks.MwsVpcEndpoint(\"workspace\",\n    account_id=databricks_account_id,\n    aws_vpc_endpoint_id=workspace_aws_vpc_endpoint[\"id\"],\n    vpc_endpoint_name=f\"VPC Relay for {vpc['vpcId']}\",\n    region=region,\n    opts = pulumi.ResourceOptions(depends_on=[workspace_aws_vpc_endpoint]))\nrelay = databricks.MwsVpcEndpoint(\"relay\",\n    account_id=databricks_account_id,\n    aws_vpc_endpoint_id=relay_aws_vpc_endpoint[\"id\"],\n    vpc_endpoint_name=f\"VPC Relay for {vpc['vpcId']}\",\n    region=region,\n    opts = pulumi.ResourceOptions(depends_on=[relay_aws_vpc_endpoint]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var workspace = new Databricks.MwsVpcEndpoint(\"workspace\", new()\n    {\n        AccountId = databricksAccountId,\n        AwsVpcEndpointId = workspaceAwsVpcEndpoint.Id,\n        VpcEndpointName = $\"VPC Relay for {vpc.VpcId}\",\n        Region = region,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            workspaceAwsVpcEndpoint,\n        },\n    });\n\n    var relay = new Databricks.MwsVpcEndpoint(\"relay\", new()\n    {\n        AccountId = databricksAccountId,\n        AwsVpcEndpointId = relayAwsVpcEndpoint.Id,\n        VpcEndpointName = $\"VPC Relay for {vpc.VpcId}\",\n        Region = region,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            relayAwsVpcEndpoint,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() 
{\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsVpcEndpoint(ctx, \"workspace\", \u0026databricks.MwsVpcEndpointArgs{\n\t\t\tAccountId:        pulumi.Any(databricksAccountId),\n\t\t\tAwsVpcEndpointId: pulumi.Any(workspaceAwsVpcEndpoint.Id),\n\t\t\tVpcEndpointName:  pulumi.Sprintf(\"VPC Relay for %v\", vpc.VpcId),\n\t\t\tRegion:           pulumi.Any(region),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tworkspaceAwsVpcEndpoint,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsVpcEndpoint(ctx, \"relay\", \u0026databricks.MwsVpcEndpointArgs{\n\t\t\tAccountId:        pulumi.Any(databricksAccountId),\n\t\t\tAwsVpcEndpointId: pulumi.Any(relayAwsVpcEndpoint.Id),\n\t\t\tVpcEndpointName:  pulumi.Sprintf(\"VPC Relay for %v\", vpc.VpcId),\n\t\t\tRegion:           pulumi.Any(region),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\trelayAwsVpcEndpoint,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsVpcEndpoint;\nimport com.pulumi.databricks.MwsVpcEndpointArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var workspace = new MwsVpcEndpoint(\"workspace\", MwsVpcEndpointArgs.builder()\n            .accountId(databricksAccountId)\n            .awsVpcEndpointId(workspaceAwsVpcEndpoint.id())\n            .vpcEndpointName(String.format(\"VPC Relay for %s\", vpc.vpcId()))\n            .region(region)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(workspaceAwsVpcEndpoint)\n                .build());\n\n        var relay = new MwsVpcEndpoint(\"relay\", MwsVpcEndpointArgs.builder()\n            .accountId(databricksAccountId)\n            .awsVpcEndpointId(relayAwsVpcEndpoint.id())\n            .vpcEndpointName(String.format(\"VPC Relay for %s\", vpc.vpcId()))\n            .region(region)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(relayAwsVpcEndpoint)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  workspace:\n    type: databricks:MwsVpcEndpoint\n    properties:\n      accountId: ${databricksAccountId}\n      awsVpcEndpointId: ${workspaceAwsVpcEndpoint.id}\n      vpcEndpointName: VPC Relay for ${vpc.vpcId}\n      region: ${region}\n    options:\n      dependsOn:\n        - ${workspaceAwsVpcEndpoint}\n  relay:\n    type: databricks:MwsVpcEndpoint\n    properties:\n      accountId: ${databricksAccountId}\n      awsVpcEndpointId: ${relayAwsVpcEndpoint.id}\n      vpcEndpointName: VPC Relay for ${vpc.vpcId}\n      region: ${region}\n    options:\n      dependsOn:\n        - ${relayAwsVpcEndpoint}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nTypically the next steps after this would be to create a\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" 
databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003econfiguration, before passing the `databricks_mws_private_access_settings.pas.private_access_settings_id` and `databricks_mws_networks.this.network_id` into a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    awsRegion: region,\n    workspaceName: prefix,\n    credentialsId: thisDatabricksMwsCredentials.credentialsId,\n    storageConfigurationId: thisDatabricksMwsStorageConfigurations.storageConfigurationId,\n    networkId: thisDatabricksMwsNetworks.networkId,\n    privateAccessSettingsId: pas.privateAccessSettingsId,\n    pricingTier: \"ENTERPRISE\",\n}, {\n    dependsOn: [thisDatabricksMwsNetworks],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    aws_region=region,\n    workspace_name=prefix,\n    credentials_id=this_databricks_mws_credentials[\"credentialsId\"],\n    storage_configuration_id=this_databricks_mws_storage_configurations[\"storageConfigurationId\"],\n    network_id=this_databricks_mws_networks[\"networkId\"],\n    private_access_settings_id=pas[\"privateAccessSettingsId\"],\n    pricing_tier=\"ENTERPRISE\",\n    opts = pulumi.ResourceOptions(depends_on=[this_databricks_mws_networks]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        AwsRegion = region,\n        WorkspaceName = prefix,\n        CredentialsId = thisDatabricksMwsCredentials.CredentialsId,\n        StorageConfigurationId = thisDatabricksMwsStorageConfigurations.StorageConfigurationId,\n        NetworkId = thisDatabricksMwsNetworks.NetworkId,\n        PrivateAccessSettingsId = pas.PrivateAccessSettingsId,\n        PricingTier = \"ENTERPRISE\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            thisDatabricksMwsNetworks,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:               pulumi.Any(databricksAccountId),\n\t\t\tAwsRegion:               pulumi.Any(region),\n\t\t\tWorkspaceName:           pulumi.Any(prefix),\n\t\t\tCredentialsId:           
pulumi.Any(thisDatabricksMwsCredentials.CredentialsId),\n\t\t\tStorageConfigurationId:  pulumi.Any(thisDatabricksMwsStorageConfigurations.StorageConfigurationId),\n\t\t\tNetworkId:               pulumi.Any(thisDatabricksMwsNetworks.NetworkId),\n\t\t\tPrivateAccessSettingsId: pulumi.Any(pas.PrivateAccessSettingsId),\n\t\t\tPricingTier:             pulumi.String(\"ENTERPRISE\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthisDatabricksMwsNetworks,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsWorkspaces(\"this\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .awsRegion(region)\n            .workspaceName(prefix)\n            .credentialsId(thisDatabricksMwsCredentials.credentialsId())\n            .storageConfigurationId(thisDatabricksMwsStorageConfigurations.storageConfigurationId())\n            .networkId(thisDatabricksMwsNetworks.networkId())\n            .privateAccessSettingsId(pas.privateAccessSettingsId())\n            .pricingTier(\"ENTERPRISE\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(thisDatabricksMwsNetworks)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MwsWorkspaces\n    properties:\n      accountId: ${databricksAccountId}\n      awsRegion: ${region}\n      workspaceName: ${prefix}\n      credentialsId: ${thisDatabricksMwsCredentials.credentialsId}\n      storageConfigurationId: ${thisDatabricksMwsStorageConfigurations.storageConfigurationId}\n      networkId: ${thisDatabricksMwsNetworks.networkId}\n      privateAccessSettingsId: ${pas.privateAccessSettingsId}\n      pricingTier: ENTERPRISE\n    options:\n      dependsOn:\n        - ${thisDatabricksMwsNetworks}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Databricks on GCP usage\n\nBefore using this resource, you will need to create the necessary Private Service Connect (PSC) connections on your Google Cloud VPC networks. 
You can see [Enable Private Service Connect for your workspace](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) for more details.\n\nOnce you have created the necessary PSC connections, you need to register each of them via *this* Pulumi resource, which calls out to the Databricks Account API.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in https://accounts.gcp.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst databricksGoogleServiceAccount = config.requireObject\u003cany\u003e(\"databricksGoogleServiceAccount\");\nconst googleProject = config.requireObject\u003cany\u003e(\"googleProject\");\nconst subnetRegion = config.requireObject\u003cany\u003e(\"subnetRegion\");\nconst workspace = new databricks.MwsVpcEndpoint(\"workspace\", {\n    accountId: databricksAccountId,\n    vpcEndpointName: \"PSC Rest API endpoint\",\n    gcpVpcEndpointInfo: {\n        projectId: googleProject,\n        pscEndpointName: \"PSC Rest API endpoint\",\n        endpointRegion: subnetRegion,\n    },\n});\nconst relay = new databricks.MwsVpcEndpoint(\"relay\", {\n    accountId: databricksAccountId,\n    vpcEndpointName: \"PSC Relay endpoint\",\n    gcpVpcEndpointInfo: {\n        projectId: googleProject,\n        pscEndpointName: \"PSC Relay endpoint\",\n        endpointRegion: subnetRegion,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in https://accounts.gcp.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\ndatabricks_google_service_account = config.require_object(\"databricksGoogleServiceAccount\")\ngoogle_project = config.require_object(\"googleProject\")\nsubnet_region = config.require_object(\"subnetRegion\")\nworkspace = databricks.MwsVpcEndpoint(\"workspace\",\n    account_id=databricks_account_id,\n    vpc_endpoint_name=\"PSC Rest API endpoint\",\n    gcp_vpc_endpoint_info={\n        \"project_id\": google_project,\n        \"psc_endpoint_name\": \"PSC Rest API endpoint\",\n        \"endpoint_region\": subnet_region,\n    })\nrelay = databricks.MwsVpcEndpoint(\"relay\",\n    account_id=databricks_account_id,\n    vpc_endpoint_name=\"PSC Relay endpoint\",\n    gcp_vpc_endpoint_info={\n        \"project_id\": google_project,\n        \"psc_endpoint_name\": \"PSC Relay endpoint\",\n        \"endpoint_region\": subnet_region,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in https://accounts.gcp.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var databricksGoogleServiceAccount = config.RequireObject\u003cdynamic\u003e(\"databricksGoogleServiceAccount\");\n    var googleProject = config.RequireObject\u003cdynamic\u003e(\"googleProject\");\n    var subnetRegion = config.RequireObject\u003cdynamic\u003e(\"subnetRegion\");\n    var workspace = new Databricks.MwsVpcEndpoint(\"workspace\", new()\n    {\n        AccountId = databricksAccountId,\n        VpcEndpointName = \"PSC Rest API endpoint\",\n     
   GcpVpcEndpointInfo = new Databricks.Inputs.MwsVpcEndpointGcpVpcEndpointInfoArgs\n        {\n            ProjectId = googleProject,\n            PscEndpointName = \"PSC Rest API endpoint\",\n            EndpointRegion = subnetRegion,\n        },\n    });\n\n    var relay = new Databricks.MwsVpcEndpoint(\"relay\", new()\n    {\n        AccountId = databricksAccountId,\n        VpcEndpointName = \"PSC Relay endpoint\",\n        GcpVpcEndpointInfo = new Databricks.Inputs.MwsVpcEndpointGcpVpcEndpointInfoArgs\n        {\n            ProjectId = googleProject,\n            PscEndpointName = \"PSC Relay endpoint\",\n            EndpointRegion = subnetRegion,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in https://accounts.gcp.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tdatabricksGoogleServiceAccount := cfg.RequireObject(\"databricksGoogleServiceAccount\")\n\t\tgoogleProject := cfg.RequireObject(\"googleProject\")\n\t\tsubnetRegion := cfg.RequireObject(\"subnetRegion\")\n\t\t_, err := databricks.NewMwsVpcEndpoint(ctx, \"workspace\", \u0026databricks.MwsVpcEndpointArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tVpcEndpointName: pulumi.String(\"PSC Rest API endpoint\"),\n\t\t\tGcpVpcEndpointInfo: \u0026databricks.MwsVpcEndpointGcpVpcEndpointInfoArgs{\n\t\t\t\tProjectId:       pulumi.Any(googleProject),\n\t\t\t\tPscEndpointName: pulumi.String(\"PSC Rest API endpoint\"),\n\t\t\t\tEndpointRegion:  pulumi.Any(subnetRegion),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsVpcEndpoint(ctx, \"relay\", \u0026databricks.MwsVpcEndpointArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tVpcEndpointName: pulumi.String(\"PSC Relay endpoint\"),\n\t\t\tGcpVpcEndpointInfo: \u0026databricks.MwsVpcEndpointGcpVpcEndpointInfoArgs{\n\t\t\t\tProjectId:       pulumi.Any(googleProject),\n\t\t\t\tPscEndpointName: pulumi.String(\"PSC Relay endpoint\"),\n\t\t\t\tEndpointRegion:  pulumi.Any(subnetRegion),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsVpcEndpoint;\nimport com.pulumi.databricks.MwsVpcEndpointArgs;\nimport com.pulumi.databricks.inputs.MwsVpcEndpointGcpVpcEndpointInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var databricksGoogleServiceAccount = config.get(\"databricksGoogleServiceAccount\");\n        final var googleProject = config.get(\"googleProject\");\n        final var subnetRegion = config.get(\"subnetRegion\");\n        var workspace = new MwsVpcEndpoint(\"workspace\", MwsVpcEndpointArgs.builder()\n            .accountId(databricksAccountId)\n   
         .vpcEndpointName(\"PSC Rest API endpoint\")\n            .gcpVpcEndpointInfo(MwsVpcEndpointGcpVpcEndpointInfoArgs.builder()\n                .projectId(googleProject)\n                .pscEndpointName(\"PSC Rest API endpoint\")\n                .endpointRegion(subnetRegion)\n                .build())\n            .build());\n\n        var relay = new MwsVpcEndpoint(\"relay\", MwsVpcEndpointArgs.builder()\n            .accountId(databricksAccountId)\n            .vpcEndpointName(\"PSC Relay endpoint\")\n            .gcpVpcEndpointInfo(MwsVpcEndpointGcpVpcEndpointInfoArgs.builder()\n                .projectId(googleProject)\n                .pscEndpointName(\"PSC Relay endpoint\")\n                .endpointRegion(subnetRegion)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\n  databricksGoogleServiceAccount:\n    type: dynamic\n  googleProject:\n    type: dynamic\n  subnetRegion:\n    type: dynamic\nresources:\n  workspace:\n    type: databricks:MwsVpcEndpoint\n    properties:\n      accountId: ${databricksAccountId}\n      vpcEndpointName: PSC Rest API endpoint\n      gcpVpcEndpointInfo:\n        projectId: ${googleProject}\n        pscEndpointName: PSC Rest API endpoint\n        endpointRegion: ${subnetRegion}\n  relay:\n    type: databricks:MwsVpcEndpoint\n    properties:\n      accountId: ${databricksAccountId}\n      vpcEndpointName: PSC Relay endpoint\n      gcpVpcEndpointInfo:\n        projectId: ${googleProject}\n        pscEndpointName: PSC Relay endpoint\n        endpointRegion: ${subnetRegion}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nTypically the next steps after this would be to create a\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003econfiguration, before passing the `databricks_mws_private_access_settings.pas.private_access_settings_id` and `databricks_mws_networks.this.network_id` into a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    workspaceName: \"gcp workspace\",\n    location: subnetRegion,\n    cloudResourceContainer: {\n        gcp: {\n            projectId: googleProject,\n        },\n    },\n    gkeConfig: {\n        connectivityType: \"PRIVATE_NODE_PUBLIC_MASTER\",\n        masterIpRange: \"10.3.0.0/28\",\n    },\n    networkId: 
thisDatabricksMwsNetworks.networkId,\n    privateAccessSettingsId: pas.privateAccessSettingsId,\n    pricingTier: \"PREMIUM\",\n}, {\n    dependsOn: [thisDatabricksMwsNetworks],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    workspace_name=\"gcp workspace\",\n    location=subnet_region,\n    cloud_resource_container={\n        \"gcp\": {\n            \"project_id\": google_project,\n        },\n    },\n    gke_config={\n        \"connectivity_type\": \"PRIVATE_NODE_PUBLIC_MASTER\",\n        \"master_ip_range\": \"10.3.0.0/28\",\n    },\n    network_id=this_databricks_mws_networks[\"networkId\"],\n    private_access_settings_id=pas[\"privateAccessSettingsId\"],\n    pricing_tier=\"PREMIUM\",\n    opts = pulumi.ResourceOptions(depends_on=[this_databricks_mws_networks]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        WorkspaceName = \"gcp workspace\",\n        Location = subnetRegion,\n        CloudResourceContainer = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerArgs\n        {\n            Gcp = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerGcpArgs\n            {\n                ProjectId = googleProject,\n            },\n        },\n        GkeConfig = new Databricks.Inputs.MwsWorkspacesGkeConfigArgs\n        {\n            ConnectivityType = \"PRIVATE_NODE_PUBLIC_MASTER\",\n            MasterIpRange = \"10.3.0.0/28\",\n        },\n        NetworkId = thisDatabricksMwsNetworks.NetworkId,\n        PrivateAccessSettingsId = pas.PrivateAccessSettingsId,\n        PricingTier = \"PREMIUM\",\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            thisDatabricksMwsNetworks,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:     pulumi.Any(databricksAccountId),\n\t\t\tWorkspaceName: pulumi.String(\"gcp workspace\"),\n\t\t\tLocation:      pulumi.Any(subnetRegion),\n\t\t\tCloudResourceContainer: \u0026databricks.MwsWorkspacesCloudResourceContainerArgs{\n\t\t\t\tGcp: \u0026databricks.MwsWorkspacesCloudResourceContainerGcpArgs{\n\t\t\t\t\tProjectId: pulumi.Any(googleProject),\n\t\t\t\t},\n\t\t\t},\n\t\t\tGkeConfig: \u0026databricks.MwsWorkspacesGkeConfigArgs{\n\t\t\t\tConnectivityType: pulumi.String(\"PRIVATE_NODE_PUBLIC_MASTER\"),\n\t\t\t\tMasterIpRange:    pulumi.String(\"10.3.0.0/28\"),\n\t\t\t},\n\t\t\tNetworkId:               pulumi.Any(thisDatabricksMwsNetworks.NetworkId),\n\t\t\tPrivateAccessSettingsId: pulumi.Any(pas.PrivateAccessSettingsId),\n\t\t\tPricingTier:             pulumi.String(\"PREMIUM\"),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthisDatabricksMwsNetworks,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport 
com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerGcpArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesGkeConfigArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MwsWorkspaces(\"this\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .workspaceName(\"gcp workspace\")\n            .location(subnetRegion)\n            .cloudResourceContainer(MwsWorkspacesCloudResourceContainerArgs.builder()\n                .gcp(MwsWorkspacesCloudResourceContainerGcpArgs.builder()\n                    .projectId(googleProject)\n                    .build())\n                .build())\n            .gkeConfig(MwsWorkspacesGkeConfigArgs.builder()\n                .connectivityType(\"PRIVATE_NODE_PUBLIC_MASTER\")\n                .masterIpRange(\"10.3.0.0/28\")\n                .build())\n            .networkId(thisDatabricksMwsNetworks.networkId())\n            .privateAccessSettingsId(pas.privateAccessSettingsId())\n            .pricingTier(\"PREMIUM\")\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(thisDatabricksMwsNetworks)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MwsWorkspaces\n    properties:\n      accountId: ${databricksAccountId}\n      workspaceName: gcp workspace\n      location: ${subnetRegion}\n      cloudResourceContainer:\n        gcp:\n          projectId: ${googleProject}\n      gkeConfig:\n        connectivityType: PRIVATE_NODE_PUBLIC_MASTER\n        masterIpRange: 10.3.0.0/28\n      networkId: ${thisDatabricksMwsNetworks.networkId}\n      privateAccessSettingsId: ${pas.privateAccessSettingsId}\n      pricingTier: PREMIUM\n    options:\n      dependsOn:\n        - ${thisDatabricksMwsNetworks}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n* Provisioning Databricks on AWS with Private Link guide.\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide.\n* Provisioning Databricks workspaces on GCP with Private Service Connect guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eto 
create a [Private Access Setting](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#step-5-create-a-private-access-settings-configuration-using-the-databricks-account-api) that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages AWS Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n\n## Import\n\n\u003e Importing this resource is not currently supported.\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the Accounts Console for [AWS](https://accounts.cloud.databricks.com/) or [GCP](https://accounts.gcp.databricks.com/)\n"},"awsAccountId":{"type":"string"},"awsEndpointServiceId":{"type":"string","description":"(AWS Only) The ID of the Databricks endpoint service that this VPC endpoint is connected to. Please find the list of endpoint service IDs for each supported region in the [Databricks PrivateLink documentation](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html)\n"},"awsVpcEndpointId":{"type":"string","description":"ID of configured aws_vpc_endpoint\n"},"gcpVpcEndpointInfo":{"$ref":"#/types/databricks:index/MwsVpcEndpointGcpVpcEndpointInfo:MwsVpcEndpointGcpVpcEndpointInfo","description":"a block consists of Google Cloud specific information for this PSC endpoint. It has the following fields:\n"},"region":{"type":"string","description":"Region of AWS VPC\n"},"state":{"type":"string","description":"(AWS Only) State of VPC Endpoint\n"},"useCase":{"type":"string"},"vpcEndpointId":{"type":"string","description":"Canonical unique identifier of VPC Endpoint in Databricks Account\n"},"vpcEndpointName":{"type":"string","description":"Name of VPC Endpoint in Databricks Account\n"}},"required":["awsAccountId","awsEndpointServiceId","state","useCase","vpcEndpointId","vpcEndpointName"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the Accounts Console for [AWS](https://accounts.cloud.databricks.com/) or [GCP](https://accounts.gcp.databricks.com/)\n","willReplaceOnChanges":true},"awsAccountId":{"type":"string"},"awsEndpointServiceId":{"type":"string","description":"(AWS Only) The ID of the Databricks endpoint service that this VPC endpoint is connected to. 
Please find the list of endpoint service IDs for each supported region in the [Databricks PrivateLink documentation](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html)\n"},"awsVpcEndpointId":{"type":"string","description":"ID of configured aws_vpc_endpoint\n","willReplaceOnChanges":true},"gcpVpcEndpointInfo":{"$ref":"#/types/databricks:index/MwsVpcEndpointGcpVpcEndpointInfo:MwsVpcEndpointGcpVpcEndpointInfo","description":"a block consists of Google Cloud specific information for this PSC endpoint. It has the following fields:\n","willReplaceOnChanges":true},"region":{"type":"string","description":"Region of AWS VPC\n","willReplaceOnChanges":true},"state":{"type":"string","description":"(AWS Only) State of VPC Endpoint\n"},"useCase":{"type":"string"},"vpcEndpointId":{"type":"string","description":"Canonical unique identifier of VPC Endpoint in Databricks Account\n"},"vpcEndpointName":{"type":"string","description":"Name of VPC Endpoint in Databricks Account\n","willReplaceOnChanges":true}},"requiredInputs":["vpcEndpointName"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsVpcEndpoint resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the Accounts Console for [AWS](https://accounts.cloud.databricks.com/) or [GCP](https://accounts.gcp.databricks.com/)\n","willReplaceOnChanges":true},"awsAccountId":{"type":"string"},"awsEndpointServiceId":{"type":"string","description":"(AWS Only) The ID of the Databricks endpoint service that this VPC endpoint is connected to. Please find the list of endpoint service IDs for each supported region in the [Databricks PrivateLink documentation](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html)\n"},"awsVpcEndpointId":{"type":"string","description":"ID of configured aws_vpc_endpoint\n","willReplaceOnChanges":true},"gcpVpcEndpointInfo":{"$ref":"#/types/databricks:index/MwsVpcEndpointGcpVpcEndpointInfo:MwsVpcEndpointGcpVpcEndpointInfo","description":"a block consists of Google Cloud specific information for this PSC endpoint. It has the following fields:\n","willReplaceOnChanges":true},"region":{"type":"string","description":"Region of AWS VPC\n","willReplaceOnChanges":true},"state":{"type":"string","description":"(AWS Only) State of VPC Endpoint\n"},"useCase":{"type":"string"},"vpcEndpointId":{"type":"string","description":"Canonical unique identifier of VPC Endpoint in Databricks Account\n"},"vpcEndpointName":{"type":"string","description":"Name of VPC Endpoint in Databricks Account\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/mwsWorkspaces:MwsWorkspaces":{"description":"This resource allows you to set up [workspaces on AWS](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1) or [workspaces on GCP](https://docs.gcp.databricks.com/administration-guide/account-settings-gcp/workspaces.html). Please follow this complete runnable example on AWS or GCP with new VPC and new workspace setup.\n\n\u003e This resource can only be used with an account-level provider!\n\n\u003e The \u003cspan pulumi-lang-nodejs=\"`gkeConfig`\" pulumi-lang-dotnet=\"`GkeConfig`\" pulumi-lang-go=\"`gkeConfig`\" pulumi-lang-python=\"`gke_config`\" pulumi-lang-yaml=\"`gkeConfig`\" pulumi-lang-java=\"`gkeConfig`\"\u003e`gke_config`\u003c/span\u003e argument is now deprecated and no longer supported. 
If you have already created a workspace using these fields, it is safe to remove them from your Pulumi template.\n\n\u003e On Azure you need to use\u003cspan pulumi-lang-nodejs=\" azurermDatabricksWorkspace \" pulumi-lang-dotnet=\" AzurermDatabricksWorkspace \" pulumi-lang-go=\" azurermDatabricksWorkspace \" pulumi-lang-python=\" azurerm_databricks_workspace \" pulumi-lang-yaml=\" azurermDatabricksWorkspace \" pulumi-lang-java=\" azurermDatabricksWorkspace \"\u003e azurerm_databricks_workspace \u003c/span\u003eresource to create Azure Databricks workspaces.\n\n## Example Usage\n\n### Creating a serverless workspace in AWS and GCP\n\nCreating a serverless workspace does not require any prerequisite resources. Simply specify \u003cspan pulumi-lang-nodejs=\"`computeMode \" pulumi-lang-dotnet=\"`ComputeMode \" pulumi-lang-go=\"`computeMode \" pulumi-lang-python=\"`compute_mode \" pulumi-lang-yaml=\"`computeMode \" pulumi-lang-java=\"`computeMode \"\u003e`compute_mode \u003c/span\u003e= \"SERVERLESS\"` when creating the workspace. Serverless workspaces must not include \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e.\n\nOn [AWS](https://docs.databricks.com/aws/en/admin/workspace/serverless-workspaces):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst serverlessWorkspace = new databricks.MwsWorkspaces(\"serverless_workspace\", {\n    accountId: \"\",\n    workspaceName: \"serverless-workspace\",\n    awsRegion: \"us-east-1\",\n    computeMode: \"SERVERLESS\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nserverless_workspace = databricks.MwsWorkspaces(\"serverless_workspace\",\n    account_id=\"\",\n    workspace_name=\"serverless-workspace\",\n    aws_region=\"us-east-1\",\n    compute_mode=\"SERVERLESS\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var serverlessWorkspace = new Databricks.MwsWorkspaces(\"serverless_workspace\", new()\n    {\n        AccountId = \"\",\n        WorkspaceName = \"serverless-workspace\",\n        AwsRegion = \"us-east-1\",\n        ComputeMode = \"SERVERLESS\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"serverless_workspace\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:     pulumi.String(\"\"),\n\t\t\tWorkspaceName: pulumi.String(\"serverless-workspace\"),\n\t\t\tAwsRegion:     pulumi.String(\"us-east-1\"),\n\t\t\tComputeMode:   pulumi.String(\"SERVERLESS\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage 
generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var serverlessWorkspace = new MwsWorkspaces(\"serverlessWorkspace\", MwsWorkspacesArgs.builder()\n            .accountId(\"\")\n            .workspaceName(\"serverless-workspace\")\n            .awsRegion(\"us-east-1\")\n            .computeMode(\"SERVERLESS\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  serverlessWorkspace:\n    type: databricks:MwsWorkspaces\n    name: serverless_workspace\n    properties:\n      accountId: \"\"\n      workspaceName: serverless-workspace\n      awsRegion: us-east-1\n      computeMode: SERVERLESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nOn [GCP](https://docs.databricks.com/gcp/en/admin/workspace/serverless-workspaces):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst serverlessWorkspace = new databricks.MwsWorkspaces(\"serverless_workspace\", {\n    accountId: \"\",\n    workspaceName: \"serverless-workspace\",\n    location: \"us-east4\",\n    computeMode: \"SERVERLESS\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nserverless_workspace = databricks.MwsWorkspaces(\"serverless_workspace\",\n    account_id=\"\",\n    workspace_name=\"serverless-workspace\",\n    location=\"us-east4\",\n    compute_mode=\"SERVERLESS\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var serverlessWorkspace = new Databricks.MwsWorkspaces(\"serverless_workspace\", new()\n    {\n        AccountId = \"\",\n        WorkspaceName = \"serverless-workspace\",\n        Location = \"us-east4\",\n        ComputeMode = \"SERVERLESS\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMwsWorkspaces(ctx, \"serverless_workspace\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:     pulumi.String(\"\"),\n\t\t\tWorkspaceName: pulumi.String(\"serverless-workspace\"),\n\t\t\tLocation:      pulumi.String(\"us-east4\"),\n\t\t\tComputeMode:   pulumi.String(\"SERVERLESS\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var serverlessWorkspace = new MwsWorkspaces(\"serverlessWorkspace\", 
MwsWorkspacesArgs.builder()\n            .accountId(\"\")\n            .workspaceName(\"serverless-workspace\")\n            .location(\"us-east4\")\n            .computeMode(\"SERVERLESS\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  serverlessWorkspace:\n    type: databricks:MwsWorkspaces\n    name: serverless_workspace\n    properties:\n      accountId: \"\"\n      workspaceName: serverless-workspace\n      location: us-east4\n      computeMode: SERVERLESS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Creating a workspace on AWS\n\n!Simplest multiworkspace\n\nTo get workspace running, you have to configure a couple of things:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003e- You can share a credentials (cross-account IAM role) configuration ID with multiple workspaces. It is not required to create a new one for each workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003e- You can share a root S3 bucket with multiple workspaces in a single account. You do not have to create new ones for each workspace. If you share a root S3 bucket for multiple workspaces in an account, data on the root S3 bucket is partitioned into separate directories by workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003e- (optional, but recommended) You can share one [customer-managed VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) with multiple workspaces in a single account. However, Databricks recommends using unique subnets and security groups for each workspace. If you plan to share one VPC with multiple workspaces, be sure to size your VPC and subnets accordingly. 
Because a Databricks\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eencapsulates this information, you cannot reuse it across workspaces.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003e- You can share a customer-managed key across workspaces.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account ID that can be found in the dropdown under the email address in the upper-right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\n// register cross-account ARN\nconst _this = new databricks.MwsCredentials(\"this\", {\n    accountId: databricksAccountId,\n    credentialsName: `${prefix}-creds`,\n    roleArn: crossaccountArn,\n});\n// register root bucket\nconst thisMwsStorageConfigurations = new databricks.MwsStorageConfigurations(\"this\", {\n    accountId: databricksAccountId,\n    storageConfigurationName: `${prefix}-storage`,\n    bucketName: rootBucket,\n});\n// register VPC\nconst thisMwsNetworks = new databricks.MwsNetworks(\"this\", {\n    accountId: databricksAccountId,\n    networkName: `${prefix}-network`,\n    vpcId: vpcId,\n    subnetIds: subnetsPrivate,\n    securityGroupIds: [securityGroup],\n});\n// create workspace in given VPC with DBFS on root bucket\nconst thisMwsWorkspaces = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    workspaceName: prefix,\n    awsRegion: region,\n    credentialsId: _this.credentialsId,\n    storageConfigurationId: thisMwsStorageConfigurations.storageConfigurationId,\n    networkId: thisMwsNetworks.networkId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account ID that can be found in the dropdown under the email address in the upper-right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\n# register cross-account ARN\nthis = databricks.MwsCredentials(\"this\",\n    account_id=databricks_account_id,\n    credentials_name=f\"{prefix}-creds\",\n    role_arn=crossaccount_arn)\n# register root bucket\nthis_mws_storage_configurations = databricks.MwsStorageConfigurations(\"this\",\n    account_id=databricks_account_id,\n    storage_configuration_name=f\"{prefix}-storage\",\n    bucket_name=root_bucket)\n# register VPC\nthis_mws_networks = databricks.MwsNetworks(\"this\",\n    account_id=databricks_account_id,\n    network_name=f\"{prefix}-network\",\n    vpc_id=vpc_id,\n    subnet_ids=subnets_private,\n    security_group_ids=[security_group])\n# create workspace in given VPC with DBFS on root bucket\nthis_mws_workspaces = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    workspace_name=prefix,\n    aws_region=region,\n    
credentials_id=this.credentials_id,\n    storage_configuration_id=this_mws_storage_configurations.storage_configuration_id,\n    network_id=this_mws_networks.network_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account ID that can be found in the dropdown under the email address in the upper-right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    // register cross-account ARN\n    var @this = new Databricks.MwsCredentials(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsName = $\"{prefix}-creds\",\n        RoleArn = crossaccountArn,\n    });\n\n    // register root bucket\n    var thisMwsStorageConfigurations = new Databricks.MwsStorageConfigurations(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        StorageConfigurationName = $\"{prefix}-storage\",\n        BucketName = rootBucket,\n    });\n\n    // register VPC\n    var thisMwsNetworks = new Databricks.MwsNetworks(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        NetworkName = $\"{prefix}-network\",\n        VpcId = vpcId,\n        SubnetIds = subnetsPrivate,\n        SecurityGroupIds = new[]\n        {\n            securityGroup,\n        },\n    });\n\n    // create workspace in given VPC with DBFS on root bucket\n    var thisMwsWorkspaces = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        WorkspaceName = prefix,\n        AwsRegion = region,\n        CredentialsId = @this.CredentialsId,\n        StorageConfigurationId = thisMwsStorageConfigurations.StorageConfigurationId,\n        NetworkId = thisMwsNetworks.NetworkId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account ID that can be found in the dropdown under the email address in the upper-right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\t// register cross-account ARN\n\t\tthis, err := databricks.NewMwsCredentials(ctx, \"this\", \u0026databricks.MwsCredentialsArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tCredentialsName: pulumi.Sprintf(\"%v-creds\", prefix),\n\t\t\tRoleArn:         pulumi.Any(crossaccountArn),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// register root bucket\n\t\tthisMwsStorageConfigurations, err := databricks.NewMwsStorageConfigurations(ctx, \"this\", \u0026databricks.MwsStorageConfigurationsArgs{\n\t\t\tAccountId:                pulumi.Any(databricksAccountId),\n\t\t\tStorageConfigurationName: pulumi.Sprintf(\"%v-storage\", prefix),\n\t\t\tBucketName:               pulumi.Any(rootBucket),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// register VPC\n\t\tthisMwsNetworks, err := databricks.NewMwsNetworks(ctx, \"this\", \u0026databricks.MwsNetworksArgs{\n\t\t\tAccountId:   pulumi.Any(databricksAccountId),\n\t\t\tNetworkName: pulumi.Sprintf(\"%v-network\", prefix),\n\t\t\tVpcId:       pulumi.Any(vpcId),\n\t\t\tSubnetIds:   
pulumi.Any(subnetsPrivate),\n\t\t\tSecurityGroupIds: pulumi.StringArray{\n\t\t\t\tsecurityGroup,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// create workspace in given VPC with DBFS on root bucket\n\t\t_, err = databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tWorkspaceName:          pulumi.Any(prefix),\n\t\t\tAwsRegion:              pulumi.Any(region),\n\t\t\tCredentialsId:          this.CredentialsId,\n\t\t\tStorageConfigurationId: thisMwsStorageConfigurations.StorageConfigurationId,\n\t\t\tNetworkId:              thisMwsNetworks.NetworkId,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsCredentials;\nimport com.pulumi.databricks.MwsCredentialsArgs;\nimport com.pulumi.databricks.MwsStorageConfigurations;\nimport com.pulumi.databricks.MwsStorageConfigurationsArgs;\nimport com.pulumi.databricks.MwsNetworks;\nimport com.pulumi.databricks.MwsNetworksArgs;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        // register cross-account ARN\n        var this_ = new MwsCredentials(\"this\", MwsCredentialsArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsName(String.format(\"%s-creds\", prefix))\n            .roleArn(crossaccountArn)\n            .build());\n\n        // register root bucket\n        var thisMwsStorageConfigurations = new MwsStorageConfigurations(\"thisMwsStorageConfigurations\", MwsStorageConfigurationsArgs.builder()\n            .accountId(databricksAccountId)\n            .storageConfigurationName(String.format(\"%s-storage\", prefix))\n            .bucketName(rootBucket)\n            .build());\n\n        // register VPC\n        var thisMwsNetworks = new MwsNetworks(\"thisMwsNetworks\", MwsNetworksArgs.builder()\n            .accountId(databricksAccountId)\n            .networkName(String.format(\"%s-network\", prefix))\n            .vpcId(vpcId)\n            .subnetIds(subnetsPrivate)\n            .securityGroupIds(securityGroup)\n            .build());\n\n        // create workspace in given VPC with DBFS on root bucket\n        var thisMwsWorkspaces = new MwsWorkspaces(\"thisMwsWorkspaces\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .workspaceName(prefix)\n            .awsRegion(region)\n            .credentialsId(this_.credentialsId())\n            .storageConfigurationId(thisMwsStorageConfigurations.storageConfigurationId())\n            .networkId(thisMwsNetworks.networkId())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  # register cross-account ARN\n  this:\n    type: databricks:MwsCredentials\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsName: ${prefix}-creds\n      roleArn: ${crossaccountArn}\n  
# register root bucket\n  thisMwsStorageConfigurations:\n    type: databricks:MwsStorageConfigurations\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      storageConfigurationName: ${prefix}-storage\n      bucketName: ${rootBucket}\n  # register VPC\n  thisMwsNetworks:\n    type: databricks:MwsNetworks\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      networkName: ${prefix}-network\n      vpcId: ${vpcId}\n      subnetIds: ${subnetsPrivate}\n      securityGroupIds:\n        - ${securityGroup}\n  # create workspace in given VPC with DBFS on root bucket\n  thisMwsWorkspaces:\n    type: databricks:MwsWorkspaces\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      workspaceName: ${prefix}\n      awsRegion: ${region}\n      credentialsId: ${this.credentialsId}\n      storageConfigurationId: ${thisMwsStorageConfigurations.storageConfigurationId}\n      networkId: ${thisMwsNetworks.networkId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Creating a workspace on AWS with Databricks-Managed VPC\n\n![VPCs](https://docs.databricks.com/_images/customer-managed-vpc.png)\n\nBy default, Databricks creates a VPC in your AWS account for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eso that you can configure it according to your organization's enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC. 
Please see the difference described through IAM policy actions [on this page](https://docs.databricks.com/administration-guide/account-api/iam-role.html).\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as random from \"@pulumi/random\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst naming = new random.index.String(\"naming\", {\n    special: false,\n    upper: false,\n    length: 6,\n});\nconst prefix = `dltp${naming.result}`;\nconst _this = databricks.getAwsAssumeRolePolicy({\n    externalId: databricksAccountId,\n});\nconst crossAccountRole = new aws.index.IamRole(\"cross_account_role\", {\n    name: `${prefix}-crossaccount`,\n    assumeRolePolicy: _this.json,\n    tags: tags,\n});\nconst thisGetAwsCrossAccountPolicy = databricks.getAwsCrossAccountPolicy({});\nconst thisIamRolePolicy = new aws.index.IamRolePolicy(\"this\", {\n    name: `${prefix}-policy`,\n    role: crossAccountRole.id,\n    policy: thisGetAwsCrossAccountPolicy.json,\n});\nconst thisMwsCredentials = new databricks.MwsCredentials(\"this\", {\n    accountId: databricksAccountId,\n    credentialsName: `${prefix}-creds`,\n    roleArn: crossAccountRole.arn,\n});\nconst rootStorageBucket = new aws.index.S3Bucket(\"root_storage_bucket\", {\n    bucket: `${prefix}-rootbucket`,\n    acl: \"private\",\n    forceDestroy: true,\n    tags: tags,\n});\nconst rootVersioning = new aws.index.S3BucketVersioning(\"root_versioning\", {\n    bucket: rootStorageBucket.id,\n    versioningConfiguration: [{\n        status: \"Disabled\",\n    }],\n});\nconst rootStorageBucketS3BucketServerSideEncryptionConfiguration = new aws.index.S3BucketServerSideEncryptionConfiguration(\"root_storage_bucket\", {\n    bucket: rootStorageBucket.bucket,\n    rule: [{\n        applyServerSideEncryptionByDefault: [{\n            sseAlgorithm: \"AES256\",\n        }],\n    }],\n});\nconst rootStorageBucketS3BucketPublicAccessBlock = new aws.index.S3BucketPublicAccessBlock(\"root_storage_bucket\", {\n    bucket: rootStorageBucket.id,\n    blockPublicAcls: true,\n    blockPublicPolicy: true,\n    ignorePublicAcls: true,\n    restrictPublicBuckets: true,\n}, {\n    dependsOn: [rootStorageBucket],\n});\nconst thisGetAwsBucketPolicy = databricks.getAwsBucketPolicy({\n    bucket: rootStorageBucket.bucket,\n});\nconst rootBucketPolicy = new aws.index.S3BucketPolicy(\"root_bucket_policy\", {\n    bucket: rootStorageBucket.id,\n    policy: thisGetAwsBucketPolicy.json,\n}, {\n    dependsOn: [rootStorageBucketS3BucketPublicAccessBlock],\n});\nconst thisMwsStorageConfigurations = new databricks.MwsStorageConfigurations(\"this\", {\n    accountId: databricksAccountId,\n    storageConfigurationName: `${prefix}-storage`,\n    bucketName: rootStorageBucket.bucket,\n});\nconst thisMwsWorkspaces = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    workspaceName: prefix,\n    awsRegion: \"us-east-1\",\n    credentialsId: thisMwsCredentials.credentialsId,\n    storageConfigurationId: thisMwsStorageConfigurations.storageConfigurationId,\n    customTags: {\n        SoldToCode: \"1234\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\nimport 
pulumi_random as random\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\nnaming = random.index.String(\"naming\",\n    special=False,\n    upper=False,\n    length=6)\nprefix = f\"dltp{naming['result']}\"\nthis = databricks.get_aws_assume_role_policy(external_id=databricks_account_id)\ncross_account_role = aws.index.IamRole(\"cross_account_role\",\n    name=f\"{prefix}-crossaccount\",\n    assume_role_policy=this.json,\n    tags=tags)\nthis_get_aws_cross_account_policy = databricks.get_aws_cross_account_policy()\nthis_iam_role_policy = aws.index.IamRolePolicy(\"this\",\n    name=f\"{prefix}-policy\",\n    role=cross_account_role.id,\n    policy=this_get_aws_cross_account_policy.json)\nthis_mws_credentials = databricks.MwsCredentials(\"this\",\n    account_id=databricks_account_id,\n    credentials_name=f\"{prefix}-creds\",\n    role_arn=cross_account_role[\"arn\"])\nroot_storage_bucket = aws.index.S3Bucket(\"root_storage_bucket\",\n    bucket=f\"{prefix}-rootbucket\",\n    acl=\"private\",\n    force_destroy=True,\n    tags=tags)\nroot_versioning = aws.index.S3BucketVersioning(\"root_versioning\",\n    bucket=root_storage_bucket.id,\n    versioning_configuration=[{\n        \"status\": \"Disabled\",\n    }])\nroot_storage_bucket_s3_bucket_server_side_encryption_configuration = aws.index.S3BucketServerSideEncryptionConfiguration(\"root_storage_bucket\",\n    bucket=root_storage_bucket.bucket,\n    rule=[{\n        \"applyServerSideEncryptionByDefault\": [{\n            \"sseAlgorithm\": \"AES256\",\n        }],\n    }])\nroot_storage_bucket_s3_bucket_public_access_block = aws.index.S3BucketPublicAccessBlock(\"root_storage_bucket\",\n    bucket=root_storage_bucket.id,\n    block_public_acls=True,\n    block_public_policy=True,\n    ignore_public_acls=True,\n    restrict_public_buckets=True,\n    opts = pulumi.ResourceOptions(depends_on=[root_storage_bucket]))\nthis_get_aws_bucket_policy = databricks.get_aws_bucket_policy(bucket=root_storage_bucket[\"bucket\"])\nroot_bucket_policy = aws.index.S3BucketPolicy(\"root_bucket_policy\",\n    bucket=root_storage_bucket.id,\n    policy=this_get_aws_bucket_policy.json,\n    opts = pulumi.ResourceOptions(depends_on=[root_storage_bucket_s3_bucket_public_access_block]))\nthis_mws_storage_configurations = databricks.MwsStorageConfigurations(\"this\",\n    account_id=databricks_account_id,\n    storage_configuration_name=f\"{prefix}-storage\",\n    bucket_name=root_storage_bucket[\"bucket\"])\nthis_mws_workspaces = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    workspace_name=prefix,\n    aws_region=\"us-east-1\",\n    credentials_id=this_mws_credentials.credentials_id,\n    storage_configuration_id=this_mws_storage_configurations.storage_configuration_id,\n    custom_tags={\n        \"SoldToCode\": \"1234\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\nusing Random = Pulumi.Random;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var naming = new Random.Index.String(\"naming\", new()\n    {\n        Special = false,\n        Upper = false,\n        Length = 6,\n 
   });\n\n    var prefix = $\"dltp{naming.Result}\";\n\n    var @this = Databricks.GetAwsAssumeRolePolicy.Invoke(new()\n    {\n        ExternalId = databricksAccountId,\n    });\n\n    var crossAccountRole = new Aws.Index.IamRole(\"cross_account_role\", new()\n    {\n        Name = $\"{prefix}-crossaccount\",\n        AssumeRolePolicy = @this.Apply(getAwsAssumeRolePolicyResult =\u003e getAwsAssumeRolePolicyResult.Json),\n        Tags = tags,\n    });\n\n    var thisGetAwsCrossAccountPolicy = Databricks.GetAwsCrossAccountPolicy.Invoke();\n\n    var thisIamRolePolicy = new Aws.Index.IamRolePolicy(\"this\", new()\n    {\n        Name = $\"{prefix}-policy\",\n        Role = crossAccountRole.Id,\n        Policy = thisGetAwsCrossAccountPolicy.Apply(getAwsCrossAccountPolicyResult =\u003e getAwsCrossAccountPolicyResult.Json),\n    });\n\n    var thisMwsCredentials = new Databricks.MwsCredentials(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsName = $\"{prefix}-creds\",\n        RoleArn = crossAccountRole.Arn,\n    });\n\n    var rootStorageBucket = new Aws.Index.S3Bucket(\"root_storage_bucket\", new()\n    {\n        Bucket = $\"{prefix}-rootbucket\",\n        Acl = \"private\",\n        ForceDestroy = true,\n        Tags = tags,\n    });\n\n    var rootVersioning = new Aws.Index.S3BucketVersioning(\"root_versioning\", new()\n    {\n        Bucket = rootStorageBucket.Id,\n        VersioningConfiguration = new[]\n        {\n            \n            {\n                { \"status\", \"Disabled\" },\n            },\n        },\n    });\n\n    var rootStorageBucketS3BucketServerSideEncryptionConfiguration = new Aws.Index.S3BucketServerSideEncryptionConfiguration(\"root_storage_bucket\", new()\n    {\n        Bucket = rootStorageBucket.Bucket,\n        Rule = new[]\n        {\n            \n            {\n                { \"applyServerSideEncryptionByDefault\", new[]\n                {\n                    \n                    {\n                        { \"sseAlgorithm\", \"AES256\" },\n                    },\n                } },\n            },\n        },\n    });\n\n    var rootStorageBucketS3BucketPublicAccessBlock = new Aws.Index.S3BucketPublicAccessBlock(\"root_storage_bucket\", new()\n    {\n        Bucket = rootStorageBucket.Id,\n        BlockPublicAcls = true,\n        BlockPublicPolicy = true,\n        IgnorePublicAcls = true,\n        RestrictPublicBuckets = true,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            rootStorageBucket,\n        },\n    });\n\n    var thisGetAwsBucketPolicy = Databricks.GetAwsBucketPolicy.Invoke(new()\n    {\n        Bucket = rootStorageBucket.Bucket,\n    });\n\n    var rootBucketPolicy = new Aws.Index.S3BucketPolicy(\"root_bucket_policy\", new()\n    {\n        Bucket = rootStorageBucket.Id,\n        Policy = thisGetAwsBucketPolicy.Apply(getAwsBucketPolicyResult =\u003e getAwsBucketPolicyResult.Json),\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            rootStorageBucketS3BucketPublicAccessBlock,\n        },\n    });\n\n    var thisMwsStorageConfigurations = new Databricks.MwsStorageConfigurations(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        StorageConfigurationName = $\"{prefix}-storage\",\n        BucketName = rootStorageBucket.Bucket,\n    });\n\n    var thisMwsWorkspaces = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        WorkspaceName = prefix,\n        AwsRegion = 
\"us-east-1\",\n        CredentialsId = thisMwsCredentials.CredentialsId,\n        StorageConfigurationId = thisMwsStorageConfigurations.StorageConfigurationId,\n        CustomTags = \n        {\n            { \"SoldToCode\", \"1234\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-random/sdk/v4/go/random\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tnaming, err := random.NewString(ctx, \"naming\", \u0026random.StringArgs{\n\t\t\tSpecial: false,\n\t\t\tUpper:   false,\n\t\t\tLength:  6,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tprefix := fmt.Sprintf(\"dltp%v\", naming.Result)\n\t\tthis, err := databricks.GetAwsAssumeRolePolicy(ctx, \u0026databricks.GetAwsAssumeRolePolicyArgs{\n\t\t\tExternalId: databricksAccountId,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcrossAccountRole, err := aws.NewIamRole(ctx, \"cross_account_role\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-crossaccount\", prefix),\n\t\t\tAssumeRolePolicy: this.Json,\n\t\t\tTags:             tags,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsCrossAccountPolicy, err := databricks.GetAwsCrossAccountPolicy(ctx, \u0026databricks.GetAwsCrossAccountPolicyArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewIamRolePolicy(ctx, \"this\", \u0026aws.IamRolePolicyArgs{\n\t\t\tName:   fmt.Sprintf(\"%v-policy\", prefix),\n\t\t\tRole:   crossAccountRole.Id,\n\t\t\tPolicy: thisGetAwsCrossAccountPolicy.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisMwsCredentials, err := databricks.NewMwsCredentials(ctx, \"this\", \u0026databricks.MwsCredentialsArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tCredentialsName: pulumi.Sprintf(\"%v-creds\", prefix),\n\t\t\tRoleArn:         crossAccountRole.Arn,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trootStorageBucket, err := aws.NewS3Bucket(ctx, \"root_storage_bucket\", \u0026aws.S3BucketArgs{\n\t\t\tBucket:       fmt.Sprintf(\"%v-rootbucket\", prefix),\n\t\t\tAcl:          \"private\",\n\t\t\tForceDestroy: true,\n\t\t\tTags:         tags,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketVersioning(ctx, \"root_versioning\", \u0026aws.S3BucketVersioningArgs{\n\t\t\tBucket: rootStorageBucket.Id,\n\t\t\tVersioningConfiguration: []map[string]interface{}{\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"status\": \"Disabled\",\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketServerSideEncryptionConfiguration(ctx, \"root_storage_bucket\", \u0026aws.S3BucketServerSideEncryptionConfigurationArgs{\n\t\t\tBucket: rootStorageBucket.Bucket,\n\t\t\tRule: []map[string]interface{}{\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"applyServerSideEncryptionByDefault\": []map[string]interface{}{\n\t\t\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\t\t\"sseAlgorithm\": \"AES256\",\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\trootStorageBucketS3BucketPublicAccessBlock, err := aws.NewS3BucketPublicAccessBlock(ctx, \"root_storage_bucket\", \u0026aws.S3BucketPublicAccessBlockArgs{\n\t\t\tBucket:                rootStorageBucket.Id,\n\t\t\tBlockPublicAcls:       true,\n\t\t\tBlockPublicPolicy:     true,\n\t\t\tIgnorePublicAcls:      true,\n\t\t\tRestrictPublicBuckets: true,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\trootStorageBucket,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsBucketPolicy, err := databricks.GetAwsBucketPolicy(ctx, \u0026databricks.GetAwsBucketPolicyArgs{\n\t\t\tBucket: rootStorageBucket.Bucket,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketPolicy(ctx, \"root_bucket_policy\", \u0026aws.S3BucketPolicyArgs{\n\t\t\tBucket: rootStorageBucket.Id,\n\t\t\tPolicy: thisGetAwsBucketPolicy.Json,\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\trootStorageBucketS3BucketPublicAccessBlock,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisMwsStorageConfigurations, err := databricks.NewMwsStorageConfigurations(ctx, \"this\", \u0026databricks.MwsStorageConfigurationsArgs{\n\t\t\tAccountId:                pulumi.Any(databricksAccountId),\n\t\t\tStorageConfigurationName: pulumi.Sprintf(\"%v-storage\", prefix),\n\t\t\tBucketName:               rootStorageBucket.Bucket,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:              pulumi.Any(databricksAccountId),\n\t\t\tWorkspaceName:          pulumi.String(prefix),\n\t\t\tAwsRegion:              pulumi.String(\"us-east-1\"),\n\t\t\tCredentialsId:          thisMwsCredentials.CredentialsId,\n\t\t\tStorageConfigurationId: thisMwsStorageConfigurations.StorageConfigurationId,\n\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\"SoldToCode\": pulumi.String(\"1234\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.random.String;\nimport com.pulumi.random.StringArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsAssumeRolePolicyArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.databricks.inputs.GetAwsCrossAccountPolicyArgs;\nimport com.pulumi.aws.IamRolePolicy;\nimport com.pulumi.aws.IamRolePolicyArgs;\nimport com.pulumi.databricks.MwsCredentials;\nimport com.pulumi.databricks.MwsCredentialsArgs;\nimport com.pulumi.aws.S3Bucket;\nimport com.pulumi.aws.S3BucketArgs;\nimport com.pulumi.aws.S3BucketVersioning;\nimport com.pulumi.aws.S3BucketVersioningArgs;\nimport com.pulumi.aws.S3BucketServerSideEncryptionConfiguration;\nimport com.pulumi.aws.S3BucketServerSideEncryptionConfigurationArgs;\nimport com.pulumi.aws.S3BucketPublicAccessBlock;\nimport com.pulumi.aws.S3BucketPublicAccessBlockArgs;\nimport com.pulumi.databricks.inputs.GetAwsBucketPolicyArgs;\nimport com.pulumi.aws.S3BucketPolicy;\nimport com.pulumi.aws.S3BucketPolicyArgs;\nimport com.pulumi.databricks.MwsStorageConfigurations;\nimport com.pulumi.databricks.MwsStorageConfigurationsArgs;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport 
java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(java.lang.String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        var naming = new String(\"naming\", StringArgs.builder()\n            .special(false)\n            .upper(false)\n            .length(6)\n            .build());\n\n        final var prefix = String.format(\"dltp%s\", naming.result());\n\n        final var this_ = DatabricksFunctions.getAwsAssumeRolePolicy(GetAwsAssumeRolePolicyArgs.builder()\n            .externalId(databricksAccountId)\n            .build());\n\n        var crossAccountRole = new IamRole(\"crossAccountRole\", IamRoleArgs.builder()\n            .name(String.format(\"%s-crossaccount\", prefix))\n            .assumeRolePolicy(this_.json())\n            .tags(tags)\n            .build());\n\n        final var thisGetAwsCrossAccountPolicy = DatabricksFunctions.getAwsCrossAccountPolicy(GetAwsCrossAccountPolicyArgs.builder()\n            .build());\n\n        var thisIamRolePolicy = new IamRolePolicy(\"thisIamRolePolicy\", IamRolePolicyArgs.builder()\n            .name(String.format(\"%s-policy\", prefix))\n            .role(crossAccountRole.id())\n            .policy(thisGetAwsCrossAccountPolicy.json())\n            .build());\n\n        var thisMwsCredentials = new MwsCredentials(\"thisMwsCredentials\", MwsCredentialsArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsName(String.format(\"%s-creds\", prefix))\n            .roleArn(crossAccountRole.arn())\n            .build());\n\n        var rootStorageBucket = new S3Bucket(\"rootStorageBucket\", S3BucketArgs.builder()\n            .bucket(String.format(\"%s-rootbucket\", prefix))\n            .acl(\"private\")\n            .forceDestroy(true)\n            .tags(tags)\n            .build());\n\n        var rootVersioning = new S3BucketVersioning(\"rootVersioning\", S3BucketVersioningArgs.builder()\n            .bucket(rootStorageBucket.id())\n            .versioningConfiguration(List.of(Map.of(\"status\", \"Disabled\")))\n            .build());\n\n        var rootStorageBucketS3BucketServerSideEncryptionConfiguration = new S3BucketServerSideEncryptionConfiguration(\"rootStorageBucketS3BucketServerSideEncryptionConfiguration\", S3BucketServerSideEncryptionConfigurationArgs.builder()\n            .bucket(rootStorageBucket.bucket())\n            .rule(List.of(Map.of(\"applyServerSideEncryptionByDefault\", List.of(Map.of(\"sseAlgorithm\", \"AES256\")))))\n            .build());\n\n        var rootStorageBucketS3BucketPublicAccessBlock = new S3BucketPublicAccessBlock(\"rootStorageBucketS3BucketPublicAccessBlock\", S3BucketPublicAccessBlockArgs.builder()\n            .bucket(rootStorageBucket.id())\n            .blockPublicAcls(true)\n            .blockPublicPolicy(true)\n            .ignorePublicAcls(true)\n            .restrictPublicBuckets(true)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(rootStorageBucket))\n                .build());\n\n        final var thisGetAwsBucketPolicy = DatabricksFunctions.getAwsBucketPolicy(GetAwsBucketPolicyArgs.builder()\n            .bucket(rootStorageBucket.bucket())\n            .build());\n\n        var rootBucketPolicy = new S3BucketPolicy(\"rootBucketPolicy\", S3BucketPolicyArgs.builder()\n            
.bucket(rootStorageBucket.id())\n            .policy(thisGetAwsBucketPolicy.json())\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(List.of(rootStorageBucketS3BucketPublicAccessBlock))\n                .build());\n\n        var thisMwsStorageConfigurations = new MwsStorageConfigurations(\"thisMwsStorageConfigurations\", MwsStorageConfigurationsArgs.builder()\n            .accountId(databricksAccountId)\n            .storageConfigurationName(String.format(\"%s-storage\", prefix))\n            .bucketName(rootStorageBucket.bucket())\n            .build());\n\n        var thisMwsWorkspaces = new MwsWorkspaces(\"thisMwsWorkspaces\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .workspaceName(prefix)\n            .awsRegion(\"us-east-1\")\n            .credentialsId(thisMwsCredentials.credentialsId())\n            .storageConfigurationId(thisMwsStorageConfigurations.storageConfigurationId())\n            .customTags(Map.of(\"SoldToCode\", \"1234\"))\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  naming:\n    type: random:String\n    properties:\n      special: false\n      upper: false\n      length: 6\n  crossAccountRole:\n    type: aws:IamRole\n    name: cross_account_role\n    properties:\n      name: ${prefix}-crossaccount\n      assumeRolePolicy: ${this.json}\n      tags: ${tags}\n  thisIamRolePolicy:\n    type: aws:IamRolePolicy\n    name: this\n    properties:\n      name: ${prefix}-policy\n      role: ${crossAccountRole.id}\n      policy: ${thisGetAwsCrossAccountPolicy.json}\n  thisMwsCredentials:\n    type: databricks:MwsCredentials\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsName: ${prefix}-creds\n      roleArn: ${crossAccountRole.arn}\n  rootStorageBucket:\n    type: aws:S3Bucket\n    name: root_storage_bucket\n    properties:\n      bucket: ${prefix}-rootbucket\n      acl: private\n      forceDestroy: true\n      tags: ${tags}\n  rootVersioning:\n    type: aws:S3BucketVersioning\n    name: root_versioning\n    properties:\n      bucket: ${rootStorageBucket.id}\n      versioningConfiguration:\n        - status: Disabled\n  rootStorageBucketS3BucketServerSideEncryptionConfiguration:\n    type: aws:S3BucketServerSideEncryptionConfiguration\n    name: root_storage_bucket\n    properties:\n      bucket: ${rootStorageBucket.bucket}\n      rule:\n        - applyServerSideEncryptionByDefault:\n            - sseAlgorithm: AES256\n  rootStorageBucketS3BucketPublicAccessBlock:\n    type: aws:S3BucketPublicAccessBlock\n    name: root_storage_bucket\n    properties:\n      bucket: ${rootStorageBucket.id}\n      blockPublicAcls: true\n      blockPublicPolicy: true\n      ignorePublicAcls: true\n      restrictPublicBuckets: true\n    options:\n      dependsOn:\n        - ${rootStorageBucket}\n  rootBucketPolicy:\n    type: aws:S3BucketPolicy\n    name: root_bucket_policy\n    properties:\n      bucket: ${rootStorageBucket.id}\n      policy: ${thisGetAwsBucketPolicy.json}\n    options:\n      dependsOn:\n        - ${rootStorageBucketS3BucketPublicAccessBlock}\n  thisMwsStorageConfigurations:\n    type: databricks:MwsStorageConfigurations\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      storageConfigurationName: ${prefix}-storage\n      bucketName: ${rootStorageBucket.bucket}\n  thisMwsWorkspaces:\n    type: databricks:MwsWorkspaces\n    name: this\n    properties:\n     
 accountId: ${databricksAccountId}\n      workspaceName: ${prefix}\n      awsRegion: us-east-1\n      credentialsId: ${thisMwsCredentials.credentialsId}\n      storageConfigurationId: ${thisMwsStorageConfigurations.storageConfigurationId}\n      customTags:\n        SoldToCode: '1234'\nvariables:\n  prefix: dltp${naming.result}\n  this:\n    fn::invoke:\n      function: databricks:getAwsAssumeRolePolicy\n      arguments:\n        externalId: ${databricksAccountId}\n  thisGetAwsCrossAccountPolicy:\n    fn::invoke:\n      function: databricks:getAwsCrossAccountPolicy\n      arguments: {}\n  thisGetAwsBucketPolicy:\n    fn::invoke:\n      function: databricks:getAwsBucketPolicy\n      arguments:\n        bucket: ${rootStorageBucket.bucket}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn order to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) please ensure that you have read and understood the [Enable Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) documentation and then customise the example above with the relevant examples from mws_vpc_endpoint,\u003cspan pulumi-lang-nodejs=\" mwsPrivateAccessSettings \" pulumi-lang-dotnet=\" MwsPrivateAccessSettings \" pulumi-lang-go=\" mwsPrivateAccessSettings \" pulumi-lang-python=\" mws_private_access_settings \" pulumi-lang-yaml=\" mwsPrivateAccessSettings \" pulumi-lang-java=\" mwsPrivateAccessSettings \"\u003e mws_private_access_settings \u003c/span\u003eand mws_networks.\n\n### Creating a workspace on GCP\n\nTo get workspace running, you have to configure a network object:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003e- (optional, but recommended) You can share one [customer-managed VPC](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/customer-managed-vpc.html) with multiple workspaces in a single account. You do not have to create a new VPC for each workspace. However, you cannot reuse subnets with other resources, including other workspaces or non-Databricks resources. If you plan to share one VPC with multiple workspaces, be sure to size your VPC and subnets accordingly. 
Because a Databricks\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eencapsulates this information, you cannot reuse it across workspaces.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst databricksGoogleServiceAccount = config.requireObject\u003cany\u003e(\"databricksGoogleServiceAccount\");\nconst googleProject = config.requireObject\u003cany\u003e(\"googleProject\");\n// register VPC\nconst _this = new databricks.MwsNetworks(\"this\", {\n    accountId: databricksAccountId,\n    networkName: `${prefix}-network`,\n    gcpNetworkInfo: {\n        networkProjectId: googleProject,\n        vpcId: vpcId,\n        subnetId: subnetId,\n        subnetRegion: subnetRegion,\n        podIpRangeName: \"pods\",\n        serviceIpRangeName: \"svc\",\n    },\n});\n// create workspace in given VPC\nconst thisMwsWorkspaces = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    workspaceName: prefix,\n    location: subnetRegion,\n    cloudResourceContainer: {\n        gcp: {\n            projectId: googleProject,\n        },\n    },\n    networkId: _this.networkId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\ndatabricks_google_service_account = config.require_object(\"databricksGoogleServiceAccount\")\ngoogle_project = config.require_object(\"googleProject\")\n# register VPC\nthis = databricks.MwsNetworks(\"this\",\n    account_id=databricks_account_id,\n    network_name=f\"{prefix}-network\",\n    gcp_network_info={\n        \"network_project_id\": google_project,\n        \"vpc_id\": vpc_id,\n        \"subnet_id\": subnet_id,\n        \"subnet_region\": subnet_region,\n        \"pod_ip_range_name\": \"pods\",\n        \"service_ip_range_name\": \"svc\",\n    })\n# create workspace in given VPC\nthis_mws_workspaces = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    workspace_name=prefix,\n    location=subnet_region,\n    cloud_resource_container={\n        \"gcp\": {\n            \"project_id\": google_project,\n        },\n    },\n    network_id=this.network_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var databricksGoogleServiceAccount = config.RequireObject\u003cdynamic\u003e(\"databricksGoogleServiceAccount\");\n    var googleProject = config.RequireObject\u003cdynamic\u003e(\"googleProject\");\n    // register VPC\n    var @this = new 
Databricks.MwsNetworks(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        NetworkName = $\"{prefix}-network\",\n        GcpNetworkInfo = new Databricks.Inputs.MwsNetworksGcpNetworkInfoArgs\n        {\n            NetworkProjectId = googleProject,\n            VpcId = vpcId,\n            SubnetId = subnetId,\n            SubnetRegion = subnetRegion,\n            PodIpRangeName = \"pods\",\n            ServiceIpRangeName = \"svc\",\n        },\n    });\n\n    // create workspace in given VPC\n    var thisMwsWorkspaces = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        WorkspaceName = prefix,\n        Location = subnetRegion,\n        CloudResourceContainer = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerArgs\n        {\n            Gcp = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerGcpArgs\n            {\n                ProjectId = googleProject,\n            },\n        },\n        NetworkId = @this.NetworkId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tdatabricksGoogleServiceAccount := cfg.RequireObject(\"databricksGoogleServiceAccount\")\n\t\tgoogleProject := cfg.RequireObject(\"googleProject\")\n\t\t// register VPC\n\t\tthis, err := databricks.NewMwsNetworks(ctx, \"this\", \u0026databricks.MwsNetworksArgs{\n\t\t\tAccountId:   pulumi.Any(databricksAccountId),\n\t\t\tNetworkName: pulumi.Sprintf(\"%v-network\", prefix),\n\t\t\tGcpNetworkInfo: \u0026databricks.MwsNetworksGcpNetworkInfoArgs{\n\t\t\t\tNetworkProjectId:   pulumi.Any(googleProject),\n\t\t\t\tVpcId:              pulumi.Any(vpcId),\n\t\t\t\tSubnetId:           pulumi.Any(subnetId),\n\t\t\t\tSubnetRegion:       pulumi.Any(subnetRegion),\n\t\t\t\tPodIpRangeName:     pulumi.String(\"pods\"),\n\t\t\t\tServiceIpRangeName: pulumi.String(\"svc\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// create workspace in given VPC\n\t\t_, err = databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:     pulumi.Any(databricksAccountId),\n\t\t\tWorkspaceName: pulumi.Any(prefix),\n\t\t\tLocation:      pulumi.Any(subnetRegion),\n\t\t\tCloudResourceContainer: \u0026databricks.MwsWorkspacesCloudResourceContainerArgs{\n\t\t\t\tGcp: \u0026databricks.MwsWorkspacesCloudResourceContainerGcpArgs{\n\t\t\t\t\tProjectId: pulumi.Any(googleProject),\n\t\t\t\t},\n\t\t\t},\n\t\t\tNetworkId: this.NetworkId,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MwsNetworks;\nimport com.pulumi.databricks.MwsNetworksArgs;\nimport com.pulumi.databricks.inputs.MwsNetworksGcpNetworkInfoArgs;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerGcpArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var databricksGoogleServiceAccount = config.get(\"databricksGoogleServiceAccount\");\n        final var googleProject = config.get(\"googleProject\");\n        // register VPC\n        var this_ = new MwsNetworks(\"this\", MwsNetworksArgs.builder()\n            .accountId(databricksAccountId)\n            .networkName(String.format(\"%s-network\", prefix))\n            .gcpNetworkInfo(MwsNetworksGcpNetworkInfoArgs.builder()\n                .networkProjectId(googleProject)\n                .vpcId(vpcId)\n                .subnetId(subnetId)\n                .subnetRegion(subnetRegion)\n                .podIpRangeName(\"pods\")\n                .serviceIpRangeName(\"svc\")\n                .build())\n            .build());\n\n        // create workspace in given VPC\n        var thisMwsWorkspaces = new MwsWorkspaces(\"thisMwsWorkspaces\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .workspaceName(prefix)\n            .location(subnetRegion)\n            .cloudResourceContainer(MwsWorkspacesCloudResourceContainerArgs.builder()\n                .gcp(MwsWorkspacesCloudResourceContainerGcpArgs.builder()\n                    .projectId(googleProject)\n                    .build())\n                .build())\n            .networkId(this_.networkId())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\n  databricksGoogleServiceAccount:\n    type: dynamic\n  googleProject:\n    type: dynamic\nresources:\n  # register VPC\n  this:\n    type: databricks:MwsNetworks\n    properties:\n      accountId: ${databricksAccountId}\n      networkName: ${prefix}-network\n      gcpNetworkInfo:\n        networkProjectId: ${googleProject}\n        vpcId: ${vpcId}\n        subnetId: ${subnetId}\n        subnetRegion: ${subnetRegion}\n        podIpRangeName: pods\n        serviceIpRangeName: svc\n  # create workspace in given VPC\n  thisMwsWorkspaces:\n    type: databricks:MwsWorkspaces\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      workspaceName: ${prefix}\n      location: ${subnetRegion}\n      cloudResourceContainer:\n        gcp:\n          projectId: ${googleProject}\n      networkId: ${this.networkId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn order to create a [Databricks Workspace that leverages GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) please ensure that you have read and understood the [Enable Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) documentation and then customise the example above with the relevant examples from mws_vpc_endpoint,\u003cspan pulumi-lang-nodejs=\" mwsPrivateAccessSettings \" pulumi-lang-dotnet=\" MwsPrivateAccessSettings \" pulumi-lang-go=\" mwsPrivateAccessSettings \" pulumi-lang-python=\" mws_private_access_settings \" pulumi-lang-yaml=\" mwsPrivateAccessSettings \" pulumi-lang-java=\" mwsPrivateAccessSettings \"\u003e mws_private_access_settings 
\u003c/span\u003eand mws_networks.\n\n### Creating a workspace on GCP with Databricks-Managed VPC\n\n![VPCs](https://docs.databricks.com/_images/customer-managed-vpc.png)\n\nBy default, Databricks creates a VPC in your GCP project for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eso that you can configure it according to your organization's enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as google from \"@pulumi/google\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst me = google.index.ClientOpenidUserinfo({});\nconst current = google.index.ClientConfig({});\nconst _this = new databricks.MwsWorkspaces(\"this\", {\n    accountId: databricksAccountId,\n    workspaceName: prefix,\n    location: current.region,\n    cloudResourceContainer: {\n        gcp: {\n            projectId: current.project,\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_google as google\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\nme = google.index.client_openid_userinfo()\ncurrent = google.index.client_config()\nthis = databricks.MwsWorkspaces(\"this\",\n    account_id=databricks_account_id,\n    workspace_name=prefix,\n    location=current[\"region\"],\n    cloud_resource_container={\n        \"gcp\": {\n            \"project_id\": current[\"project\"],\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Google = Pulumi.Google;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var me = Google.Index.ClientOpenidUserinfo.Invoke();\n\n    var current = Google.Index.ClientConfig.Invoke();\n\n    var @this = new Databricks.MwsWorkspaces(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        WorkspaceName = prefix,\n        Location = current.Region,\n        CloudResourceContainer = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerArgs\n        {\n            Gcp = new Databricks.Inputs.MwsWorkspacesCloudResourceContainerGcpArgs\n            {\n                ProjectId = current.Project,\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-google/sdk/go/google\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\t_, err := google.ClientOpenidUserinfo(ctx, map[string]interface{}{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcurrent, err := google.ClientConfig(ctx, map[string]interface{}{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewMwsWorkspaces(ctx, \"this\", \u0026databricks.MwsWorkspacesArgs{\n\t\t\tAccountId:     pulumi.Any(databricksAccountId),\n\t\t\tWorkspaceName: pulumi.Any(prefix),\n\t\t\tLocation:      pulumi.Any(current.Region),\n\t\t\tCloudResourceContainer: \u0026databricks.MwsWorkspacesCloudResourceContainerArgs{\n\t\t\t\tGcp: \u0026databricks.MwsWorkspacesCloudResourceContainerGcpArgs{\n\t\t\t\t\tProjectId: pulumi.Any(current.Project),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.google.GoogleFunctions;\nimport com.pulumi.databricks.MwsWorkspaces;\nimport com.pulumi.databricks.MwsWorkspacesArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerArgs;\nimport com.pulumi.databricks.inputs.MwsWorkspacesCloudResourceContainerGcpArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var me = GoogleFunctions.ClientOpenidUserinfo(Map.ofEntries(\n        ));\n\n        final var current = GoogleFunctions.ClientConfig(Map.ofEntries(\n        ));\n\n        var this_ = new MwsWorkspaces(\"this\", MwsWorkspacesArgs.builder()\n            .accountId(databricksAccountId)\n            .workspaceName(prefix)\n            .location(current.region())\n            .cloudResourceContainer(MwsWorkspacesCloudResourceContainerArgs.builder()\n                .gcp(MwsWorkspacesCloudResourceContainerGcpArgs.builder()\n                    .projectId(current.project())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  this:\n    type: databricks:MwsWorkspaces\n    properties:\n      accountId: ${databricksAccountId}\n      workspaceName: ${prefix}\n      location: ${current.region}\n      cloudResourceContainer:\n        gcp:\n          projectId: ${current.project}\nvariables:\n  me:\n    fn::invoke:\n      function: google:ClientOpenidUserinfo\n      arguments: {}\n  current:\n    fn::invoke:\n      function: google:ClientConfig\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n* 
Provisioning Databricks on AWS with Private Link guide.\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide.\n* Provisioning Databricks on GCP guide.\n* Provisioning Databricks workspaces on GCP with Private Service Connect guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003eto configure the cross-account role for creation of new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003eto configure KMS keys for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsLogDelivery \" pulumi-lang-dotnet=\" databricks.MwsLogDelivery \" pulumi-lang-go=\" MwsLogDelivery \" pulumi-lang-python=\" MwsLogDelivery \" pulumi-lang-yaml=\" databricks.MwsLogDelivery \" pulumi-lang-java=\" databricks.MwsLogDelivery \"\u003e databricks.MwsLogDelivery \u003c/span\u003eto configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003eto configure root bucket new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003eto create a [Private Access Setting](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html#step-5-create-a-private-access-settings-configuration-using-the-databricks-account-api) that can be used as part of a\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" 
databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eresource to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html).\n\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","secret":true},"awsRegion":{"type":"string","description":"region of VPC.\n"},"cloud":{"type":"string"},"cloudResourceContainer":{"$ref":"#/types/databricks:index/MwsWorkspacesCloudResourceContainer:MwsWorkspacesCloudResourceContainer","description":"A block that specifies GCP workspace configurations, consisting of following blocks:\n"},"computeMode":{"type":"string","description":"The compute mode for the workspace. When unset, a classic workspace is created, and both \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must be specified. When set to `SERVERLESS`, the resulting workspace is a serverless workspace, and \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must not be set. The only allowed value for this is `SERVERLESS`. Changing this field requires recreation of the workspace.\n"},"creationTime":{"type":"integer","description":"(Integer) time when workspace was created\n"},"credentialsId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e from credentials. This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"The custom tags key-value pairing that is attached to this workspace. 
These tags will be applied to clusters automatically in addition to any \u003cspan pulumi-lang-nodejs=\"`defaultTags`\" pulumi-lang-dotnet=\"`DefaultTags`\" pulumi-lang-go=\"`defaultTags`\" pulumi-lang-python=\"`default_tags`\" pulumi-lang-yaml=\"`defaultTags`\" pulumi-lang-java=\"`defaultTags`\"\u003e`default_tags`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e on a cluster level. Please note it can take up to an hour for\u003cspan pulumi-lang-nodejs=\" customTags \" pulumi-lang-dotnet=\" CustomTags \" pulumi-lang-go=\" customTags \" pulumi-lang-python=\" custom_tags \" pulumi-lang-yaml=\" customTags \" pulumi-lang-java=\" customTags \"\u003e custom_tags \u003c/span\u003eto be set due to scheduling on Control Plane. After custom tags are applied, they can be modified however they can never be completely removed.\n"},"customerManagedKeyId":{"type":"string","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" managedServicesCustomerManagedKeyId \" pulumi-lang-dotnet=\" ManagedServicesCustomerManagedKeyId \" pulumi-lang-go=\" managedServicesCustomerManagedKeyId \" pulumi-lang-python=\" managed_services_customer_managed_key_id \" pulumi-lang-yaml=\" managedServicesCustomerManagedKeyId \" pulumi-lang-java=\" managedServicesCustomerManagedKeyId \"\u003e managed_services_customer_managed_key_id \u003c/span\u003einstead"},"deploymentName":{"type":"string","description":"part of URL as in `https://\u003cprefix\u003e-\u003cdeployment-name\u003e.cloud.databricks.com`. Deployment name cannot be used until a deployment name prefix is defined. Please contact your Databricks representative. Once a new deployment prefix is added/updated, it only will affect the new workspaces created.\n"},"effectiveComputeMode":{"type":"string","description":"(String) The effective compute mode for the workspace. This is either `SERVERLESS` for serverless workspaces or `HYBRID` for classic workspaces.\n"},"expectedWorkspaceStatus":{"type":"string","description":"The expected status of the workspace. When unset, it defaults to `RUNNING`. When set to `PROVISIONING`, workspace provisioning will pause and not enter `RUNNING` status. The only allowed values for this is `RUNNING` and `PROVISIONING`.\n\n\u003e Databricks strongly recommends using OAuth instead of PATs for user account client authentication and authorization due to the improved security\n"},"externalCustomerInfo":{"$ref":"#/types/databricks:index/MwsWorkspacesExternalCustomerInfo:MwsWorkspacesExternalCustomerInfo"},"gcpManagedNetworkConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGcpManagedNetworkConfig:MwsWorkspacesGcpManagedNetworkConfig"},"gcpWorkspaceSa":{"type":"string","description":"(String, GCP only) identifier of a service account created for the workspace in form of `db-\u003cworkspace-id\u003e@prod-gcp-\u003cregion\u003e.iam.gserviceaccount.com`\n"},"gkeConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGkeConfig:MwsWorkspacesGkeConfig","deprecationMessage":"gke_config is deprecated and will be removed in a future release. 
For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-databricks-workspace"},"isNoPublicIpEnabled":{"type":"boolean"},"location":{"type":"string","description":"region of the subnet.\n"},"managedServicesCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `MANAGED_SERVICES`. This is used to encrypt the workspace's notebook and secret data in the control plane.\n"},"networkConnectivityConfigId":{"type":"string"},"networkId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`networkId`\" pulumi-lang-dotnet=\"`NetworkId`\" pulumi-lang-go=\"`networkId`\" pulumi-lang-python=\"`network_id`\" pulumi-lang-yaml=\"`networkId`\" pulumi-lang-java=\"`networkId`\"\u003e`network_id`\u003c/span\u003e from networks.\n"},"pricingTier":{"type":"string","description":"The pricing tier of the workspace.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003ein Databricks Account.\n"},"storageConfigurationId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e from storage configuration. This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n"},"storageCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `STORAGE`. 
This is used to encrypt the DBFS Storage \u0026 Cluster Volumes.\n"},"token":{"$ref":"#/types/databricks:index/MwsWorkspacesToken:MwsWorkspacesToken"},"workspaceId":{"type":"string","description":"(String) workspace id\n"},"workspaceName":{"type":"string","description":"name of the workspace, will appear on UI.\n"},"workspaceStatus":{"type":"string","description":"(String) workspace status\n"},"workspaceStatusMessage":{"type":"string","description":"(String) updates on workspace status\n"},"workspaceUrl":{"type":"string","description":"(String) URL of the workspace\n"}},"required":["accountId","cloud","creationTime","effectiveComputeMode","gcpWorkspaceSa","networkConnectivityConfigId","pricingTier","workspaceId","workspaceName","workspaceStatus","workspaceStatusMessage","workspaceUrl"],"inputProperties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","secret":true,"willReplaceOnChanges":true},"awsRegion":{"type":"string","description":"region of VPC.\n","willReplaceOnChanges":true},"cloud":{"type":"string"},"cloudResourceContainer":{"$ref":"#/types/databricks:index/MwsWorkspacesCloudResourceContainer:MwsWorkspacesCloudResourceContainer","description":"A block that specifies GCP workspace configurations, consisting of following blocks:\n","willReplaceOnChanges":true},"computeMode":{"type":"string","description":"The compute mode for the workspace. When unset, a classic workspace is created, and both \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must be specified. When set to `SERVERLESS`, the resulting workspace is a serverless workspace, and \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must not be set. The only allowed value for this is `SERVERLESS`. Changing this field requires recreation of the workspace.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) time when workspace was created\n"},"credentialsId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e from credentials. 
This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"The custom tags key-value pairing that is attached to this workspace. These tags will be applied to clusters automatically in addition to any \u003cspan pulumi-lang-nodejs=\"`defaultTags`\" pulumi-lang-dotnet=\"`DefaultTags`\" pulumi-lang-go=\"`defaultTags`\" pulumi-lang-python=\"`default_tags`\" pulumi-lang-yaml=\"`defaultTags`\" pulumi-lang-java=\"`defaultTags`\"\u003e`default_tags`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e on a cluster level. Please note it can take up to an hour for\u003cspan pulumi-lang-nodejs=\" customTags \" pulumi-lang-dotnet=\" CustomTags \" pulumi-lang-go=\" customTags \" pulumi-lang-python=\" custom_tags \" pulumi-lang-yaml=\" customTags \" pulumi-lang-java=\" customTags \"\u003e custom_tags \u003c/span\u003eto be set due to scheduling on Control Plane. After custom tags are applied, they can be modified however they can never be completely removed.\n"},"customerManagedKeyId":{"type":"string","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" managedServicesCustomerManagedKeyId \" pulumi-lang-dotnet=\" ManagedServicesCustomerManagedKeyId \" pulumi-lang-go=\" managedServicesCustomerManagedKeyId \" pulumi-lang-python=\" managed_services_customer_managed_key_id \" pulumi-lang-yaml=\" managedServicesCustomerManagedKeyId \" pulumi-lang-java=\" managedServicesCustomerManagedKeyId \"\u003e managed_services_customer_managed_key_id \u003c/span\u003einstead","willReplaceOnChanges":true},"deploymentName":{"type":"string","description":"part of URL as in `https://\u003cprefix\u003e-\u003cdeployment-name\u003e.cloud.databricks.com`. Deployment name cannot be used until a deployment name prefix is defined. Please contact your Databricks representative. Once a new deployment prefix is added/updated, it only will affect the new workspaces created.\n","willReplaceOnChanges":true},"expectedWorkspaceStatus":{"type":"string","description":"The expected status of the workspace. When unset, it defaults to `RUNNING`. When set to `PROVISIONING`, workspace provisioning will pause and not enter `RUNNING` status. The only allowed values for this is `RUNNING` and `PROVISIONING`.\n\n\u003e Databricks strongly recommends using OAuth instead of PATs for user account client authentication and authorization due to the improved security\n"},"externalCustomerInfo":{"$ref":"#/types/databricks:index/MwsWorkspacesExternalCustomerInfo:MwsWorkspacesExternalCustomerInfo","willReplaceOnChanges":true},"gcpManagedNetworkConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGcpManagedNetworkConfig:MwsWorkspacesGcpManagedNetworkConfig","willReplaceOnChanges":true},"gkeConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGkeConfig:MwsWorkspacesGkeConfig","deprecationMessage":"gke_config is deprecated and will be removed in a future release. 
For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-databricks-workspace","willReplaceOnChanges":true},"isNoPublicIpEnabled":{"type":"boolean","willReplaceOnChanges":true},"location":{"type":"string","description":"region of the subnet.\n","willReplaceOnChanges":true},"managedServicesCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `MANAGED_SERVICES`. This is used to encrypt the workspace's notebook and secret data in the control plane.\n"},"networkConnectivityConfigId":{"type":"string"},"networkId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`networkId`\" pulumi-lang-dotnet=\"`NetworkId`\" pulumi-lang-go=\"`networkId`\" pulumi-lang-python=\"`network_id`\" pulumi-lang-yaml=\"`networkId`\" pulumi-lang-java=\"`networkId`\"\u003e`network_id`\u003c/span\u003e from networks.\n"},"pricingTier":{"type":"string","description":"The pricing tier of the workspace.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003ein Databricks Account.\n"},"storageConfigurationId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e from storage configuration. 
This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n","willReplaceOnChanges":true},"storageCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `STORAGE`. This is used to encrypt the DBFS Storage \u0026 Cluster Volumes.\n"},"token":{"$ref":"#/types/databricks:index/MwsWorkspacesToken:MwsWorkspacesToken"},"workspaceId":{"type":"string","description":"(String) workspace id\n"},"workspaceName":{"type":"string","description":"name of the workspace, will appear on UI.\n","willReplaceOnChanges":true},"workspaceStatus":{"type":"string","description":"(String) workspace status\n"},"workspaceStatusMessage":{"type":"string","description":"(String) updates on workspace status\n"},"workspaceUrl":{"type":"string","description":"(String) URL of the workspace\n"}},"requiredInputs":["accountId","workspaceName"],"stateInputs":{"description":"Input properties used for looking up and filtering MwsWorkspaces resources.\n","properties":{"accountId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","secret":true,"willReplaceOnChanges":true},"awsRegion":{"type":"string","description":"region of VPC.\n","willReplaceOnChanges":true},"cloud":{"type":"string"},"cloudResourceContainer":{"$ref":"#/types/databricks:index/MwsWorkspacesCloudResourceContainer:MwsWorkspacesCloudResourceContainer","description":"A block that specifies GCP workspace configurations, consisting of following blocks:\n","willReplaceOnChanges":true},"computeMode":{"type":"string","description":"The compute mode for the workspace. When unset, a classic workspace is created, and both \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must be specified. 
When set to `SERVERLESS`, the resulting workspace is a serverless workspace, and \u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e must not be set. The only allowed value for this is `SERVERLESS`. Changing this field requires recreation of the workspace.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer","description":"(Integer) time when workspace was created\n"},"credentialsId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e from credentials. This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n"},"customTags":{"type":"object","additionalProperties":{"type":"string"},"description":"The custom tags key-value pairing that is attached to this workspace. These tags will be applied to clusters automatically in addition to any \u003cspan pulumi-lang-nodejs=\"`defaultTags`\" pulumi-lang-dotnet=\"`DefaultTags`\" pulumi-lang-go=\"`defaultTags`\" pulumi-lang-python=\"`default_tags`\" pulumi-lang-yaml=\"`defaultTags`\" pulumi-lang-java=\"`defaultTags`\"\u003e`default_tags`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`customTags`\" pulumi-lang-dotnet=\"`CustomTags`\" pulumi-lang-go=\"`customTags`\" pulumi-lang-python=\"`custom_tags`\" pulumi-lang-yaml=\"`customTags`\" pulumi-lang-java=\"`customTags`\"\u003e`custom_tags`\u003c/span\u003e on a cluster level. Please note it can take up to an hour for\u003cspan pulumi-lang-nodejs=\" customTags \" pulumi-lang-dotnet=\" CustomTags \" pulumi-lang-go=\" customTags \" pulumi-lang-python=\" custom_tags \" pulumi-lang-yaml=\" customTags \" pulumi-lang-java=\" customTags \"\u003e custom_tags \u003c/span\u003eto be set due to scheduling on Control Plane. After custom tags are applied, they can be modified however they can never be completely removed.\n"},"customerManagedKeyId":{"type":"string","deprecationMessage":"Use\u003cspan pulumi-lang-nodejs=\" managedServicesCustomerManagedKeyId \" pulumi-lang-dotnet=\" ManagedServicesCustomerManagedKeyId \" pulumi-lang-go=\" managedServicesCustomerManagedKeyId \" pulumi-lang-python=\" managed_services_customer_managed_key_id \" pulumi-lang-yaml=\" managedServicesCustomerManagedKeyId \" pulumi-lang-java=\" managedServicesCustomerManagedKeyId \"\u003e managed_services_customer_managed_key_id \u003c/span\u003einstead","willReplaceOnChanges":true},"deploymentName":{"type":"string","description":"part of URL as in `https://\u003cprefix\u003e-\u003cdeployment-name\u003e.cloud.databricks.com`. 
Deployment name cannot be used until a deployment name prefix is defined. Please contact your Databricks representative. Once a new deployment prefix is added/updated, it only will affect the new workspaces created.\n","willReplaceOnChanges":true},"effectiveComputeMode":{"type":"string","description":"(String) The effective compute mode for the workspace. This is either `SERVERLESS` for serverless workspaces or `HYBRID` for classic workspaces.\n"},"expectedWorkspaceStatus":{"type":"string","description":"The expected status of the workspace. When unset, it defaults to `RUNNING`. When set to `PROVISIONING`, workspace provisioning will pause and not enter `RUNNING` status. The only allowed values for this is `RUNNING` and `PROVISIONING`.\n\n\u003e Databricks strongly recommends using OAuth instead of PATs for user account client authentication and authorization due to the improved security\n"},"externalCustomerInfo":{"$ref":"#/types/databricks:index/MwsWorkspacesExternalCustomerInfo:MwsWorkspacesExternalCustomerInfo","willReplaceOnChanges":true},"gcpManagedNetworkConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGcpManagedNetworkConfig:MwsWorkspacesGcpManagedNetworkConfig","willReplaceOnChanges":true},"gcpWorkspaceSa":{"type":"string","description":"(String, GCP only) identifier of a service account created for the workspace in form of `db-\u003cworkspace-id\u003e@prod-gcp-\u003cregion\u003e.iam.gserviceaccount.com`\n"},"gkeConfig":{"$ref":"#/types/databricks:index/MwsWorkspacesGkeConfig:MwsWorkspacesGkeConfig","deprecationMessage":"gke_config is deprecated and will be removed in a future release. For more information, review the documentation at https://registry.terraform.io/providers/databricks/databricks/1.109.0/docs/guides/gcp-workspace#creating-a-databricks-workspace","willReplaceOnChanges":true},"isNoPublicIpEnabled":{"type":"boolean","willReplaceOnChanges":true},"location":{"type":"string","description":"region of the subnet.\n","willReplaceOnChanges":true},"managedServicesCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `MANAGED_SERVICES`. 
This is used to encrypt the workspace's notebook and secret data in the control plane.\n"},"networkConnectivityConfigId":{"type":"string"},"networkId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`networkId`\" pulumi-lang-dotnet=\"`NetworkId`\" pulumi-lang-go=\"`networkId`\" pulumi-lang-python=\"`network_id`\" pulumi-lang-yaml=\"`networkId`\" pulumi-lang-java=\"`networkId`\"\u003e`network_id`\u003c/span\u003e from networks.\n"},"pricingTier":{"type":"string","description":"The pricing tier of the workspace.\n"},"privateAccessSettingsId":{"type":"string","description":"Canonical unique identifier of\u003cspan pulumi-lang-nodejs=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-dotnet=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-go=\" MwsPrivateAccessSettings \" pulumi-lang-python=\" MwsPrivateAccessSettings \" pulumi-lang-yaml=\" databricks.MwsPrivateAccessSettings \" pulumi-lang-java=\" databricks.MwsPrivateAccessSettings \"\u003e databricks.MwsPrivateAccessSettings \u003c/span\u003ein Databricks Account.\n"},"storageConfigurationId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`storageConfigurationId`\" pulumi-lang-dotnet=\"`StorageConfigurationId`\" pulumi-lang-go=\"`storageConfigurationId`\" pulumi-lang-python=\"`storage_configuration_id`\" pulumi-lang-yaml=\"`storageConfigurationId`\" pulumi-lang-java=\"`storageConfigurationId`\"\u003e`storage_configuration_id`\u003c/span\u003e from storage configuration. This must not be specified when \u003cspan pulumi-lang-nodejs=\"`computeMode`\" pulumi-lang-dotnet=\"`ComputeMode`\" pulumi-lang-go=\"`computeMode`\" pulumi-lang-python=\"`compute_mode`\" pulumi-lang-yaml=\"`computeMode`\" pulumi-lang-java=\"`computeMode`\"\u003e`compute_mode`\u003c/span\u003e is set to `SERVERLESS`.\n","willReplaceOnChanges":true},"storageCustomerManagedKeyId":{"type":"string","description":"\u003cspan pulumi-lang-nodejs=\"`customerManagedKeyId`\" pulumi-lang-dotnet=\"`CustomerManagedKeyId`\" pulumi-lang-go=\"`customerManagedKeyId`\" pulumi-lang-python=\"`customer_managed_key_id`\" pulumi-lang-yaml=\"`customerManagedKeyId`\" pulumi-lang-java=\"`customerManagedKeyId`\"\u003e`customer_managed_key_id`\u003c/span\u003e from customer managed keys with \u003cspan pulumi-lang-nodejs=\"`useCases`\" pulumi-lang-dotnet=\"`UseCases`\" pulumi-lang-go=\"`useCases`\" pulumi-lang-python=\"`use_cases`\" pulumi-lang-yaml=\"`useCases`\" pulumi-lang-java=\"`useCases`\"\u003e`use_cases`\u003c/span\u003e set to `STORAGE`. This is used to encrypt the DBFS Storage \u0026 Cluster Volumes.\n"},"token":{"$ref":"#/types/databricks:index/MwsWorkspacesToken:MwsWorkspacesToken"},"workspaceId":{"type":"string","description":"(String) workspace id\n"},"workspaceName":{"type":"string","description":"name of the workspace, will appear on UI.\n","willReplaceOnChanges":true},"workspaceStatus":{"type":"string","description":"(String) workspace status\n"},"workspaceStatusMessage":{"type":"string","description":"(String) updates on workspace status\n"},"workspaceUrl":{"type":"string","description":"(String) URL of the workspace\n"}},"type":"object"}},"databricks:index/notebook:Notebook":{"description":"This resource allows you to manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html). 
You can also work with\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.getNotebookPaths \" pulumi-lang-dotnet=\" databricks.getNotebookPaths \" pulumi-lang-go=\" getNotebookPaths \" pulumi-lang-python=\" get_notebook_paths \" pulumi-lang-yaml=\" databricks.getNotebookPaths \" pulumi-lang-java=\" databricks.getNotebookPaths \"\u003e databricks.getNotebookPaths \u003c/span\u003edata sources.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded notebook source code. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a notebook with configuration properties for a data pipeline.\n"},"format":{"type":"string"},"language":{"type":"string","description":"One of `SCALA`, `PYTHON`, `SQL`, `R`.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a NOTEBOOK\n"},"objectType":{"type":"string","deprecationMessage":"Always is a notebook"},"path":{"type":"string","description":"The absolute path of the notebook or directory, beginning with \"/\", e.g. \"/Demo\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/NotebookProviderConfig:NotebookProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to notebook in source code format on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"},"url":{"type":"string","description":"Routable URL of the notebook\n"},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"required":["format","language","objectId","objectType","path","url","workspacePath"],"inputProperties":{"contentBase64":{"type":"string","description":"The base64-encoded notebook source code. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. 
Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a notebook with configuration properties for a data pipeline.\n"},"format":{"type":"string"},"language":{"type":"string","description":"One of `SCALA`, `PYTHON`, `SQL`, `R`.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a NOTEBOOK\n"},"objectType":{"type":"string","deprecationMessage":"Always is a notebook"},"path":{"type":"string","description":"The absolute path of the notebook or directory, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/NotebookProviderConfig:NotebookProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to notebook in source code format on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"requiredInputs":["path"],"stateInputs":{"description":"Input properties used for looking up and filtering Notebook resources.\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded notebook source code. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it's increasing memory footprint of Pulumi state and should only be used in exceptional circumstances, like creating a notebook with configuration properties for a data pipeline.\n"},"format":{"type":"string"},"language":{"type":"string","description":"One of `SCALA`, `PYTHON`, `SQL`, `R`.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a NOTEBOOK\n"},"objectType":{"type":"string","deprecationMessage":"Always is a notebook"},"path":{"type":"string","description":"The absolute path of the notebook or directory, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/NotebookProviderConfig:NotebookProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to notebook in source code format on local filesystem. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"},"url":{"type":"string","description":"Routable URL of the notebook\n"},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"type":"object"}},"databricks:index/notificationDestination:NotificationDestination":{"description":"This resource allows you to manage [Notification Destinations](https://docs.databricks.com/api/workspace/notificationdestinations). Notification destinations are used to send notifications for query alerts and jobs to destinations outside of Databricks. Only workspace admins can create, update, and delete notification destinations.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n`Email` notification destination:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ndresource = new databricks.NotificationDestination(\"ndresource\", {\n    displayName: \"Notification Destination\",\n    config: {\n        email: {\n            addresses: [\"abc@gmail.com\"],\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nndresource = databricks.NotificationDestination(\"ndresource\",\n    display_name=\"Notification Destination\",\n    config={\n        \"email\": {\n            \"addresses\": [\"abc@gmail.com\"],\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ndresource = new Databricks.NotificationDestination(\"ndresource\", new()\n    {\n        DisplayName = \"Notification Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            Email = new Databricks.Inputs.NotificationDestinationConfigEmailArgs\n            {\n                Addresses = new[]\n                {\n                    \"abc@gmail.com\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"ndresource\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Notification Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tEmail: \u0026databricks.NotificationDestinationConfigEmailArgs{\n\t\t\t\t\tAddresses: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"abc@gmail.com\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport 
com.pulumi.databricks.NotificationDestinationArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigEmailArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ndresource = new NotificationDestination(\"ndresource\", NotificationDestinationArgs.builder()\n            .displayName(\"Notification Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .email(NotificationDestinationConfigEmailArgs.builder()\n                    .addresses(\"abc@gmail.com\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ndresource:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Notification Destination\n      config:\n        email:\n          addresses:\n            - abc@gmail.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n`Slack` notification destination:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ndresource = new databricks.NotificationDestination(\"ndresource\", {\n    displayName: \"Notification Destination\",\n    config: {\n        slack: {\n            url: \"https://hooks.slack.com/services/...\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nndresource = databricks.NotificationDestination(\"ndresource\",\n    display_name=\"Notification Destination\",\n    config={\n        \"slack\": {\n            \"url\": \"https://hooks.slack.com/services/...\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ndresource = new Databricks.NotificationDestination(\"ndresource\", new()\n    {\n        DisplayName = \"Notification Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            Slack = new Databricks.Inputs.NotificationDestinationConfigSlackArgs\n            {\n                Url = \"https://hooks.slack.com/services/...\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"ndresource\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Notification Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tSlack: \u0026databricks.NotificationDestinationConfigSlackArgs{\n\t\t\t\t\tUrl: pulumi.String(\"https://hooks.slack.com/services/...\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport 
com.pulumi.databricks.NotificationDestinationArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigSlackArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ndresource = new NotificationDestination(\"ndresource\", NotificationDestinationArgs.builder()\n            .displayName(\"Notification Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .slack(NotificationDestinationConfigSlackArgs.builder()\n                    .url(\"https://hooks.slack.com/services/...\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ndresource:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Notification Destination\n      config:\n        slack:\n          url: https://hooks.slack.com/services/...\n```\n\u003c!--End PulumiCodeChooser --\u003e\n`PagerDuty` notification destination:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ndresource = new databricks.NotificationDestination(\"ndresource\", {\n    displayName: \"Notification Destination\",\n    config: {\n        pagerduty: {\n            integrationKey: \"xxxxxx\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nndresource = databricks.NotificationDestination(\"ndresource\",\n    display_name=\"Notification Destination\",\n    config={\n        \"pagerduty\": {\n            \"integration_key\": \"xxxxxx\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ndresource = new Databricks.NotificationDestination(\"ndresource\", new()\n    {\n        DisplayName = \"Notification Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            Pagerduty = new Databricks.Inputs.NotificationDestinationConfigPagerdutyArgs\n            {\n                IntegrationKey = \"xxxxxx\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"ndresource\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Notification Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tPagerduty: \u0026databricks.NotificationDestinationConfigPagerdutyArgs{\n\t\t\t\t\tIntegrationKey: pulumi.String(\"xxxxxx\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport com.pulumi.databricks.NotificationDestinationArgs;\nimport 
com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigPagerdutyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ndresource = new NotificationDestination(\"ndresource\", NotificationDestinationArgs.builder()\n            .displayName(\"Notification Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .pagerduty(NotificationDestinationConfigPagerdutyArgs.builder()\n                    .integrationKey(\"xxxxxx\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ndresource:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Notification Destination\n      config:\n        pagerduty:\n          integrationKey: xxxxxx\n```\n\u003c!--End PulumiCodeChooser --\u003e\n`Microsoft Teams` notification destination:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ndresource = new databricks.NotificationDestination(\"ndresource\", {\n    displayName: \"Notification Destination\",\n    config: {\n        microsoftTeams: {\n            url: \"https://outlook.office.com/webhook/...\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nndresource = databricks.NotificationDestination(\"ndresource\",\n    display_name=\"Notification Destination\",\n    config={\n        \"microsoft_teams\": {\n            \"url\": \"https://outlook.office.com/webhook/...\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ndresource = new Databricks.NotificationDestination(\"ndresource\", new()\n    {\n        DisplayName = \"Notification Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            MicrosoftTeams = new Databricks.Inputs.NotificationDestinationConfigMicrosoftTeamsArgs\n            {\n                Url = \"https://outlook.office.com/webhook/...\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"ndresource\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Notification Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tMicrosoftTeams: \u0026databricks.NotificationDestinationConfigMicrosoftTeamsArgs{\n\t\t\t\t\tUrl: pulumi.String(\"https://outlook.office.com/webhook/...\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport 
com.pulumi.databricks.NotificationDestinationArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigMicrosoftTeamsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ndresource = new NotificationDestination(\"ndresource\", NotificationDestinationArgs.builder()\n            .displayName(\"Notification Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .microsoftTeams(NotificationDestinationConfigMicrosoftTeamsArgs.builder()\n                    .url(\"https://outlook.office.com/webhook/...\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ndresource:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Notification Destination\n      config:\n        microsoftTeams:\n          url: https://outlook.office.com/webhook/...\n```\n\u003c!--End PulumiCodeChooser --\u003e\n`Generic Webhook` notification destination:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ndresource = new databricks.NotificationDestination(\"ndresource\", {\n    displayName: \"Notification Destination\",\n    config: {\n        genericWebhook: {\n            url: \"https://example.com/webhook\",\n            username: \"username\",\n            password: \"password\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nndresource = databricks.NotificationDestination(\"ndresource\",\n    display_name=\"Notification Destination\",\n    config={\n        \"generic_webhook\": {\n            \"url\": \"https://example.com/webhook\",\n            \"username\": \"username\",\n            \"password\": \"password\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ndresource = new Databricks.NotificationDestination(\"ndresource\", new()\n    {\n        DisplayName = \"Notification Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            GenericWebhook = new Databricks.Inputs.NotificationDestinationConfigGenericWebhookArgs\n            {\n                Url = \"https://example.com/webhook\",\n                Username = \"username\",\n                Password = \"password\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"ndresource\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Notification Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tGenericWebhook: \u0026databricks.NotificationDestinationConfigGenericWebhookArgs{\n\t\t\t\t\tUrl:      
pulumi.String(\"https://example.com/webhook\"),\n\t\t\t\t\tUsername: pulumi.String(\"username\"),\n\t\t\t\t\tPassword: pulumi.String(\"password\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport com.pulumi.databricks.NotificationDestinationArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigGenericWebhookArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ndresource = new NotificationDestination(\"ndresource\", NotificationDestinationArgs.builder()\n            .displayName(\"Notification Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .genericWebhook(NotificationDestinationConfigGenericWebhookArgs.builder()\n                    .url(\"https://example.com/webhook\")\n                    .username(\"username\")\n                    .password(\"password\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ndresource:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Notification Destination\n      config:\n        genericWebhook:\n          url: https://example.com/webhook\n          username: username\n          password: password\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"config":{"$ref":"#/types/databricks:index/NotificationDestinationConfig:NotificationDestinationConfig","description":"The configuration of the Notification Destination. It must contain exactly one of the following blocks:\n"},"destinationType":{"type":"string","description":"the type of Notification Destination.\n"},"displayName":{"type":"string","description":"The display name of the Notification Destination.\n"},"providerConfig":{"$ref":"#/types/databricks:index/NotificationDestinationProviderConfig:NotificationDestinationProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"required":["destinationType","displayName"],"inputProperties":{"config":{"$ref":"#/types/databricks:index/NotificationDestinationConfig:NotificationDestinationConfig","description":"The configuration of the Notification Destination. It must contain exactly one of the following blocks:\n"},"destinationType":{"type":"string","description":"the type of Notification Destination.\n"},"displayName":{"type":"string","description":"The display name of the Notification Destination.\n"},"providerConfig":{"$ref":"#/types/databricks:index/NotificationDestinationProviderConfig:NotificationDestinationProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"}},"requiredInputs":["displayName"],"stateInputs":{"description":"Input properties used for looking up and filtering NotificationDestination resources.\n","properties":{"config":{"$ref":"#/types/databricks:index/NotificationDestinationConfig:NotificationDestinationConfig","description":"The configuration of the Notification Destination. It must contain exactly one of the following blocks:\n"},"destinationType":{"type":"string","description":"the type of Notification Destination.\n"},"displayName":{"type":"string","description":"The display name of the Notification Destination.\n"},"providerConfig":{"$ref":"#/types/databricks:index/NotificationDestinationProviderConfig:NotificationDestinationProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/oboToken:OboToken":{"description":"This resource creates [On-Behalf-Of tokens](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#manage-personal-access-tokens-for-a-service-principal) for a\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003ein Databricks workspaces on AWS and GCP.  In general it's best to use OAuth authentication using client ID and secret, and use this resource mostly for integrations that doesn't support OAuth.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e To create On-Behalf-Of token for Azure Service Principal, configure Pulumi provider to use Azure service principal's client ID and secret, and use \u003cspan pulumi-lang-nodejs=\"`databricks.Token`\" pulumi-lang-dotnet=\"`databricks.Token`\" pulumi-lang-go=\"`Token`\" pulumi-lang-python=\"`Token`\" pulumi-lang-yaml=\"`databricks.Token`\" pulumi-lang-java=\"`databricks.Token`\"\u003e`databricks.Token`\u003c/span\u003e resource to create a personal access token.\n\n## Example Usage\n\nCreating a token for a narrowly-scoped service principal, that would be the only one (besides admins) allowed to use PAT token in this given workspace, keeping your automated deployment highly secure.\n\n\u003e A given declaration of `databricks_permissions.token_usage` would OVERWRITE permissions to use PAT tokens from any existing groups with token usage permissions such as the \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. 
To avoid this, be sure to include any desired groups in additional \u003cspan pulumi-lang-nodejs=\"`accessControl`\" pulumi-lang-dotnet=\"`AccessControl`\" pulumi-lang-go=\"`accessControl`\" pulumi-lang-python=\"`access_control`\" pulumi-lang-yaml=\"`accessControl`\" pulumi-lang-java=\"`accessControl`\"\u003e`access_control`\u003c/span\u003e blocks in the Pulumi configuration file.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ServicePrincipal(\"this\", {displayName: \"Automation-only SP\"});\nconst tokenUsage = new databricks.Permissions(\"token_usage\", {\n    authorization: \"tokens\",\n    accessControls: [{\n        servicePrincipalName: _this.applicationId,\n        permissionLevel: \"CAN_USE\",\n    }],\n});\nconst thisOboToken = new databricks.OboToken(\"this\", {\n    applicationId: _this.applicationId,\n    comment: pulumi.interpolate`PAT on behalf of ${_this.displayName}`,\n    lifetimeSeconds: 3600,\n}, {\n    dependsOn: [tokenUsage],\n});\nexport const obo = thisOboToken.tokenValue;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ServicePrincipal(\"this\", display_name=\"Automation-only SP\")\ntoken_usage = databricks.Permissions(\"token_usage\",\n    authorization=\"tokens\",\n    access_controls=[{\n        \"service_principal_name\": this.application_id,\n        \"permission_level\": \"CAN_USE\",\n    }])\nthis_obo_token = databricks.OboToken(\"this\",\n    application_id=this.application_id,\n    comment=this.display_name.apply(lambda display_name: f\"PAT on behalf of {display_name}\"),\n    lifetime_seconds=3600,\n    opts = pulumi.ResourceOptions(depends_on=[token_usage]))\npulumi.export(\"obo\", this_obo_token.token_value)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ServicePrincipal(\"this\", new()\n    {\n        DisplayName = \"Automation-only SP\",\n    });\n\n    var tokenUsage = new Databricks.Permissions(\"token_usage\", new()\n    {\n        Authorization = \"tokens\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                ServicePrincipalName = @this.ApplicationId,\n                PermissionLevel = \"CAN_USE\",\n            },\n        },\n    });\n\n    var thisOboToken = new Databricks.OboToken(\"this\", new()\n    {\n        ApplicationId = @this.ApplicationId,\n        Comment = @this.DisplayName.Apply(displayName =\u003e $\"PAT on behalf of {displayName}\"),\n        LifetimeSeconds = 3600,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            tokenUsage,\n        },\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"obo\"] = thisOboToken.TokenValue,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewServicePrincipal(ctx, \"this\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation-only SP\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttokenUsage, err := databricks.NewPermissions(ctx, 
\"token_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tAuthorization: pulumi.String(\"tokens\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tServicePrincipalName: this.ApplicationId,\n\t\t\t\t\tPermissionLevel:      pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisOboToken, err := databricks.NewOboToken(ctx, \"this\", \u0026databricks.OboTokenArgs{\n\t\t\tApplicationId: this.ApplicationId,\n\t\t\tComment: this.DisplayName.ApplyT(func(displayName string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"PAT on behalf of %v\", displayName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tLifetimeSeconds: pulumi.Int(3600),\n\t\t}, pulumi.DependsOn([]pulumi.Resource{\n\t\t\ttokenUsage,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"obo\", thisOboToken.TokenValue)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport com.pulumi.databricks.OboToken;\nimport com.pulumi.databricks.OboTokenArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ServicePrincipal(\"this\", ServicePrincipalArgs.builder()\n            .displayName(\"Automation-only SP\")\n            .build());\n\n        var tokenUsage = new Permissions(\"tokenUsage\", PermissionsArgs.builder()\n            .authorization(\"tokens\")\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .servicePrincipalName(this_.applicationId())\n                .permissionLevel(\"CAN_USE\")\n                .build())\n            .build());\n\n        var thisOboToken = new OboToken(\"thisOboToken\", OboTokenArgs.builder()\n            .applicationId(this_.applicationId())\n            .comment(this_.displayName().applyValue(_displayName -\u003e String.format(\"PAT on behalf of %s\", _displayName)))\n            .lifetimeSeconds(3600)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(tokenUsage)\n                .build());\n\n        ctx.export(\"obo\", thisOboToken.tokenValue());\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Automation-only SP\n  tokenUsage:\n    type: databricks:Permissions\n    name: token_usage\n    properties:\n      authorization: tokens\n      accessControls:\n        - servicePrincipalName: ${this.applicationId}\n          permissionLevel: CAN_USE\n  thisOboToken:\n    type: databricks:OboToken\n    name: this\n    properties:\n      applicationId: ${this.applicationId}\n      comment: PAT on behalf of ${this.displayName}\n      lifetimeSeconds: 3600\n    options:\n      dependsOn:\n        - ${tokenUsage}\noutputs:\n  obo: ${thisOboToken.tokenValue}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating a token for a 
service principal with admin privileges\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ServicePrincipal(\"this\", {displayName: \"Pulumi\"});\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst thisGroupMember = new databricks.GroupMember(\"this\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: _this.id,\n});\nconst thisOboToken = new databricks.OboToken(\"this\", {\n    applicationId: _this.applicationId,\n    comment: pulumi.interpolate`PAT on behalf of ${_this.displayName}`,\n    lifetimeSeconds: 3600,\n}, {\n    dependsOn: [thisGroupMember],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ServicePrincipal(\"this\", display_name=\"Pulumi\")\nadmins = databricks.get_group(display_name=\"admins\")\nthis_group_member = databricks.GroupMember(\"this\",\n    group_id=admins.id,\n    member_id=this.id)\nthis_obo_token = databricks.OboToken(\"this\",\n    application_id=this.application_id,\n    comment=this.display_name.apply(lambda display_name: f\"PAT on behalf of {display_name}\"),\n    lifetime_seconds=3600,\n    opts = pulumi.ResourceOptions(depends_on=[this_group_member]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ServicePrincipal(\"this\", new()\n    {\n        DisplayName = \"Pulumi\",\n    });\n\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var thisGroupMember = new Databricks.GroupMember(\"this\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = @this.Id,\n    });\n\n    var thisOboToken = new Databricks.OboToken(\"this\", new()\n    {\n        ApplicationId = @this.ApplicationId,\n        Comment = @this.DisplayName.Apply(displayName =\u003e $\"PAT on behalf of {displayName}\"),\n        LifetimeSeconds = 3600,\n    }, new CustomResourceOptions\n    {\n        DependsOn =\n        {\n            thisGroupMember,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewServicePrincipal(ctx, \"this\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Pulumi\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGroupMember, err := databricks.NewGroupMember(ctx, \"this\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: this.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewOboToken(ctx, \"this\", \u0026databricks.OboTokenArgs{\n\t\t\tApplicationId: this.ApplicationId,\n\t\t\tComment: this.DisplayName.ApplyT(func(displayName string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"PAT on behalf of %v\", displayName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tLifetimeSeconds: pulumi.Int(3600),\n\t\t}, 
pulumi.DependsOn([]pulumi.Resource{\n\t\t\tthisGroupMember,\n\t\t}))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport com.pulumi.databricks.OboToken;\nimport com.pulumi.databricks.OboTokenArgs;\nimport com.pulumi.resources.CustomResourceOptions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ServicePrincipal(\"this\", ServicePrincipalArgs.builder()\n            .displayName(\"Pulumi\")\n            .build());\n\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        var thisGroupMember = new GroupMember(\"thisGroupMember\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(this_.id())\n            .build());\n\n        var thisOboToken = new OboToken(\"thisOboToken\", OboTokenArgs.builder()\n            .applicationId(this_.applicationId())\n            .comment(this_.displayName().applyValue(_displayName -\u003e String.format(\"PAT on behalf of %s\", _displayName)))\n            .lifetimeSeconds(3600)\n            .build(), CustomResourceOptions.builder()\n                .dependsOn(thisGroupMember)\n                .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Pulumi\n  thisGroupMember:\n    type: databricks:GroupMember\n    name: this\n    properties:\n      groupId: ${admins.id}\n      memberId: ${this.id}\n  thisOboToken:\n    type: databricks:OboToken\n    name: this\n    properties:\n      applicationId: ${this.applicationId}\n      comment: PAT on behalf of ${this.displayName}\n      lifetimeSeconds: 3600\n    options:\n      dependsOn:\n        - ${thisGroupMember}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" 
GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto manage [Service Principals](https://docs.databricks.com/administration-guide/users-groups/service-principals.html) that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"applicationId":{"type":"string","description":"Application ID of\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto create a PAT token for.\n"},"comment":{"type":"string","description":"Comment that describes the purpose of the token.\n"},"lifetimeSeconds":{"type":"integer","description":"The number of seconds before the token expires. Token resource is re-created when it expires. If no lifetime is specified, the token remains valid indefinitely.\n"},"providerConfig":{"$ref":"#/types/databricks:index/OboTokenProviderConfig:OboTokenProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"tokenValue":{"type":"string","description":"**Sensitive** value of the newly-created token.\n","secret":true}},"required":["applicationId","tokenValue"],"inputProperties":{"applicationId":{"type":"string","description":"Application ID of\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto create a PAT token for.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"Comment that describes the purpose of the token.\n","willReplaceOnChanges":true},"lifetimeSeconds":{"type":"integer","description":"The number of seconds before the token expires. Token resource is re-created when it expires. If no lifetime is specified, the token remains valid indefinitely.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/OboTokenProviderConfig:OboTokenProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"requiredInputs":["applicationId"],"stateInputs":{"description":"Input properties used for looking up and filtering OboToken resources.\n","properties":{"applicationId":{"type":"string","description":"Application ID of\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto create a PAT token for.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"Comment that describes the purpose of the token.\n","willReplaceOnChanges":true},"lifetimeSeconds":{"type":"integer","description":"The number of seconds before the token expires. Token resource is re-created when it expires. If no lifetime is specified, the token remains valid indefinitely.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/OboTokenProviderConfig:OboTokenProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"tokenValue":{"type":"string","description":"**Sensitive** value of the newly-created token.\n","secret":true}},"type":"object"}},"databricks:index/onlineStore:OnlineStore":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n","properties":{"capacity":{"type":"string","description":"The capacity of the online store. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the online store was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the online store\n"},"name":{"type":"string","description":"The name of the online store. 
This is the unique identifier for the online store\n"},"providerConfig":{"$ref":"#/types/databricks:index/OnlineStoreProviderConfig:OnlineStoreProviderConfig","description":"Configure the provider for management through account provider.\n"},"readReplicaCount":{"type":"integer","description":"The number of read replicas for the online store. Defaults to 0\n"},"state":{"type":"string","description":"(string) - The current state of the online store. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"usagePolicyId":{"type":"string","description":"The usage policy applied to the online store to track billing\n"}},"required":["capacity","creationTime","creator","name","state"],"inputProperties":{"capacity":{"type":"string","description":"The capacity of the online store. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"name":{"type":"string","description":"The name of the online store. This is the unique identifier for the online store\n"},"providerConfig":{"$ref":"#/types/databricks:index/OnlineStoreProviderConfig:OnlineStoreProviderConfig","description":"Configure the provider for management through account provider.\n"},"readReplicaCount":{"type":"integer","description":"The number of read replicas for the online store. Defaults to 0\n"},"usagePolicyId":{"type":"string","description":"The usage policy applied to the online store to track billing\n"}},"requiredInputs":["capacity"],"stateInputs":{"description":"Input properties used for looking up and filtering OnlineStore resources.\n","properties":{"capacity":{"type":"string","description":"The capacity of the online store. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n"},"creationTime":{"type":"string","description":"(string) - The timestamp when the online store was created\n"},"creator":{"type":"string","description":"(string) - The email of the creator of the online store\n"},"name":{"type":"string","description":"The name of the online store. This is the unique identifier for the online store\n"},"providerConfig":{"$ref":"#/types/databricks:index/OnlineStoreProviderConfig:OnlineStoreProviderConfig","description":"Configure the provider for management through account provider.\n"},"readReplicaCount":{"type":"integer","description":"The number of read replicas for the online store. Defaults to 0\n"},"state":{"type":"string","description":"(string) - The current state of the online store. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n"},"usagePolicyId":{"type":"string","description":"The usage policy applied to the online store to track billing\n"}},"type":"object"}},"databricks:index/onlineTable:OnlineTable":{"description":"This resource allows you to create [Online Table](https://docs.databricks.com/en/machine-learning/feature-store/online-tables.html) in Databricks.  An online table is a read-only copy of a Delta Table that is stored in row-oriented format optimized for online access. Online tables are fully serverless tables that auto-scale throughput capacity with the request load and provide low latency and high throughput access to data of any scale. 
Online tables are designed to work with Databricks Model Serving, Feature Serving, and retrieval-augmented generation (RAG) applications where they are used for fast data lookups.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.OnlineTable(\"this\", {\n    name: \"main.default.online_table\",\n    spec: {\n        sourceTableFullName: \"main.default.source_table\",\n        primaryKeyColumns: [\"id\"],\n        runTriggered: {},\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.OnlineTable(\"this\",\n    name=\"main.default.online_table\",\n    spec={\n        \"source_table_full_name\": \"main.default.source_table\",\n        \"primary_key_columns\": [\"id\"],\n        \"run_triggered\": {},\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.OnlineTable(\"this\", new()\n    {\n        Name = \"main.default.online_table\",\n        Spec = new Databricks.Inputs.OnlineTableSpecArgs\n        {\n            SourceTableFullName = \"main.default.source_table\",\n            PrimaryKeyColumns = new[]\n            {\n                \"id\",\n            },\n            RunTriggered = null,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewOnlineTable(ctx, \"this\", \u0026databricks.OnlineTableArgs{\n\t\t\tName: pulumi.String(\"main.default.online_table\"),\n\t\t\tSpec: \u0026databricks.OnlineTableSpecArgs{\n\t\t\t\tSourceTableFullName: pulumi.String(\"main.default.source_table\"),\n\t\t\t\tPrimaryKeyColumns: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"id\"),\n\t\t\t\t},\n\t\t\t\tRunTriggered: \u0026databricks.OnlineTableSpecRunTriggeredArgs{},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.OnlineTable;\nimport com.pulumi.databricks.OnlineTableArgs;\nimport com.pulumi.databricks.inputs.OnlineTableSpecArgs;\nimport com.pulumi.databricks.inputs.OnlineTableSpecRunTriggeredArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new OnlineTable(\"this\", OnlineTableArgs.builder()\n            .name(\"main.default.online_table\")\n            .spec(OnlineTableSpecArgs.builder()\n                .sourceTableFullName(\"main.default.source_table\")\n                .primaryKeyColumns(\"id\")\n                .runTriggered(OnlineTableSpecRunTriggeredArgs.builder()\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:OnlineTable\n    properties:\n      name: 
main.default.online_table\n      spec:\n        sourceTableFullName: main.default.source_table\n        primaryKeyColumns:\n          - id\n        runTriggered: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"name":{"type":"string","description":"3-level name of the Online Table to create.\n"},"providerConfig":{"$ref":"#/types/databricks:index/OnlineTableProviderConfig:OnlineTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"spec":{"$ref":"#/types/databricks:index/OnlineTableSpec:OnlineTableSpec","description":"object containing specification of the online table:\n"},"statuses":{"type":"array","items":{"$ref":"#/types/databricks:index/OnlineTableStatus:OnlineTableStatus"},"description":"object describing status of the online table:\n"},"tableServingUrl":{"type":"string","description":"Data serving REST API URL for this table.\n"},"unityCatalogProvisioningState":{"type":"string","description":"The provisioning state of the online table entity in Unity Catalog. This is distinct from the state of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline may be in \"PROVISIONING\" as it runs asynchronously).\n"}},"required":["name","statuses","tableServingUrl","unityCatalogProvisioningState"],"inputProperties":{"name":{"type":"string","description":"3-level name of the Online Table to create.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/OnlineTableProviderConfig:OnlineTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"spec":{"$ref":"#/types/databricks:index/OnlineTableSpec:OnlineTableSpec","description":"object containing specification of the online table:\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering OnlineTable resources.\n","properties":{"name":{"type":"string","description":"3-level name of the Online Table to create.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/OnlineTableProviderConfig:OnlineTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"spec":{"$ref":"#/types/databricks:index/OnlineTableSpec:OnlineTableSpec","description":"object containing specification of the online table:\n","willReplaceOnChanges":true},"statuses":{"type":"array","items":{"$ref":"#/types/databricks:index/OnlineTableStatus:OnlineTableStatus"},"description":"object describing status of the online table:\n"},"tableServingUrl":{"type":"string","description":"Data serving REST API URL for this table.\n"},"unityCatalogProvisioningState":{"type":"string","description":"The provisioning state of the online table entity in Unity Catalog. This is distinct from the state of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline may be in \"PROVISIONING\" as it runs asynchronously).\n"}},"type":"object"}},"databricks:index/permissionAssignment:PermissionAssignment":{"description":"This resource is used to assign account-level users, service principals and groups to a Databricks workspace. 
To configure additional entitlements such as cluster creation, please use\u003cspan pulumi-lang-nodejs=\" databricks.Entitlements\n\" pulumi-lang-dotnet=\" databricks.Entitlements\n\" pulumi-lang-go=\" Entitlements\n\" pulumi-lang-python=\" Entitlements\n\" pulumi-lang-yaml=\" databricks.Entitlements\n\" pulumi-lang-java=\" databricks.Entitlements\n\"\u003e databricks.Entitlements\n\u003c/span\u003e\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n### Assign using \u003cspan pulumi-lang-nodejs=\"`principalId`\" pulumi-lang-dotnet=\"`PrincipalId`\" pulumi-lang-go=\"`principalId`\" pulumi-lang-python=\"`principal_id`\" pulumi-lang-yaml=\"`principalId`\" pulumi-lang-java=\"`principalId`\"\u003e`principal_id`\u003c/span\u003e\n\nIn workspace context, adding account-level user to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Use the account provider\nconst me = databricks.getUser({\n    userName: \"me@example.com\",\n});\nconst addUser = new databricks.PermissionAssignment(\"add_user\", {\n    principalId: me.then(me =\u003e me.id),\n    permissions: [\"USER\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# Use the account provider\nme = databricks.get_user(user_name=\"me@example.com\")\nadd_user = databricks.PermissionAssignment(\"add_user\",\n    principal_id=me.id,\n    permissions=[\"USER\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // Use the account provider\n    var me = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var addUser = new Databricks.PermissionAssignment(\"add_user\", new()\n    {\n        PrincipalId = me.Apply(getUserResult =\u003e getUserResult.Id),\n        Permissions = new[]\n        {\n            \"USER\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// Use the account provider\n\t\tme, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"me@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissionAssignment(ctx, \"add_user\", \u0026databricks.PermissionAssignmentArgs{\n\t\t\tPrincipalId: pulumi.String(me.Id),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USER\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        // Use the account provider\n        final var me = 
DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var addUser = new PermissionAssignment(\"addUser\", PermissionAssignmentArgs.builder()\n            .principalId(me.id())\n            .permissions(\"USER\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  addUser:\n    type: databricks:PermissionAssignment\n    name: add_user\n    properties:\n      principalId: ${me.id}\n      permissions:\n        - USER\nvariables:\n  # Use the account provider\n  me:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: me@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn workspace context, adding account-level service principal to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Use the account provider\nconst sp = databricks.getServicePrincipal({\n    displayName: \"Automation-only SP\",\n});\nconst addAdminSpn = new databricks.PermissionAssignment(\"add_admin_spn\", {\n    principalId: sp.then(sp =\u003e sp.id),\n    permissions: [\"ADMIN\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# Use the account provider\nsp = databricks.get_service_principal(display_name=\"Automation-only SP\")\nadd_admin_spn = databricks.PermissionAssignment(\"add_admin_spn\",\n    principal_id=sp.id,\n    permissions=[\"ADMIN\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // Use the account provider\n    var sp = Databricks.GetServicePrincipal.Invoke(new()\n    {\n        DisplayName = \"Automation-only SP\",\n    });\n\n    var addAdminSpn = new Databricks.PermissionAssignment(\"add_admin_spn\", new()\n    {\n        PrincipalId = sp.Apply(getServicePrincipalResult =\u003e getServicePrincipalResult.Id),\n        Permissions = new[]\n        {\n            \"ADMIN\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// Use the account provider\n\t\tsp, err := databricks.LookupServicePrincipal(ctx, \u0026databricks.LookupServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.StringRef(\"Automation-only SP\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissionAssignment(ctx, \"add_admin_spn\", \u0026databricks.PermissionAssignmentArgs{\n\t\t\tPrincipalId: pulumi.String(sp.Id),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"ADMIN\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetServicePrincipalArgs;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    
public static void stack(Context ctx) {\n        // Use the account provider\n        final var sp = DatabricksFunctions.getServicePrincipal(GetServicePrincipalArgs.builder()\n            .displayName(\"Automation-only SP\")\n            .build());\n\n        var addAdminSpn = new PermissionAssignment(\"addAdminSpn\", PermissionAssignmentArgs.builder()\n            .principalId(sp.id())\n            .permissions(\"ADMIN\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  addAdminSpn:\n    type: databricks:PermissionAssignment\n    name: add_admin_spn\n    properties:\n      principalId: ${sp.id}\n      permissions:\n        - ADMIN\nvariables:\n  # Use the account provider\n  sp:\n    fn::invoke:\n      function: databricks:getServicePrincipal\n      arguments:\n        displayName: Automation-only SP\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn workspace context, adding account-level group to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Use the account provider\nconst accountLevel = databricks.getGroup({\n    displayName: \"example-group\",\n});\n// Use the workspace provider\nconst _this = new databricks.PermissionAssignment(\"this\", {\n    principalId: accountLevel.then(accountLevel =\u003e accountLevel.id),\n    permissions: [\"USER\"],\n});\nconst workspaceLevel = databricks.getGroup({\n    displayName: \"example-group\",\n});\nexport const databricksGroupId = workspaceLevel.then(workspaceLevel =\u003e workspaceLevel.id);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# Use the account provider\naccount_level = databricks.get_group(display_name=\"example-group\")\n# Use the workspace provider\nthis = databricks.PermissionAssignment(\"this\",\n    principal_id=account_level.id,\n    permissions=[\"USER\"])\nworkspace_level = databricks.get_group(display_name=\"example-group\")\npulumi.export(\"databricksGroupId\", workspace_level.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // Use the account provider\n    var accountLevel = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"example-group\",\n    });\n\n    // Use the workspace provider\n    var @this = new Databricks.PermissionAssignment(\"this\", new()\n    {\n        PrincipalId = accountLevel.Apply(getGroupResult =\u003e getGroupResult.Id),\n        Permissions = new[]\n        {\n            \"USER\",\n        },\n    });\n\n    var workspaceLevel = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"example-group\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"databricksGroupId\"] = workspaceLevel.Apply(getGroupResult =\u003e getGroupResult.Id),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// Use the account provider\n\t\taccountLevel, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"example-group\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Use the workspace provider\n\t\t_, err = databricks.NewPermissionAssignment(ctx, \"this\", \u0026databricks.PermissionAssignmentArgs{\n\t\t\tPrincipalId: 
pulumi.String(accountLevel.Id),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USER\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tworkspaceLevel, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"example-group\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"databricksGroupId\", workspaceLevel.Id)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        // Use the account provider\n        final var accountLevel = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"example-group\")\n            .build());\n\n        // Use the workspace provider\n        var this_ = new PermissionAssignment(\"this\", PermissionAssignmentArgs.builder()\n            .principalId(accountLevel.id())\n            .permissions(\"USER\")\n            .build());\n\n        final var workspaceLevel = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"example-group\")\n            .build());\n\n        ctx.export(\"databricksGroupId\", workspaceLevel.id());\n    }\n}\n```\n```yaml\nresources:\n  # Use the workspace provider\n  this:\n    type: databricks:PermissionAssignment\n    properties:\n      principalId: ${accountLevel.id}\n      permissions:\n        - USER\nvariables:\n  # Use the account provider\n  accountLevel:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: example-group\n  workspaceLevel:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: example-group\noutputs:\n  databricksGroupId: ${workspaceLevel.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Assign using \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`groupName`\" pulumi-lang-dotnet=\"`GroupName`\" pulumi-lang-go=\"`groupName`\" pulumi-lang-python=\"`group_name`\" pulumi-lang-yaml=\"`groupName`\" pulumi-lang-java=\"`groupName`\"\u003e`group_name`\u003c/span\u003e, or \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e\n\nIn workspace context, adding account-level user to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst addUser = new databricks.PermissionAssignment(\"add_user\", {\n    userName: \"me@example.com\",\n    permissions: 
[\"USER\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadd_user = databricks.PermissionAssignment(\"add_user\",\n    user_name=\"me@example.com\",\n    permissions=[\"USER\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var addUser = new Databricks.PermissionAssignment(\"add_user\", new()\n    {\n        UserName = \"me@example.com\",\n        Permissions = new[]\n        {\n            \"USER\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissionAssignment(ctx, \"add_user\", \u0026databricks.PermissionAssignmentArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USER\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var addUser = new PermissionAssignment(\"addUser\", PermissionAssignmentArgs.builder()\n            .userName(\"me@example.com\")\n            .permissions(\"USER\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  addUser:\n    type: databricks:PermissionAssignment\n    name: add_user\n    properties:\n      userName: me@example.com\n      permissions:\n        - USER\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn workspace context, adding account-level service principal to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst addAdminSpn = new databricks.PermissionAssignment(\"add_admin_spn\", {\n    servicePrincipalName: \"00000000-0000-0000-0000-000000000000\",\n    permissions: [\"ADMIN\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadd_admin_spn = databricks.PermissionAssignment(\"add_admin_spn\",\n    service_principal_name=\"00000000-0000-0000-0000-000000000000\",\n    permissions=[\"ADMIN\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var addAdminSpn = new Databricks.PermissionAssignment(\"add_admin_spn\", new()\n    {\n        ServicePrincipalName = \"00000000-0000-0000-0000-000000000000\",\n        Permissions = new[]\n        {\n            \"ADMIN\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissionAssignment(ctx, \"add_admin_spn\", 
\u0026databricks.PermissionAssignmentArgs{\n\t\t\tServicePrincipalName: pulumi.String(\"00000000-0000-0000-0000-000000000000\"),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"ADMIN\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var addAdminSpn = new PermissionAssignment(\"addAdminSpn\", PermissionAssignmentArgs.builder()\n            .servicePrincipalName(\"00000000-0000-0000-0000-000000000000\")\n            .permissions(\"ADMIN\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  addAdminSpn:\n    type: databricks:PermissionAssignment\n    name: add_admin_spn\n    properties:\n      servicePrincipalName: 00000000-0000-0000-0000-000000000000\n      permissions:\n        - ADMIN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nIn workspace context, adding account-level group to a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PermissionAssignment(\"this\", {\n    groupName: \"example-group\",\n    permissions: [\"USER\"],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PermissionAssignment(\"this\",\n    group_name=\"example-group\",\n    permissions=[\"USER\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.PermissionAssignment(\"this\", new()\n    {\n        GroupName = \"example-group\",\n        Permissions = new[]\n        {\n            \"USER\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissionAssignment(ctx, \"this\", \u0026databricks.PermissionAssignmentArgs{\n\t\t\tGroupName: pulumi.String(\"example-group\"),\n\t\t\tPermissions: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"USER\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PermissionAssignment;\nimport com.pulumi.databricks.PermissionAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PermissionAssignment(\"this\", PermissionAssignmentArgs.builder()\n            .groupName(\"example-group\")\n            
.permissions(\"USER\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PermissionAssignment\n    properties:\n      groupName: example-group\n      permissions:\n        - USER\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsPermissionAssignment \" pulumi-lang-dotnet=\" databricks.MwsPermissionAssignment \" pulumi-lang-go=\" MwsPermissionAssignment \" pulumi-lang-python=\" MwsPermissionAssignment \" pulumi-lang-yaml=\" databricks.MwsPermissionAssignment \" pulumi-lang-java=\" databricks.MwsPermissionAssignment \"\u003e databricks.MwsPermissionAssignment \u003c/span\u003eto manage permission assignment from an account context\n\n","properties":{"displayName":{"type":"string","description":"the display name of the assigned principal.\n"},"groupName":{"type":"string","description":"the group name to assign to a workspace.\n"},"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n"},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. 
The principal ID can be retrieved using the account-level SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources with account API (and has to be an account admin). A more sensible approach is to retrieve the list of \u003cspan pulumi-lang-nodejs=\"`principalId`\" pulumi-lang-dotnet=\"`PrincipalId`\" pulumi-lang-go=\"`principalId`\" pulumi-lang-python=\"`principal_id`\" pulumi-lang-yaml=\"`principalId`\" pulumi-lang-java=\"`principalId`\"\u003e`principal_id`\u003c/span\u003e as outputs from another Pulumi stack.\n"},"providerConfig":{"$ref":"#/types/databricks:index/PermissionAssignmentProviderConfig:PermissionAssignmentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servicePrincipalName":{"type":"string","description":"the application ID of service principal to assign to a workspace.\n"},"userName":{"type":"string","description":"the user name (email) to assign to a workspace.\n"}},"required":["displayName","groupName","permissions","principalId","servicePrincipalName","userName"],"inputProperties":{"groupName":{"type":"string","description":"the group name to assign to a workspace.\n","willReplaceOnChanges":true},"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n"},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. 
The principal ID can be retrieved using the account-level SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources with account API (and has to be an account admin). A more sensible approach is to retrieve the list of \u003cspan pulumi-lang-nodejs=\"`principalId`\" pulumi-lang-dotnet=\"`PrincipalId`\" pulumi-lang-go=\"`principalId`\" pulumi-lang-python=\"`principal_id`\" pulumi-lang-yaml=\"`principalId`\" pulumi-lang-java=\"`principalId`\"\u003e`principal_id`\u003c/span\u003e as outputs from another Pulumi stack.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/PermissionAssignmentProviderConfig:PermissionAssignmentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servicePrincipalName":{"type":"string","description":"the application ID of service principal to assign to a workspace.\n","willReplaceOnChanges":true},"userName":{"type":"string","description":"the user name (email) to assign to a workspace.\n","willReplaceOnChanges":true}},"requiredInputs":["permissions"],"stateInputs":{"description":"Input properties used for looking up and filtering PermissionAssignment resources.\n","properties":{"displayName":{"type":"string","description":"the display name of the assigned principal.\n"},"groupName":{"type":"string","description":"the group name to assign to a workspace.\n","willReplaceOnChanges":true},"permissions":{"type":"array","items":{"type":"string"},"description":"The list of workspace permissions to assign to the principal:\n* `\"USER\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e group. This gives basic workspace access.\n* `\"ADMIN\"` - Adds principal to the workspace \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e group. This gives workspace admin privileges to manage users and groups, workspace configurations, and more.\n"},"principalId":{"type":"string","description":"Databricks ID of the user, service principal, or group. 
The principal ID can be retrieved using the account-level SCIM API, or using databricks_user,\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata sources with account API (and has to be an account admin). A more sensible approach is to retrieve the list of \u003cspan pulumi-lang-nodejs=\"`principalId`\" pulumi-lang-dotnet=\"`PrincipalId`\" pulumi-lang-go=\"`principalId`\" pulumi-lang-python=\"`principal_id`\" pulumi-lang-yaml=\"`principalId`\" pulumi-lang-java=\"`principalId`\"\u003e`principal_id`\u003c/span\u003e as outputs from another Pulumi stack.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/PermissionAssignmentProviderConfig:PermissionAssignmentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"servicePrincipalName":{"type":"string","description":"the application ID of service principal to assign to a workspace.\n","willReplaceOnChanges":true},"userName":{"type":"string","description":"the user name (email) to assign to a workspace.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/permissions:Permissions":{"description":"This resource allows you to generically manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspaces. It ensures that only _admins_, _authenticated principal_ and those declared within \u003cspan pulumi-lang-nodejs=\"`accessControl`\" pulumi-lang-dotnet=\"`AccessControl`\" pulumi-lang-go=\"`accessControl`\" pulumi-lang-python=\"`access_control`\" pulumi-lang-yaml=\"`accessControl`\" pulumi-lang-java=\"`accessControl`\"\u003e`access_control`\u003c/span\u003e blocks would have specified access. It is not possible to remove management rights from _admins_ group.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e This resource is _authoritative_ for permissions on objects. Configuring this resource for an object will **OVERWRITE** any existing permissions of the same type unless imported, and changes made outside of Pulumi will be reset.\n\n\u003e It is not possible to lower permissions for \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e, so Databricks Pulumi Provider removes those \u003cspan pulumi-lang-nodejs=\"`accessControl`\" pulumi-lang-dotnet=\"`AccessControl`\" pulumi-lang-go=\"`accessControl`\" pulumi-lang-python=\"`access_control`\" pulumi-lang-yaml=\"`accessControl`\" pulumi-lang-java=\"`accessControl`\"\u003e`access_control`\u003c/span\u003e blocks automatically.\n\n\u003e If multiple permission levels are specified for an identity (e.g. 
`CAN_RESTART` and `CAN_MANAGE` for a cluster), only the highest level permission is returned and will cause permanent drift.\n\n\u003e To manage access control on service principals, use databricks_access_control_rule_set.\n\n## Cluster usage\n\nIt's possible to separate [cluster access control](https://docs.databricks.com/security/access-control/cluster-acl.html) to three different permission levels: `CAN_ATTACH_TO`, `CAN_RESTART` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst ds = new databricks.Group(\"ds\", {displayName: \"Data Science\"});\nconst latest = databricks.getSparkVersion({});\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst sharedAutoscaling = new databricks.Cluster(\"shared_autoscaling\", {\n    clusterName: \"Shared Autoscaling\",\n    sparkVersion: latest.then(latest =\u003e latest.id),\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    autoterminationMinutes: 60,\n    autoscale: {\n        minWorkers: 1,\n        maxWorkers: 10,\n    },\n});\nconst clusterUsage = new databricks.Permissions(\"cluster_usage\", {\n    clusterId: sharedAutoscaling.id,\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_ATTACH_TO\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_RESTART\",\n        },\n        {\n            groupName: ds.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nds = databricks.Group(\"ds\", display_name=\"Data Science\")\nlatest = databricks.get_spark_version()\nsmallest = databricks.get_node_type(local_disk=True)\nshared_autoscaling = databricks.Cluster(\"shared_autoscaling\",\n    cluster_name=\"Shared Autoscaling\",\n    spark_version=latest.id,\n    node_type_id=smallest.id,\n    autotermination_minutes=60,\n    autoscale={\n        \"min_workers\": 1,\n        \"max_workers\": 10,\n    })\ncluster_usage = databricks.Permissions(\"cluster_usage\",\n    cluster_id=shared_autoscaling.id,\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_ATTACH_TO\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_RESTART\",\n        },\n        {\n            \"group_name\": ds.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var latest = Databricks.GetSparkVersion.Invoke();\n\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n    
    LocalDisk = true,\n    });\n\n    var sharedAutoscaling = new Databricks.Cluster(\"shared_autoscaling\", new()\n    {\n        ClusterName = \"Shared Autoscaling\",\n        SparkVersion = latest.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 60,\n        Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs\n        {\n            MinWorkers = 1,\n            MaxWorkers = 10,\n        },\n    });\n\n    var clusterUsage = new Databricks.Permissions(\"cluster_usage\", new()\n    {\n        ClusterId = sharedAutoscaling.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_ATTACH_TO\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_RESTART\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = ds.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Science\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatest, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsharedAutoscaling, err := databricks.NewCluster(ctx, \"shared_autoscaling\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Shared Autoscaling\"),\n\t\t\tSparkVersion:           pulumi.String(latest.Id),\n\t\t\tNodeTypeId:             pulumi.String(smallest.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(60),\n\t\t\tAutoscale: \u0026databricks.ClusterAutoscaleArgs{\n\t\t\t\tMinWorkers: pulumi.Int(1),\n\t\t\t\tMaxWorkers: pulumi.Int(10),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"cluster_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tClusterId: sharedAutoscaling.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_ATTACH_TO\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: 
pulumi.String(\"CAN_RESTART\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       ds.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterAutoscaleArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        final var latest = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .build());\n\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        var sharedAutoscaling = new Cluster(\"sharedAutoscaling\", ClusterArgs.builder()\n            .clusterName(\"Shared Autoscaling\")\n            .sparkVersion(latest.id())\n            .nodeTypeId(smallest.id())\n            .autoterminationMinutes(60)\n            .autoscale(ClusterAutoscaleArgs.builder()\n                .minWorkers(1)\n                .maxWorkers(10)\n                .build())\n            .build());\n\n        var clusterUsage = new Permissions(\"clusterUsage\", PermissionsArgs.builder()\n            .clusterId(sharedAutoscaling.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_ATTACH_TO\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_RESTART\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(ds.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  ds:\n    type: databricks:Group\n    properties:\n      displayName: Data Science\n  sharedAutoscaling:\n    type: databricks:Cluster\n    name: shared_autoscaling\n    properties:\n      
clusterName: Shared Autoscaling\n      sparkVersion: ${latest.id}\n      nodeTypeId: ${smallest.id}\n      autoterminationMinutes: 60\n      autoscale:\n        minWorkers: 1\n        maxWorkers: 10\n  clusterUsage:\n    type: databricks:Permissions\n    name: cluster_usage\n    properties:\n      clusterId: ${sharedAutoscaling.id}\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_ATTACH_TO\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_RESTART\n        - groupName: ${ds.displayName}\n          permissionLevel: CAN_MANAGE\nvariables:\n  latest:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments: {}\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Cluster Policy usage\n\nCluster policies allow the creation of clusters that match a [given policy](https://docs.databricks.com/administration-guide/clusters/policies.html). It's possible to assign the `CAN_USE` permission to users and groups:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ds = new databricks.Group(\"ds\", {displayName: \"Data Science\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst somethingSimple = new databricks.ClusterPolicy(\"something_simple\", {\n    name: \"Some simple policy\",\n    definition: JSON.stringify({\n        \"spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL\": {\n            type: \"forbidden\",\n        },\n        \"spark_conf.spark.secondkey\": {\n            type: \"forbidden\",\n        },\n    }),\n});\nconst policyUsage = new databricks.Permissions(\"policy_usage\", {\n    clusterPolicyId: somethingSimple.id,\n    accessControls: [\n        {\n            groupName: ds.displayName,\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_USE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\nds = databricks.Group(\"ds\", display_name=\"Data Science\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nsomething_simple = databricks.ClusterPolicy(\"something_simple\",\n    name=\"Some simple policy\",\n    definition=json.dumps({\n        \"spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL\": {\n            \"type\": \"forbidden\",\n        },\n        \"spark_conf.spark.secondkey\": {\n            \"type\": \"forbidden\",\n        },\n    }))\npolicy_usage = databricks.Permissions(\"policy_usage\",\n    cluster_policy_id=something_simple.id,\n    access_controls=[\n        {\n            \"group_name\": ds.display_name,\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_USE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"Data Science\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var somethingSimple = new 
Databricks.ClusterPolicy(\"something_simple\", new()\n    {\n        Name = \"Some simple policy\",\n        Definition = JsonSerializer.Serialize(new Dictionary\u003cstring, object?\u003e\n        {\n            [\"spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL\"] = new Dictionary\u003cstring, object?\u003e\n            {\n                [\"type\"] = \"forbidden\",\n            },\n            [\"spark_conf.spark.secondkey\"] = new Dictionary\u003cstring, object?\u003e\n            {\n                [\"type\"] = \"forbidden\",\n            },\n        }),\n    });\n\n    var policyUsage = new Databricks.Permissions(\"policy_usage\", new()\n    {\n        ClusterPolicyId = somethingSimple.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = ds.DisplayName,\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_USE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Data Science\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\ttmpJSON0, err := json.Marshal(map[string]interface{}{\n\t\t\t\"spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL\": map[string]interface{}{\n\t\t\t\t\"type\": \"forbidden\",\n\t\t\t},\n\t\t\t\"spark_conf.spark.secondkey\": map[string]interface{}{\n\t\t\t\t\"type\": \"forbidden\",\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\tsomethingSimple, err := databricks.NewClusterPolicy(ctx, \"something_simple\", \u0026databricks.ClusterPolicyArgs{\n\t\t\tName:       pulumi.String(\"Some simple policy\"),\n\t\t\tDefinition: pulumi.String(json0),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"policy_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tClusterPolicyId: somethingSimple.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       ds.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.ClusterPolicy;\nimport com.pulumi.databricks.ClusterPolicyArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport static 
com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"Data Science\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var somethingSimple = new ClusterPolicy(\"somethingSimple\", ClusterPolicyArgs.builder()\n            .name(\"Some simple policy\")\n            .definition(serializeJson(\n                jsonObject(\n                    jsonProperty(\"spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL\", jsonObject(\n                        jsonProperty(\"type\", \"forbidden\")\n                    )),\n                    jsonProperty(\"spark_conf.spark.secondkey\", jsonObject(\n                        jsonProperty(\"type\", \"forbidden\")\n                    ))\n                )))\n            .build());\n\n        var policyUsage = new Permissions(\"policyUsage\", PermissionsArgs.builder()\n            .clusterPolicyId(somethingSimple.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(ds.displayName())\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_USE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ds:\n    type: databricks:Group\n    properties:\n      displayName: Data Science\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  somethingSimple:\n    type: databricks:ClusterPolicy\n    name: something_simple\n    properties:\n      name: Some simple policy\n      definition:\n        fn::toJSON:\n          spark_conf.spark.hadoop.javax.jdo.option.ConnectionURL:\n            type: forbidden\n          spark_conf.spark.secondkey:\n            type: forbidden\n  policyUsage:\n    type: databricks:Permissions\n    name: policy_usage\n    properties:\n      clusterPolicyId: ${somethingSimple.id}\n      accessControls:\n        - groupName: ${ds.displayName}\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_USE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Instance Pool usage\n\nInstance Pools access control [allows you to](https://docs.databricks.com/security/access-control/pool-acl.html) assign `CAN_ATTACH_TO` and `CAN_MANAGE` permissions to users, service principals, and groups. 
It's also possible to grant the ability to create Instance Pools to individual users, groups, and service principals.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst _this = new databricks.InstancePool(\"this\", {\n    instancePoolName: \"Reserved Instances\",\n    idleInstanceAutoterminationMinutes: 60,\n    nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n    minIdleInstances: 0,\n    maxCapacity: 10,\n});\nconst poolUsage = new databricks.Permissions(\"pool_usage\", {\n    instancePoolId: _this.id,\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_ATTACH_TO\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nsmallest = databricks.get_node_type(local_disk=True)\nthis = databricks.InstancePool(\"this\",\n    instance_pool_name=\"Reserved Instances\",\n    idle_instance_autotermination_minutes=60,\n    node_type_id=smallest.id,\n    min_idle_instances=0,\n    max_capacity=10)\npool_usage = databricks.Permissions(\"pool_usage\",\n    instance_pool_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_ATTACH_TO\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var @this = new Databricks.InstancePool(\"this\", new()\n    {\n        InstancePoolName = \"Reserved Instances\",\n        IdleInstanceAutoterminationMinutes = 60,\n        NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        MinIdleInstances = 0,\n        MaxCapacity = 10,\n    });\n\n    var poolUsage = new Databricks.Permissions(\"pool_usage\", new()\n    {\n        InstancePoolId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_ATTACH_TO\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error 
{\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewInstancePool(ctx, \"this\", \u0026databricks.InstancePoolArgs{\n\t\t\tInstancePoolName:                   pulumi.String(\"Reserved Instances\"),\n\t\t\tIdleInstanceAutoterminationMinutes: pulumi.Int(60),\n\t\t\tNodeTypeId:                         pulumi.String(smallest.Id),\n\t\t\tMinIdleInstances:                   pulumi.Int(0),\n\t\t\tMaxCapacity:                        pulumi.Int(10),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"pool_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tInstancePoolId: this.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_ATTACH_TO\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.InstancePool;\nimport com.pulumi.databricks.InstancePoolArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        var this_ = new InstancePool(\"this\", InstancePoolArgs.builder()\n            .instancePoolName(\"Reserved Instances\")\n            .idleInstanceAutoterminationMinutes(60)\n            .nodeTypeId(smallest.id())\n            .minIdleInstances(0)\n            .maxCapacity(10)\n            .build());\n\n        var poolUsage = new Permissions(\"poolUsage\", PermissionsArgs.builder()\n            .instancePoolId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    
.permissionLevel(\"CAN_ATTACH_TO\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:InstancePool\n    properties:\n      instancePoolName: Reserved Instances\n      idleInstanceAutoterminationMinutes: 60\n      nodeTypeId: ${smallest.id}\n      minIdleInstances: 0\n      maxCapacity: 10\n  poolUsage:\n    type: databricks:Permissions\n    name: pool_usage\n    properties:\n      instancePoolId: ${this.id}\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_ATTACH_TO\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\nvariables:\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Job usage\n\nThere are four assignable [permission levels](https://docs.databricks.com/security/access-control/jobs-acl.html#job-permissions) for databricks_job: `CAN_VIEW`, `CAN_MANAGE_RUN`, `IS_OWNER`, and `CAN_MANAGE`. Admins are granted the `CAN_MANAGE` permission by default, and they can assign that permission to non-admin users, and service principals.\n\n- The creator of a job has `IS_OWNER` permission. Destroying \u003cspan pulumi-lang-nodejs=\"`databricks.Permissions`\" pulumi-lang-dotnet=\"`databricks.Permissions`\" pulumi-lang-go=\"`Permissions`\" pulumi-lang-python=\"`Permissions`\" pulumi-lang-yaml=\"`databricks.Permissions`\" pulumi-lang-java=\"`databricks.Permissions`\"\u003e`databricks.Permissions`\u003c/span\u003e resource for a job would revert ownership to the creator.\n- A job must have exactly one owner. If a resource is changed and no owner is specified, the currently authenticated principal would become the new owner of the job. 
Nothing would change, per se, if the job was created through Pulumi.\n- A job cannot have a group as an owner.\n- Jobs triggered through _Run Now_ assume the permissions of the job owner and not the user or service principal who issued Run Now.\n- Read the [main documentation](https://docs.databricks.com/security/access-control/jobs-acl.html) for additional detail.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst awsPrincipal = new databricks.ServicePrincipal(\"aws_principal\", {displayName: \"main\"});\nconst latest = databricks.getSparkVersion({});\nconst smallest = databricks.getNodeType({\n    localDisk: true,\n});\nconst _this = new databricks.Job(\"this\", {\n    name: \"Featurization\",\n    maxConcurrentRuns: 1,\n    tasks: [{\n        taskKey: \"task1\",\n        newCluster: {\n            numWorkers: 300,\n            sparkVersion: latest.then(latest =\u003e latest.id),\n            nodeTypeId: smallest.then(smallest =\u003e smallest.id),\n        },\n        notebookTask: {\n            notebookPath: \"/Production/MakeFeatures\",\n        },\n    }],\n});\nconst jobUsage = new databricks.Permissions(\"job_usage\", {\n    jobId: _this.id,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_VIEW\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_MANAGE_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n        {\n            servicePrincipalName: awsPrincipal.applicationId,\n            permissionLevel: \"IS_OWNER\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\naws_principal = databricks.ServicePrincipal(\"aws_principal\", display_name=\"main\")\nlatest = databricks.get_spark_version()\nsmallest = databricks.get_node_type(local_disk=True)\nthis = databricks.Job(\"this\",\n    name=\"Featurization\",\n    max_concurrent_runs=1,\n    tasks=[{\n        \"task_key\": \"task1\",\n        \"new_cluster\": {\n            \"num_workers\": 300,\n            \"spark_version\": latest.id,\n            \"node_type_id\": smallest.id,\n        },\n        \"notebook_task\": {\n            \"notebook_path\": \"/Production/MakeFeatures\",\n        },\n    }])\njob_usage = databricks.Permissions(\"job_usage\",\n    job_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_VIEW\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_MANAGE_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n        {\n            \"service_principal_name\": aws_principal.application_id,\n            \"permission_level\": \"IS_OWNER\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", 
new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var awsPrincipal = new Databricks.ServicePrincipal(\"aws_principal\", new()\n    {\n        DisplayName = \"main\",\n    });\n\n    var latest = Databricks.GetSparkVersion.Invoke();\n\n    var smallest = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n    });\n\n    var @this = new Databricks.Job(\"this\", new()\n    {\n        Name = \"Featurization\",\n        MaxConcurrentRuns = 1,\n        Tasks = new[]\n        {\n            new Databricks.Inputs.JobTaskArgs\n            {\n                TaskKey = \"task1\",\n                NewCluster = new Databricks.Inputs.JobTaskNewClusterArgs\n                {\n                    NumWorkers = 300,\n                    SparkVersion = latest.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n                    NodeTypeId = smallest.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n                },\n                NotebookTask = new Databricks.Inputs.JobTaskNotebookTaskArgs\n                {\n                    NotebookPath = \"/Production/MakeFeatures\",\n                },\n            },\n        },\n    });\n\n    var jobUsage = new Databricks.Permissions(\"job_usage\", new()\n    {\n        JobId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_VIEW\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_MANAGE_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                ServicePrincipalName = awsPrincipal.ApplicationId,\n                PermissionLevel = \"IS_OWNER\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tawsPrincipal, err := databricks.NewServicePrincipal(ctx, \"aws_principal\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"main\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tlatest, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsmallest, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewJob(ctx, \"this\", \u0026databricks.JobArgs{\n\t\t\tName:              pulumi.String(\"Featurization\"),\n\t\t\tMaxConcurrentRuns: 
pulumi.Int(1),\n\t\t\tTasks: databricks.JobTaskArray{\n\t\t\t\t\u0026databricks.JobTaskArgs{\n\t\t\t\t\tTaskKey: pulumi.String(\"task1\"),\n\t\t\t\t\tNewCluster: \u0026databricks.JobTaskNewClusterArgs{\n\t\t\t\t\t\tNumWorkers:   pulumi.Int(300),\n\t\t\t\t\t\tSparkVersion: pulumi.String(latest.Id),\n\t\t\t\t\t\tNodeTypeId:   pulumi.String(smallest.Id),\n\t\t\t\t\t},\n\t\t\t\t\tNotebookTask: \u0026databricks.JobTaskNotebookTaskArgs{\n\t\t\t\t\t\tNotebookPath: pulumi.String(\"/Production/MakeFeatures\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"job_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tJobId: this.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tServicePrincipalName: awsPrincipal.ApplicationId,\n\t\t\t\t\tPermissionLevel:      pulumi.String(\"IS_OWNER\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.Job;\nimport com.pulumi.databricks.JobArgs;\nimport com.pulumi.databricks.inputs.JobTaskArgs;\nimport com.pulumi.databricks.inputs.JobTaskNewClusterArgs;\nimport com.pulumi.databricks.inputs.JobTaskNotebookTaskArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var awsPrincipal = new ServicePrincipal(\"awsPrincipal\", ServicePrincipalArgs.builder()\n            .displayName(\"main\")\n            .build());\n\n        final var latest = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .build());\n\n        final var smallest = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .build());\n\n        var this_ = new Job(\"this\", JobArgs.builder()\n            .name(\"Featurization\")\n     
       .maxConcurrentRuns(1)\n            .tasks(JobTaskArgs.builder()\n                .taskKey(\"task1\")\n                .newCluster(JobTaskNewClusterArgs.builder()\n                    .numWorkers(300)\n                    .sparkVersion(latest.id())\n                    .nodeTypeId(smallest.id())\n                    .build())\n                .notebookTask(JobTaskNotebookTaskArgs.builder()\n                    .notebookPath(\"/Production/MakeFeatures\")\n                    .build())\n                .build())\n            .build());\n\n        var jobUsage = new Permissions(\"jobUsage\", PermissionsArgs.builder()\n            .jobId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_VIEW\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_MANAGE_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .servicePrincipalName(awsPrincipal.applicationId())\n                    .permissionLevel(\"IS_OWNER\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  awsPrincipal:\n    type: databricks:ServicePrincipal\n    name: aws_principal\n    properties:\n      displayName: main\n  this:\n    type: databricks:Job\n    properties:\n      name: Featurization\n      maxConcurrentRuns: 1\n      tasks:\n        - taskKey: task1\n          newCluster:\n            numWorkers: 300\n            sparkVersion: ${latest.id}\n            nodeTypeId: ${smallest.id}\n          notebookTask:\n            notebookPath: /Production/MakeFeatures\n  jobUsage:\n    type: databricks:Permissions\n    name: job_usage\n    properties:\n      jobId: ${this.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_MANAGE_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n        - servicePrincipalName: ${awsPrincipal.applicationId}\n          permissionLevel: IS_OWNER\nvariables:\n  latest:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments: {}\n  smallest:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Lakeflow Declarative Pipelines usage\n\nThere are four assignable [permission levels](https://docs.databricks.com/aws/en/security/auth/access-control#lakeflow-declarative-pipelines-acls) for databricks_pipeline: `CAN_VIEW`, `CAN_RUN`, `CAN_MANAGE`, and `IS_OWNER`. Admins are granted the `CAN_MANAGE` permission by default, and they can assign that permission to non-admin users, and service principals.\n\n- The creator of a Lakeflow Declarative Pipeline has `IS_OWNER` permission. 
Destroying \u003cspan pulumi-lang-nodejs=\"`databricks.Permissions`\" pulumi-lang-dotnet=\"`databricks.Permissions`\" pulumi-lang-go=\"`Permissions`\" pulumi-lang-python=\"`Permissions`\" pulumi-lang-yaml=\"`databricks.Permissions`\" pulumi-lang-java=\"`databricks.Permissions`\"\u003e`databricks.Permissions`\u003c/span\u003e resource for a pipeline would revert ownership to the creator.\n- A Lakeflow Declarative Pipeline must have exactly one owner. If a resource is changed and no owner is specified, the currently authenticated principal would become the new owner of the pipeline. Nothing would change, per se, if the pipeline was created through Pulumi.\n- A Lakeflow Declarative Pipeline cannot have a group as an owner.\n- Lakeflow Declarative Pipelines triggered through _Start_ assume the permissions of the pipeline owner and not the user, and service principal who issued Run Now.\n- Read [main documentation](https://docs.databricks.com/aws/en/security/auth/access-control#lakeflow-declarative-pipelines-acls) for additional detail.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst me = databricks.getCurrentUser({});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst ldpDemo = new databricks.Notebook(\"ldp_demo\", {\n    contentBase64: std.base64encode({\n        input: `import dlt\njson_path = \\\\\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\\\"\n@dlt.table(\n   comment=\\\\\"The raw wikipedia clickstream dataset, ingested from /databricks-datasets.\\\\\"\n)\ndef clickstream_raw():\n    return (spark.read.format(\\\\\"json\\\\\").load(json_path))\n`,\n    }).then(invoke =\u003e invoke.result),\n    language: \"PYTHON\",\n    path: me.then(me =\u003e `${me.home}/ldp_demo`),\n});\nconst _this = new databricks.Pipeline(\"this\", {\n    name: me.then(me =\u003e `LDP Demo Pipeline (${me.alphanumeric})`),\n    storage: \"/test/tf-pipeline\",\n    configuration: {\n        key1: \"value1\",\n        key2: \"value2\",\n    },\n    libraries: [{\n        notebook: {\n            path: ldpDemo.id,\n        },\n    }],\n    continuous: false,\n    filters: {\n        includes: [\"com.databricks.include\"],\n        excludes: [\"com.databricks.exclude\"],\n    },\n});\nconst ldpUsage = new databricks.Permissions(\"ldp_usage\", {\n    pipelineId: _this.id,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_VIEW\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nme = databricks.get_current_user()\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nldp_demo = databricks.Notebook(\"ldp_demo\",\n    content_base64=std.base64encode(input=\"\"\"import dlt\njson_path = \\\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\"\n@dlt.table(\n   comment=\\\"The raw wikipedia clickstream dataset, ingested from /databricks-datasets.\\\"\n)\ndef clickstream_raw():\n    return (spark.read.format(\\\"json\\\").load(json_path))\n\"\"\").result,\n    language=\"PYTHON\",\n    path=f\"{me.home}/ldp_demo\")\nthis = databricks.Pipeline(\"this\",\n    name=f\"LDP Demo 
Pipeline ({me.alphanumeric})\",\n    storage=\"/test/tf-pipeline\",\n    configuration={\n        \"key1\": \"value1\",\n        \"key2\": \"value2\",\n    },\n    libraries=[{\n        \"notebook\": {\n            \"path\": ldp_demo.id,\n        },\n    }],\n    continuous=False,\n    filters={\n        \"includes\": [\"com.databricks.include\"],\n        \"excludes\": [\"com.databricks.exclude\"],\n    })\nldp_usage = databricks.Permissions(\"ldp_usage\",\n    pipeline_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_VIEW\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var ldpDemo = new Databricks.Notebook(\"ldp_demo\", new()\n    {\n        ContentBase64 = Std.Base64encode.Invoke(new()\n        {\n            Input = @\"import dlt\njson_path = \\\"\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\"\"\n@dlt.table(\n   comment=\\\"\"The raw wikipedia clickstream dataset, ingested from /databricks-datasets.\\\"\"\n)\ndef clickstream_raw():\n    return (spark.read.format(\\\"\"json\\\"\").load(json_path))\n\",\n        }).Apply(invoke =\u003e invoke.Result),\n        Language = \"PYTHON\",\n        Path = $\"{me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Home)}/ldp_demo\",\n    });\n\n    var @this = new Databricks.Pipeline(\"this\", new()\n    {\n        Name = $\"LDP Demo Pipeline ({me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)})\",\n        Storage = \"/test/tf-pipeline\",\n        Configuration = \n        {\n            { \"key1\", \"value1\" },\n            { \"key2\", \"value2\" },\n        },\n        Libraries = new[]\n        {\n            new Databricks.Inputs.PipelineLibraryArgs\n            {\n                Notebook = new Databricks.Inputs.PipelineLibraryNotebookArgs\n                {\n                    Path = ldpDemo.Id,\n                },\n            },\n        },\n        Continuous = false,\n        Filters = new Databricks.Inputs.PipelineFiltersArgs\n        {\n            Includes = new[]\n            {\n                \"com.databricks.include\",\n            },\n            Excludes = new[]\n            {\n                \"com.databricks.exclude\",\n            },\n        },\n    });\n\n    var ldpUsage = new Databricks.Permissions(\"ldp_usage\", new()\n    {\n        PipelineId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_VIEW\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeBase64encode, err := std.Base64encode(ctx, \u0026std.Base64encodeArgs{\n\t\t\tInput: `import dlt\njson_path = \\\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\"\n@dlt.table(\n   comment=\\\"The raw wikipedia clickstream dataset, ingested from /databricks-datasets.\\\"\n)\ndef clickstream_raw():\n    return (spark.read.format(\\\"json\\\").load(json_path))\n`,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tldpDemo, err := databricks.NewNotebook(ctx, \"ldp_demo\", \u0026databricks.NotebookArgs{\n\t\t\tContentBase64: pulumi.String(invokeBase64encode.Result),\n\t\t\tLanguage:      pulumi.String(\"PYTHON\"),\n\t\t\tPath:          pulumi.Sprintf(\"%v/ldp_demo\", me.Home),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewPipeline(ctx, \"this\", \u0026databricks.PipelineArgs{\n\t\t\tName:    pulumi.Sprintf(\"LDP Demo Pipeline (%v)\", me.Alphanumeric),\n\t\t\tStorage: pulumi.String(\"/test/tf-pipeline\"),\n\t\t\tConfiguration: pulumi.StringMap{\n\t\t\t\t\"key1\": pulumi.String(\"value1\"),\n\t\t\t\t\"key2\": pulumi.String(\"value2\"),\n\t\t\t},\n\t\t\tLibraries: databricks.PipelineLibraryArray{\n\t\t\t\t\u0026databricks.PipelineLibraryArgs{\n\t\t\t\t\tNotebook: \u0026databricks.PipelineLibraryNotebookArgs{\n\t\t\t\t\t\tPath: ldpDemo.ID(),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tContinuous: pulumi.Bool(false),\n\t\t\tFilters: \u0026databricks.PipelineFiltersArgs{\n\t\t\t\tIncludes: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"com.databricks.include\"),\n\t\t\t\t},\n\t\t\t\tExcludes: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"com.databricks.exclude\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"ldp_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tPipelineId: this.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Notebook;\nimport com.pulumi.databricks.NotebookArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.Base64encodeArgs;\nimport com.pulumi.databricks.Pipeline;\nimport com.pulumi.databricks.PipelineArgs;\nimport 
com.pulumi.databricks.inputs.PipelineLibraryArgs;\nimport com.pulumi.databricks.inputs.PipelineLibraryNotebookArgs;\nimport com.pulumi.databricks.inputs.PipelineFiltersArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var ldpDemo = new Notebook(\"ldpDemo\", NotebookArgs.builder()\n            .contentBase64(StdFunctions.base64encode(Base64encodeArgs.builder()\n                .input(\"\"\"\nimport dlt\njson_path = \\\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\"\n@dlt.table(\n   comment=\\\"The raw wikipedia clickstream dataset, ingested from /databricks-datasets.\\\"\n)\ndef clickstream_raw():\n    return (spark.read.format(\\\"json\\\").load(json_path))\n                \"\"\")\n                .build()).result())\n            .language(\"PYTHON\")\n            .path(String.format(\"%s/ldp_demo\", me.home()))\n            .build());\n\n        var this_ = new Pipeline(\"this\", PipelineArgs.builder()\n            .name(String.format(\"LDP Demo Pipeline (%s)\", me.alphanumeric()))\n            .storage(\"/test/tf-pipeline\")\n            .configuration(Map.ofEntries(\n                Map.entry(\"key1\", \"value1\"),\n                Map.entry(\"key2\", \"value2\")\n            ))\n            .libraries(PipelineLibraryArgs.builder()\n                .notebook(PipelineLibraryNotebookArgs.builder()\n                    .path(ldpDemo.id())\n                    .build())\n                .build())\n            .continuous(false)\n            .filters(PipelineFiltersArgs.builder()\n                .includes(\"com.databricks.include\")\n                .excludes(\"com.databricks.exclude\")\n                .build())\n            .build());\n\n        var ldpUsage = new Permissions(\"ldpUsage\", PermissionsArgs.builder()\n            .pipelineId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_VIEW\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  ldpDemo:\n    type: databricks:Notebook\n    name: ldp_demo\n    properties:\n      contentBase64:\n        fn::invoke:\n          function: std:base64encode\n          arguments:\n            input: |\n              import dlt\n              json_path = \\\"/databricks-datasets/wikipedia-datasets/data-001/clickstream/raw-uncompressed-json/2015_2_clickstream.json\\\"\n              @dlt.table(\n                 comment=\\\"The raw wikipedia clickstream dataset, 
ingested from /databricks-datasets.\\\"\n              )\n              def clickstream_raw():\n                  return (spark.read.format(\\\"json\\\").load(json_path))\n          return: result\n      language: PYTHON\n      path: ${me.home}/ldp_demo\n  this:\n    type: databricks:Pipeline\n    properties:\n      name: LDP Demo Pipeline (${me.alphanumeric})\n      storage: /test/tf-pipeline\n      configuration:\n        key1: value1\n        key2: value2\n      libraries:\n        - notebook:\n            path: ${ldpDemo.id}\n      continuous: false\n      filters:\n        includes:\n          - com.databricks.include\n        excludes:\n          - com.databricks.exclude\n  ldpUsage:\n    type: databricks:Permissions\n    name: ldp_usage\n    properties:\n      pipelineId: ${this.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Notebook usage\n\nValid [permission levels](https://docs.databricks.com/security/access-control/workspace-acl.html#notebook-permissions) for\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eare: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`.\n\nA notebook could be specified by using either \u003cspan pulumi-lang-nodejs=\"`notebookPath`\" pulumi-lang-dotnet=\"`NotebookPath`\" pulumi-lang-go=\"`notebookPath`\" pulumi-lang-python=\"`notebook_path`\" pulumi-lang-yaml=\"`notebookPath`\" pulumi-lang-java=\"`notebookPath`\"\u003e`notebook_path`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`notebookId`\" pulumi-lang-dotnet=\"`NotebookId`\" pulumi-lang-go=\"`notebookId`\" pulumi-lang-python=\"`notebook_id`\" pulumi-lang-yaml=\"`notebookId`\" pulumi-lang-java=\"`notebookId`\"\u003e`notebook_id`\u003c/span\u003e attribute.  
The value for the \u003cspan pulumi-lang-nodejs=\"`notebookId`\" pulumi-lang-dotnet=\"`NotebookId`\" pulumi-lang-go=\"`notebookId`\" pulumi-lang-python=\"`notebook_id`\" pulumi-lang-yaml=\"`notebookId`\" pulumi-lang-java=\"`notebookId`\"\u003e`notebook_id`\u003c/span\u003e is the object ID of the resource in the Databricks Workspace that is exposed as \u003cspan pulumi-lang-nodejs=\"`objectId`\" pulumi-lang-dotnet=\"`ObjectId`\" pulumi-lang-go=\"`objectId`\" pulumi-lang-python=\"`object_id`\" pulumi-lang-yaml=\"`objectId`\" pulumi-lang-java=\"`objectId`\"\u003e`object_id`\u003c/span\u003e attribute of the \u003cspan pulumi-lang-nodejs=\"`databricks.Notebook`\" pulumi-lang-dotnet=\"`databricks.Notebook`\" pulumi-lang-go=\"`Notebook`\" pulumi-lang-python=\"`Notebook`\" pulumi-lang-yaml=\"`databricks.Notebook`\" pulumi-lang-java=\"`databricks.Notebook`\"\u003e`databricks.Notebook`\u003c/span\u003e resource as shown below.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst _this = new databricks.Notebook(\"this\", {\n    contentBase64: std.base64encode({\n        input: \"# Welcome to your Python notebook\",\n    }).then(invoke =\u003e invoke.result),\n    path: \"/Production/ETL/Features\",\n    language: \"PYTHON\",\n});\nconst notebookUsageByPath = new databricks.Permissions(\"notebook_usage_by_path\", {\n    notebookPath: _this.path,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\nconst notebookUsageById = new databricks.Permissions(\"notebook_usage_by_id\", {\n    notebookId: _this.objectId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nthis = databricks.Notebook(\"this\",\n    content_base64=std.base64encode(input=\"# Welcome to your Python notebook\").result,\n    path=\"/Production/ETL/Features\",\n    language=\"PYTHON\")\nnotebook_usage_by_path = databricks.Permissions(\"notebook_usage_by_path\",\n    notebook_path=this.path,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\nnotebook_usage_by_id = databricks.Permissions(\"notebook_usage_by_id\",\n    notebook_id=this.object_id,\n    access_controls=[\n        {\n            
\"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var @this = new Databricks.Notebook(\"this\", new()\n    {\n        ContentBase64 = Std.Base64encode.Invoke(new()\n        {\n            Input = \"# Welcome to your Python notebook\",\n        }).Apply(invoke =\u003e invoke.Result),\n        Path = \"/Production/ETL/Features\",\n        Language = \"PYTHON\",\n    });\n\n    var notebookUsageByPath = new Databricks.Permissions(\"notebook_usage_by_path\", new()\n    {\n        NotebookPath = @this.Path,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n    var notebookUsageById = new Databricks.Permissions(\"notebook_usage_by_id\", new()\n    {\n        NotebookId = @this.ObjectId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeBase64encode, err := std.Base64encode(ctx, \u0026std.Base64encodeArgs{\n\t\t\tInput: \"# Welcome to your Python notebook\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewNotebook(ctx, \"this\", \u0026databricks.NotebookArgs{\n\t\t\tContentBase64: 
pulumi.String(invokeBase64encode.Result),\n\t\t\tPath:          pulumi.String(\"/Production/ETL/Features\"),\n\t\t\tLanguage:      pulumi.String(\"PYTHON\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"notebook_usage_by_path\", \u0026databricks.PermissionsArgs{\n\t\t\tNotebookPath: this.Path,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"notebook_usage_by_id\", \u0026databricks.PermissionsArgs{\n\t\t\tNotebookId: this.ObjectId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Notebook;\nimport com.pulumi.databricks.NotebookArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.Base64encodeArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var this_ = new Notebook(\"this\", NotebookArgs.builder()\n            .contentBase64(StdFunctions.base64encode(Base64encodeArgs.builder()\n                .input(\"# Welcome to your Python notebook\")\n                .build()).result())\n            .path(\"/Production/ETL/Features\")\n            .language(\"PYTHON\")\n            .build());\n\n        var notebookUsageByPath = new Permissions(\"notebookUsageByPath\", PermissionsArgs.builder()\n            .notebookPath(this_.path())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    
.groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n        var notebookUsageById = new Permissions(\"notebookUsageById\", PermissionsArgs.builder()\n            .notebookId(this_.objectId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:Notebook\n    properties:\n      contentBase64:\n        fn::invoke:\n          function: std:base64encode\n          arguments:\n            input: '# Welcome to your Python notebook'\n          return: result\n      path: /Production/ETL/Features\n      language: PYTHON\n  notebookUsageByPath:\n    type: databricks:Permissions\n    name: notebook_usage_by_path\n    properties:\n      notebookPath: ${this.path}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n  notebookUsageById:\n    type: databricks:Permissions\n    name: notebook_usage_by_id\n    properties:\n      notebookId: ${this.objectId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n\u003e when importing a permissions resource, only the \u003cspan pulumi-lang-nodejs=\"`notebookId`\" pulumi-lang-dotnet=\"`NotebookId`\" pulumi-lang-go=\"`notebookId`\" pulumi-lang-python=\"`notebook_id`\" pulumi-lang-yaml=\"`notebookId`\" pulumi-lang-java=\"`notebookId`\"\u003e`notebook_id`\u003c/span\u003e is filled!\n\n## Workspace file usage\n\nValid permission levels for\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceFile \" pulumi-lang-dotnet=\" databricks.WorkspaceFile \" pulumi-lang-go=\" WorkspaceFile \" pulumi-lang-python=\" WorkspaceFile \" pulumi-lang-yaml=\" databricks.WorkspaceFile \" pulumi-lang-java=\" databricks.WorkspaceFile \"\u003e databricks.WorkspaceFile \u003c/span\u003eare: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`.\n\nA workspace file could be specified by using either \u003cspan pulumi-lang-nodejs=\"`workspaceFilePath`\" pulumi-lang-dotnet=\"`WorkspaceFilePath`\" 
pulumi-lang-go=\"`workspaceFilePath`\" pulumi-lang-python=\"`workspace_file_path`\" pulumi-lang-yaml=\"`workspaceFilePath`\" pulumi-lang-java=\"`workspaceFilePath`\"\u003e`workspace_file_path`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`workspaceFileId`\" pulumi-lang-dotnet=\"`WorkspaceFileId`\" pulumi-lang-go=\"`workspaceFileId`\" pulumi-lang-python=\"`workspace_file_id`\" pulumi-lang-yaml=\"`workspaceFileId`\" pulumi-lang-java=\"`workspaceFileId`\"\u003e`workspace_file_id`\u003c/span\u003e attribute.  The value for the \u003cspan pulumi-lang-nodejs=\"`workspaceFileId`\" pulumi-lang-dotnet=\"`WorkspaceFileId`\" pulumi-lang-go=\"`workspaceFileId`\" pulumi-lang-python=\"`workspace_file_id`\" pulumi-lang-yaml=\"`workspaceFileId`\" pulumi-lang-java=\"`workspaceFileId`\"\u003e`workspace_file_id`\u003c/span\u003e is the object ID of the resource in the Databricks Workspace that is exposed as \u003cspan pulumi-lang-nodejs=\"`objectId`\" pulumi-lang-dotnet=\"`ObjectId`\" pulumi-lang-go=\"`objectId`\" pulumi-lang-python=\"`object_id`\" pulumi-lang-yaml=\"`objectId`\" pulumi-lang-java=\"`objectId`\"\u003e`object_id`\u003c/span\u003e attribute of the \u003cspan pulumi-lang-nodejs=\"`databricks.WorkspaceFile`\" pulumi-lang-dotnet=\"`databricks.WorkspaceFile`\" pulumi-lang-go=\"`WorkspaceFile`\" pulumi-lang-python=\"`WorkspaceFile`\" pulumi-lang-yaml=\"`databricks.WorkspaceFile`\" pulumi-lang-java=\"`databricks.WorkspaceFile`\"\u003e`databricks.WorkspaceFile`\u003c/span\u003e resource as shown below.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst _this = new databricks.WorkspaceFile(\"this\", {\n    contentBase64: std.base64encode({\n        input: \"print('Hello World')\",\n    }).then(invoke =\u003e invoke.result),\n    path: \"/Production/ETL/Features.py\",\n});\nconst workspaceFileUsageByPath = new databricks.Permissions(\"workspace_file_usage_by_path\", {\n    workspaceFilePath: _this.path,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\nconst workspaceFileUsageById = new databricks.Permissions(\"workspace_file_usage_by_id\", {\n    workspaceFileId: _this.objectId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nthis = databricks.WorkspaceFile(\"this\",\n    content_base64=std.base64encode(input=\"print('Hello World')\").result,\n    path=\"/Production/ETL/Features.py\")\nworkspace_file_usage_by_path = 
databricks.Permissions(\"workspace_file_usage_by_path\",\n    workspace_file_path=this.path,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\nworkspace_file_usage_by_id = databricks.Permissions(\"workspace_file_usage_by_id\",\n    workspace_file_id=this.object_id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var @this = new Databricks.WorkspaceFile(\"this\", new()\n    {\n        ContentBase64 = Std.Base64encode.Invoke(new()\n        {\n            Input = \"print('Hello World')\",\n        }).Apply(invoke =\u003e invoke.Result),\n        Path = \"/Production/ETL/Features.py\",\n    });\n\n    var workspaceFileUsageByPath = new Databricks.Permissions(\"workspace_file_usage_by_path\", new()\n    {\n        WorkspaceFilePath = @this.Path,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n    var workspaceFileUsageById = new Databricks.Permissions(\"workspace_file_usage_by_id\", new()\n    {\n        WorkspaceFileId = @this.ObjectId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := 
databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeBase64encode, err := std.Base64encode(ctx, \u0026std.Base64encodeArgs{\n\t\t\tInput: \"print('Hello World')\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewWorkspaceFile(ctx, \"this\", \u0026databricks.WorkspaceFileArgs{\n\t\t\tContentBase64: pulumi.String(invokeBase64encode.Result),\n\t\t\tPath:          pulumi.String(\"/Production/ETL/Features.py\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"workspace_file_usage_by_path\", \u0026databricks.PermissionsArgs{\n\t\t\tWorkspaceFilePath: this.Path,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"workspace_file_usage_by_id\", \u0026databricks.PermissionsArgs{\n\t\t\tWorkspaceFileId: this.ObjectId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.WorkspaceFile;\nimport com.pulumi.databricks.WorkspaceFileArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.Base64encodeArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n   
     var this_ = new WorkspaceFile(\"this\", WorkspaceFileArgs.builder()\n            .contentBase64(StdFunctions.base64encode(Base64encodeArgs.builder()\n                .input(\"print('Hello World')\")\n                .build()).result())\n            .path(\"/Production/ETL/Features.py\")\n            .build());\n\n        var workspaceFileUsageByPath = new Permissions(\"workspaceFileUsageByPath\", PermissionsArgs.builder()\n            .workspaceFilePath(this_.path())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n        var workspaceFileUsageById = new Permissions(\"workspaceFileUsageById\", PermissionsArgs.builder()\n            .workspaceFileId(this_.objectId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:WorkspaceFile\n    properties:\n      contentBase64:\n        fn::invoke:\n          function: std:base64encode\n          arguments:\n            input: print('Hello World')\n          return: result\n      path: /Production/ETL/Features.py\n  workspaceFileUsageByPath:\n    type: databricks:Permissions\n    name: workspace_file_usage_by_path\n    properties:\n      workspaceFilePath: ${this.path}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n  workspaceFileUsageById:\n    type: databricks:Permissions\n    name: workspace_file_usage_by_id\n    properties:\n      workspaceFileId: ${this.objectId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n\u003e when importing a permissions resource, only the \u003cspan pulumi-lang-nodejs=\"`workspaceFileId`\" pulumi-lang-dotnet=\"`WorkspaceFileId`\" pulumi-lang-go=\"`workspaceFileId`\" pulumi-lang-python=\"`workspace_file_id`\" pulumi-lang-yaml=\"`workspaceFileId`\" 
pulumi-lang-java=\"`workspaceFileId`\"\u003e`workspace_file_id`\u003c/span\u003e is filled!\n\n## Folder usage\n\nValid [permission levels](https://docs.databricks.com/security/access-control/workspace-acl.html#folder-permissions) for folders of\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eare: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`. Notebooks and experiments in a folder inherit all permissions settings of that folder. For example, a user (or service principal) that has `CAN_RUN` permission on a folder has `CAN_RUN` permission on the notebooks in that folder.\n\n- All users can list items in the folder without any permissions.\n- All users (or service principals) have `CAN_MANAGE` permission for items in the Workspace \u003e Shared Icon Shared folder. You can grant `CAN_MANAGE` permission to notebooks and folders by moving them to the Shared Icon Shared folder.\n- All users (or service principals) have `CAN_MANAGE` permission for objects the user creates.\n- User home directory - The user (or service principal) has `CAN_MANAGE` permission. All other users (or service principals) can list their directories.\n\nA folder could be specified by using either \u003cspan pulumi-lang-nodejs=\"`directoryPath`\" pulumi-lang-dotnet=\"`DirectoryPath`\" pulumi-lang-go=\"`directoryPath`\" pulumi-lang-python=\"`directory_path`\" pulumi-lang-yaml=\"`directoryPath`\" pulumi-lang-java=\"`directoryPath`\"\u003e`directory_path`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`directoryId`\" pulumi-lang-dotnet=\"`DirectoryId`\" pulumi-lang-go=\"`directoryId`\" pulumi-lang-python=\"`directory_id`\" pulumi-lang-yaml=\"`directoryId`\" pulumi-lang-java=\"`directoryId`\"\u003e`directory_id`\u003c/span\u003e attribute.  
The value for the \u003cspan pulumi-lang-nodejs=\"`directoryId`\" pulumi-lang-dotnet=\"`DirectoryId`\" pulumi-lang-go=\"`directoryId`\" pulumi-lang-python=\"`directory_id`\" pulumi-lang-yaml=\"`directoryId`\" pulumi-lang-java=\"`directoryId`\"\u003e`directory_id`\u003c/span\u003e is the object ID of the resource in the Databricks Workspace that is exposed as \u003cspan pulumi-lang-nodejs=\"`objectId`\" pulumi-lang-dotnet=\"`ObjectId`\" pulumi-lang-go=\"`objectId`\" pulumi-lang-python=\"`object_id`\" pulumi-lang-yaml=\"`objectId`\" pulumi-lang-java=\"`objectId`\"\u003e`object_id`\u003c/span\u003e attribute of the \u003cspan pulumi-lang-nodejs=\"`databricks.Directory`\" pulumi-lang-dotnet=\"`databricks.Directory`\" pulumi-lang-go=\"`Directory`\" pulumi-lang-python=\"`Directory`\" pulumi-lang-yaml=\"`databricks.Directory`\" pulumi-lang-java=\"`databricks.Directory`\"\u003e`databricks.Directory`\u003c/span\u003e resource as shown below.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst _this = new databricks.Directory(\"this\", {path: \"/Production/ETL\"});\nconst folderUsageByPath = new databricks.Permissions(\"folder_usage_by_path\", {\n    directoryPath: _this.path,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\nconst folderUsageById = new databricks.Permissions(\"folder_usage_by_id\", {\n    directoryId: _this.objectId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nthis = databricks.Directory(\"this\", path=\"/Production/ETL\")\nfolder_usage_by_path = databricks.Permissions(\"folder_usage_by_path\",\n    directory_path=this.path,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\nfolder_usage_by_id = databricks.Permissions(\"folder_usage_by_id\",\n    directory_id=this.object_id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing 
System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var @this = new Databricks.Directory(\"this\", new()\n    {\n        Path = \"/Production/ETL\",\n    });\n\n    var folderUsageByPath = new Databricks.Permissions(\"folder_usage_by_path\", new()\n    {\n        DirectoryPath = @this.Path,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n    var folderUsageById = new Databricks.Permissions(\"folder_usage_by_id\", new()\n    {\n        DirectoryId = @this.ObjectId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewDirectory(ctx, \"this\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Production/ETL\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"folder_usage_by_path\", \u0026databricks.PermissionsArgs{\n\t\t\tDirectoryPath: this.Path,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil 
{\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"folder_usage_by_id\", \u0026databricks.PermissionsArgs{\n\t\t\tDirectoryId: this.ObjectId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var this_ = new Directory(\"this\", DirectoryArgs.builder()\n            .path(\"/Production/ETL\")\n            .build());\n\n        var folderUsageByPath = new Permissions(\"folderUsageByPath\", PermissionsArgs.builder()\n            .directoryPath(this_.path())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n        var folderUsageById = new Permissions(\"folderUsageById\", PermissionsArgs.builder()\n            .directoryId(this_.objectId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    
}\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:Directory\n    properties:\n      path: /Production/ETL\n  folderUsageByPath:\n    type: databricks:Permissions\n    name: folder_usage_by_path\n    properties:\n      directoryPath: ${this.path}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n  folderUsageById:\n    type: databricks:Permissions\n    name: folder_usage_by_id\n    properties:\n      directoryId: ${this.objectId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n\u003e when importing a permissions resource, only the \u003cspan pulumi-lang-nodejs=\"`directoryId`\" pulumi-lang-dotnet=\"`DirectoryId`\" pulumi-lang-go=\"`directoryId`\" pulumi-lang-python=\"`directory_id`\" pulumi-lang-yaml=\"`directoryId`\" pulumi-lang-java=\"`directoryId`\"\u003e`directory_id`\u003c/span\u003e is filled!\n\n## Repos usage\n\nValid [permission levels](https://docs.databricks.com/security/access-control/workspace-acl.html) for\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eare: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst _this = new databricks.Repo(\"this\", {url: \"https://github.com/user/demo.git\"});\nconst repoUsage = new databricks.Permissions(\"repo_usage\", {\n    repoId: _this.id,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nthis = databricks.Repo(\"this\", url=\"https://github.com/user/demo.git\")\nrepo_usage = databricks.Permissions(\"repo_usage\",\n    repo_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing 
Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var @this = new Databricks.Repo(\"this\", new()\n    {\n        Url = \"https://github.com/user/demo.git\",\n    });\n\n    var repoUsage = new Databricks.Permissions(\"repo_usage\", new()\n    {\n        RepoId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewRepo(ctx, \"this\", \u0026databricks.RepoArgs{\n\t\t\tUrl: pulumi.String(\"https://github.com/user/demo.git\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"repo_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tRepoId: this.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Repo;\nimport com.pulumi.databricks.RepoArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", 
GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var this_ = new Repo(\"this\", RepoArgs.builder()\n            .url(\"https://github.com/user/demo.git\")\n            .build());\n\n        var repoUsage = new Permissions(\"repoUsage\", PermissionsArgs.builder()\n            .repoId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:Repo\n    properties:\n      url: https://github.com/user/demo.git\n  repoUsage:\n    type: databricks:Permissions\n    name: repo_usage\n    properties:\n      repoId: ${this.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## MLflow Experiment usage\n\nValid [permission levels](https://docs.databricks.com/security/access-control/workspace-acl.html#mlflow-experiment-permissions-1) for\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eare: `CAN_READ`, `CAN_EDIT`, and `CAN_MANAGE`.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = databricks.getCurrentUser({});\nconst _this = new databricks.MlflowExperiment(\"this\", {\n    name: me.then(me =\u003e `${me.home}/Sample`),\n    artifactLocation: \"s3://bucket/my-experiment\",\n    description: \"My MLflow experiment description\",\n});\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst experimentUsage = new databricks.Permissions(\"experiment_usage\", {\n    experimentId: _this.id,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.get_current_user()\nthis = 
databricks.MlflowExperiment(\"this\",\n    name=f\"{me.home}/Sample\",\n    artifact_location=\"s3://bucket/my-experiment\",\n    description=\"My MLflow experiment description\")\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nexperiment_usage = databricks.Permissions(\"experiment_usage\",\n    experiment_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var @this = new Databricks.MlflowExperiment(\"this\", new()\n    {\n        Name = $\"{me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Home)}/Sample\",\n        ArtifactLocation = \"s3://bucket/my-experiment\",\n        Description = \"My MLflow experiment description\",\n    });\n\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var experimentUsage = new Databricks.Permissions(\"experiment_usage\", new()\n    {\n        ExperimentId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewMlflowExperiment(ctx, \"this\", \u0026databricks.MlflowExperimentArgs{\n\t\t\tName:             pulumi.Sprintf(\"%v/Sample\", me.Home),\n\t\t\tArtifactLocation: pulumi.String(\"s3://bucket/my-experiment\"),\n\t\t\tDescription:      pulumi.String(\"My MLflow experiment description\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"experiment_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tExperimentId: this.ID(),\n\t\t\tAccessControls: 
databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.MlflowExperiment;\nimport com.pulumi.databricks.MlflowExperimentArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var this_ = new MlflowExperiment(\"this\", MlflowExperimentArgs.builder()\n            .name(String.format(\"%s/Sample\", me.home()))\n            .artifactLocation(\"s3://bucket/my-experiment\")\n            .description(\"My MLflow experiment description\")\n            .build());\n\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var experimentUsage = new Permissions(\"experimentUsage\", PermissionsArgs.builder()\n            .experimentId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MlflowExperiment\n    properties:\n      name: ${me.home}/Sample\n      artifactLocation: s3://bucket/my-experiment\n      description: My MLflow experiment description\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  experimentUsage:\n    type: databricks:Permissions\n    name: experiment_usage\n    properties:\n      experimentId: ${this.id}\n      accessControls:\n        - groupName: 
users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_MANAGE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## MLflow Model usage\n\nValid [permission levels](https://docs.databricks.com/security/access-control/workspace-acl.html#mlflow-model-permissions-1) for\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eare: `CAN_READ`, `CAN_EDIT`, `CAN_MANAGE_STAGING_VERSIONS`, `CAN_MANAGE_PRODUCTION_VERSIONS`, and `CAN_MANAGE`. You can also manage permissions for all MLflow models by \u003cspan pulumi-lang-nodejs=\"`registeredModelId \" pulumi-lang-dotnet=\"`RegisteredModelId \" pulumi-lang-go=\"`registeredModelId \" pulumi-lang-python=\"`registered_model_id \" pulumi-lang-yaml=\"`registeredModelId \" pulumi-lang-java=\"`registeredModelId \"\u003e`registered_model_id \u003c/span\u003e= \"root\"`.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.MlflowModel(\"this\", {name: \"SomePredictions\"});\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst modelUsage = new databricks.Permissions(\"model_usage\", {\n    registeredModelId: _this.registeredModelId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_READ\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_MANAGE_PRODUCTION_VERSIONS\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE_STAGING_VERSIONS\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.MlflowModel(\"this\", name=\"SomePredictions\")\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nmodel_usage = databricks.Permissions(\"model_usage\",\n    registered_model_id=this.registered_model_id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_READ\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_MANAGE_PRODUCTION_VERSIONS\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE_STAGING_VERSIONS\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.MlflowModel(\"this\", new()\n    {\n        Name = \"SomePredictions\",\n    });\n\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var modelUsage = new 
Databricks.Permissions(\"model_usage\", new()\n    {\n        RegisteredModelId = @this.RegisteredModelId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_READ\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_MANAGE_PRODUCTION_VERSIONS\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE_STAGING_VERSIONS\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewMlflowModel(ctx, \"this\", \u0026databricks.MlflowModelArgs{\n\t\t\tName: pulumi.String(\"SomePredictions\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"model_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tRegisteredModelId: this.RegisteredModelId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_READ\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE_PRODUCTION_VERSIONS\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE_STAGING_VERSIONS\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MlflowModel;\nimport com.pulumi.databricks.MlflowModelArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new MlflowModel(\"this\", MlflowModelArgs.builder()\n            .name(\"SomePredictions\")\n            .build());\n\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            
.displayName(\"Engineering\")\n            .build());\n\n        var modelUsage = new Permissions(\"modelUsage\", PermissionsArgs.builder()\n            .registeredModelId(this_.registeredModelId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_READ\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_MANAGE_PRODUCTION_VERSIONS\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE_STAGING_VERSIONS\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:MlflowModel\n    properties:\n      name: SomePredictions\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  modelUsage:\n    type: databricks:Permissions\n    name: model_usage\n    properties:\n      registeredModelId: ${this.registeredModelId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_READ\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_MANAGE_PRODUCTION_VERSIONS\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE_STAGING_VERSIONS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Model serving usage\n\nValid permission levels for\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eare: `CAN_VIEW`, `CAN_QUERY`, and `CAN_MANAGE`.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ModelServing(\"this\", {\n    name: \"tf-test\",\n    config: {\n        servedModels: [{\n            name: \"prod_model\",\n            modelName: \"test\",\n            modelVersion: \"1\",\n            workloadSize: \"Small\",\n            scaleToZeroEnabled: true,\n        }],\n    },\n});\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst mlServingUsage = new databricks.Permissions(\"ml_serving_usage\", {\n    servingEndpointId: _this.servingEndpointId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_VIEW\",\n        },\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_QUERY\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ModelServing(\"this\",\n    name=\"tf-test\",\n    config={\n        \"served_models\": [{\n            \"name\": \"prod_model\",\n            \"model_name\": \"test\",\n            \"model_version\": \"1\",\n            \"workload_size\": \"Small\",\n            
\"scale_to_zero_enabled\": True,\n        }],\n    })\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nml_serving_usage = databricks.Permissions(\"ml_serving_usage\",\n    serving_endpoint_id=this.serving_endpoint_id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_VIEW\",\n        },\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_QUERY\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ModelServing(\"this\", new()\n    {\n        Name = \"tf-test\",\n        Config = new Databricks.Inputs.ModelServingConfigArgs\n        {\n            ServedModels = new[]\n            {\n                new Databricks.Inputs.ModelServingConfigServedModelArgs\n                {\n                    Name = \"prod_model\",\n                    ModelName = \"test\",\n                    ModelVersion = \"1\",\n                    WorkloadSize = \"Small\",\n                    ScaleToZeroEnabled = true,\n                },\n            },\n        },\n    });\n\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var mlServingUsage = new Databricks.Permissions(\"ml_serving_usage\", new()\n    {\n        ServingEndpointId = @this.ServingEndpointId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_VIEW\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_QUERY\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewModelServing(ctx, \"this\", \u0026databricks.ModelServingArgs{\n\t\t\tName: pulumi.String(\"tf-test\"),\n\t\t\tConfig: \u0026databricks.ModelServingConfigArgs{\n\t\t\t\tServedModels: databricks.ModelServingConfigServedModelArray{\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedModelArgs{\n\t\t\t\t\t\tName:               pulumi.String(\"prod_model\"),\n\t\t\t\t\t\tModelName:          pulumi.String(\"test\"),\n\t\t\t\t\t\tModelVersion:       pulumi.String(\"1\"),\n\t\t\t\t\t\tWorkloadSize:       pulumi.String(\"Small\"),\n\t\t\t\t\t\tScaleToZeroEnabled: pulumi.Bool(true),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"ml_serving_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tServingEndpointId: this.ServingEndpointId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_QUERY\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ModelServing;\nimport com.pulumi.databricks.ModelServingArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ModelServing(\"this\", ModelServingArgs.builder()\n            .name(\"tf-test\")\n            .config(ModelServingConfigArgs.builder()\n                .servedModels(ModelServingConfigServedModelArgs.builder()\n                    .name(\"prod_model\")\n                    .modelName(\"test\")\n                    .modelVersion(\"1\")\n                    .workloadSize(\"Small\")\n                    .scaleToZeroEnabled(true)\n                    .build())\n                .build())\n            .build());\n\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var mlServingUsage = new Permissions(\"mlServingUsage\", PermissionsArgs.builder()\n            .servingEndpointId(this_.servingEndpointId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_VIEW\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_QUERY\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ModelServing\n    properties:\n      name: tf-test\n  
    config:\n        servedModels:\n          - name: prod_model\n            modelName: test\n            modelVersion: '1'\n            workloadSize: Small\n            scaleToZeroEnabled: true\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  mlServingUsage:\n    type: databricks:Permissions\n    name: ml_serving_usage\n    properties:\n      servingEndpointId: ${this.servingEndpointId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_MANAGE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_QUERY\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Mosaic AI Vector Search usage\n\nValid permission levels for\u003cspan pulumi-lang-nodejs=\" databricks.VectorSearchEndpoint \" pulumi-lang-dotnet=\" databricks.VectorSearchEndpoint \" pulumi-lang-go=\" VectorSearchEndpoint \" pulumi-lang-python=\" VectorSearchEndpoint \" pulumi-lang-yaml=\" databricks.VectorSearchEndpoint \" pulumi-lang-java=\" databricks.VectorSearchEndpoint \"\u003e databricks.VectorSearchEndpoint \u003c/span\u003eare: `CAN_USE` and `CAN_MANAGE`.\n\n\u003e You need to use the \u003cspan pulumi-lang-nodejs=\"`endpointId`\" pulumi-lang-dotnet=\"`EndpointId`\" pulumi-lang-go=\"`endpointId`\" pulumi-lang-python=\"`endpoint_id`\" pulumi-lang-yaml=\"`endpointId`\" pulumi-lang-java=\"`endpointId`\"\u003e`endpoint_id`\u003c/span\u003e attribute of \u003cspan pulumi-lang-nodejs=\"`databricks.VectorSearchEndpoint`\" pulumi-lang-dotnet=\"`databricks.VectorSearchEndpoint`\" pulumi-lang-go=\"`VectorSearchEndpoint`\" pulumi-lang-python=\"`VectorSearchEndpoint`\" pulumi-lang-yaml=\"`databricks.VectorSearchEndpoint`\" pulumi-lang-java=\"`databricks.VectorSearchEndpoint`\"\u003e`databricks.VectorSearchEndpoint`\u003c/span\u003e as value for \u003cspan pulumi-lang-nodejs=\"`vectorSearchEndpointId`\" pulumi-lang-dotnet=\"`VectorSearchEndpointId`\" pulumi-lang-go=\"`vectorSearchEndpointId`\" pulumi-lang-python=\"`vector_search_endpoint_id`\" pulumi-lang-yaml=\"`vectorSearchEndpointId`\" pulumi-lang-java=\"`vectorSearchEndpointId`\"\u003e`vector_search_endpoint_id`\u003c/span\u003e, not the \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e!\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.VectorSearchEndpoint(\"this\", {\n    name: \"vector-search-test\",\n    endpointType: \"STANDARD\",\n});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst vectorSearchEndpointUsage = new databricks.Permissions(\"vector_search_endpoint_usage\", {\n    vectorSearchEndpointId: _this.endpointId,\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.VectorSearchEndpoint(\"this\",\n    name=\"vector-search-test\",\n    endpoint_type=\"STANDARD\")\neng = databricks.Group(\"eng\", 
display_name=\"Engineering\")\nvector_search_endpoint_usage = databricks.Permissions(\"vector_search_endpoint_usage\",\n    vector_search_endpoint_id=this.endpoint_id,\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.VectorSearchEndpoint(\"this\", new()\n    {\n        Name = \"vector-search-test\",\n        EndpointType = \"STANDARD\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var vectorSearchEndpointUsage = new Databricks.Permissions(\"vector_search_endpoint_usage\", new()\n    {\n        VectorSearchEndpointId = @this.EndpointId,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewVectorSearchEndpoint(ctx, \"this\", \u0026databricks.VectorSearchEndpointArgs{\n\t\t\tName:         pulumi.String(\"vector-search-test\"),\n\t\t\tEndpointType: pulumi.String(\"STANDARD\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"vector_search_endpoint_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tVectorSearchEndpointId: this.EndpointId,\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.VectorSearchEndpoint;\nimport com.pulumi.databricks.VectorSearchEndpointArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        
Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new VectorSearchEndpoint(\"this\", VectorSearchEndpointArgs.builder()\n            .name(\"vector-search-test\")\n            .endpointType(\"STANDARD\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var vectorSearchEndpointUsage = new Permissions(\"vectorSearchEndpointUsage\", PermissionsArgs.builder()\n            .vectorSearchEndpointId(this_.endpointId())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:VectorSearchEndpoint\n    properties:\n      name: vector-search-test\n      endpointType: STANDARD\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  vectorSearchEndpointUsage:\n    type: databricks:Permissions\n    name: vector_search_endpoint_usage\n    properties:\n      vectorSearchEndpointId: ${this.endpointId}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Passwords usage\n\nBy default on AWS deployments, all admin users can sign in to Databricks using either SSO or their username and password, and all API users can authenticate to the Databricks REST APIs using their username and password. 
As an admin, you [can limit](https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html#optional-configure-password-access-control) admin users' and API users' ability to authenticate with their username and password by configuring `CAN_USE` permissions using password access control.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst guests = new databricks.Group(\"guests\", {displayName: \"Guest Users\"});\nconst passwordUsage = new databricks.Permissions(\"password_usage\", {\n    authorization: \"passwords\",\n    accessControls: [{\n        groupName: guests.displayName,\n        permissionLevel: \"CAN_USE\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nguests = databricks.Group(\"guests\", display_name=\"Guest Users\")\npassword_usage = databricks.Permissions(\"password_usage\",\n    authorization=\"passwords\",\n    access_controls=[{\n        \"group_name\": guests.display_name,\n        \"permission_level\": \"CAN_USE\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var guests = new Databricks.Group(\"guests\", new()\n    {\n        DisplayName = \"Guest Users\",\n    });\n\n    var passwordUsage = new Databricks.Permissions(\"password_usage\", new()\n    {\n        Authorization = \"passwords\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = guests.DisplayName,\n                PermissionLevel = \"CAN_USE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tguests, err := databricks.NewGroup(ctx, \"guests\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Guest Users\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"password_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tAuthorization: pulumi.String(\"passwords\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       guests.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var guests = new Group(\"guests\", GroupArgs.builder()\n            .displayName(\"Guest Users\")\n            .build());\n\n        var passwordUsage = new 
Permissions(\"passwordUsage\", PermissionsArgs.builder()\n            .authorization(\"passwords\")\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .groupName(guests.displayName())\n                .permissionLevel(\"CAN_USE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  guests:\n    type: databricks:Group\n    properties:\n      displayName: Guest Users\n  passwordUsage:\n    type: databricks:Permissions\n    name: password_usage\n    properties:\n      authorization: passwords\n      accessControls:\n        - groupName: ${guests.displayName}\n          permissionLevel: CAN_USE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Token usage\n\nIt is required to have at least 1 personal access token in the workspace before you can manage tokens permissions.\n\n!\u003e **Warning** There can be only one `authorization = \"tokens\"` permissions resource per workspace, otherwise there'll be a permanent configuration drift. After applying changes, users who previously had either `CAN_USE` or `CAN_MANAGE` permission but no longer have either permission have their access to token-based authentication revoked. Their active tokens are immediately deleted (revoked).\n\nOnly [possible permission](https://docs.databricks.com/administration-guide/access-control/tokens.html) to assign to non-admin group is `CAN_USE`, where _admins_ `CAN_MANAGE` all tokens:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst tokenUsage = new databricks.Permissions(\"token_usage\", {\n    authorization: \"tokens\",\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_USE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\ntoken_usage = databricks.Permissions(\"token_usage\",\n    authorization=\"tokens\",\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_USE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var tokenUsage = new Databricks.Permissions(\"token_usage\", new()\n    {\n        Authorization = \"tokens\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                
PermissionLevel = \"CAN_USE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"token_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tAuthorization: pulumi.String(\"tokens\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var tokenUsage = new Permissions(\"tokenUsage\", PermissionsArgs.builder()\n            .authorization(\"tokens\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_USE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  tokenUsage:\n    type: databricks:Permissions\n    name: token_usage\n    properties:\n      authorization: tokens\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_USE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## SQL warehouse usage\n\n[SQL warehouses](https://docs.databricks.com/sql/user/security/access-control/sql-endpoint-acl.html) have five possible permissions: `CAN_USE`, `CAN_MONITOR`, `CAN_MANAGE`, 
`CAN_VIEW` and `IS_OWNER`:\n\n- The creator of a warehouse has `IS_OWNER` permission. Destroying \u003cspan pulumi-lang-nodejs=\"`databricks.Permissions`\" pulumi-lang-dotnet=\"`databricks.Permissions`\" pulumi-lang-go=\"`Permissions`\" pulumi-lang-python=\"`Permissions`\" pulumi-lang-yaml=\"`databricks.Permissions`\" pulumi-lang-java=\"`databricks.Permissions`\"\u003e`databricks.Permissions`\u003c/span\u003e resource for a warehouse would revert ownership to the creator.\n- A warehouse must have exactly one owner. If a resource is changed and no owner is specified, the currently authenticated principal would become the new owner of the warehouse. Nothing would change, per se, if the warehouse was created through Pulumi.\n- A warehouse cannot have a group as an owner.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = databricks.getCurrentUser({});\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst _this = new databricks.SqlEndpoint(\"this\", {\n    name: me.then(me =\u003e `Endpoint of ${me.alphanumeric}`),\n    clusterSize: \"Small\",\n    maxNumClusters: 1,\n    tags: {\n        customTags: [{\n            key: \"City\",\n            value: \"Amsterdam\",\n        }],\n    },\n});\nconst endpointUsage = new databricks.Permissions(\"endpoint_usage\", {\n    sqlEndpointId: _this.id,\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.get_current_user()\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nthis = databricks.SqlEndpoint(\"this\",\n    name=f\"Endpoint of {me.alphanumeric}\",\n    cluster_size=\"Small\",\n    max_num_clusters=1,\n    tags={\n        \"custom_tags\": [{\n            \"key\": \"City\",\n            \"value\": \"Amsterdam\",\n        }],\n    })\nendpoint_usage = databricks.Permissions(\"endpoint_usage\",\n    sql_endpoint_id=this.id,\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var @this = new Databricks.SqlEndpoint(\"this\", new()\n    {\n        Name = $\"Endpoint of {me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)}\",\n        ClusterSize = \"Small\",\n        MaxNumClusters = 1,\n        Tags = new Databricks.Inputs.SqlEndpointTagsArgs\n        {\n            CustomTags = new[]\n            {\n                new Databricks.Inputs.SqlEndpointTagsCustomTagArgs\n                {\n    
                Key = \"City\",\n                    Value = \"Amsterdam\",\n                },\n            },\n        },\n    });\n\n    var endpointUsage = new Databricks.Permissions(\"endpoint_usage\", new()\n    {\n        SqlEndpointId = @this.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewSqlEndpoint(ctx, \"this\", \u0026databricks.SqlEndpointArgs{\n\t\t\tName:           pulumi.Sprintf(\"Endpoint of %v\", me.Alphanumeric),\n\t\t\tClusterSize:    pulumi.String(\"Small\"),\n\t\t\tMaxNumClusters: pulumi.Int(1),\n\t\t\tTags: \u0026databricks.SqlEndpointTagsArgs{\n\t\t\t\tCustomTags: databricks.SqlEndpointTagsCustomTagArray{\n\t\t\t\t\t\u0026databricks.SqlEndpointTagsCustomTagArgs{\n\t\t\t\t\t\tKey:   pulumi.String(\"City\"),\n\t\t\t\t\t\tValue: pulumi.String(\"Amsterdam\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"endpoint_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlEndpointId: this.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.SqlEndpoint;\nimport com.pulumi.databricks.SqlEndpointArgs;\nimport com.pulumi.databricks.inputs.SqlEndpointTagsArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n  
  }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var this_ = new SqlEndpoint(\"this\", SqlEndpointArgs.builder()\n            .name(String.format(\"Endpoint of %s\", me.alphanumeric()))\n            .clusterSize(\"Small\")\n            .maxNumClusters(1)\n            .tags(SqlEndpointTagsArgs.builder()\n                .customTags(SqlEndpointTagsCustomTagArgs.builder()\n                    .key(\"City\")\n                    .value(\"Amsterdam\")\n                    .build())\n                .build())\n            .build());\n\n        var endpointUsage = new Permissions(\"endpointUsage\", PermissionsArgs.builder()\n            .sqlEndpointId(this_.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  this:\n    type: databricks:SqlEndpoint\n    properties:\n      name: Endpoint of ${me.alphanumeric}\n      clusterSize: Small\n      maxNumClusters: 1\n      tags:\n        customTags:\n          - key: City\n            value: Amsterdam\n  endpointUsage:\n    type: databricks:Permissions\n    name: endpoint_usage\n    properties:\n      sqlEndpointId: ${this.id}\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Dashboard usage\n\n[Dashboards](https://docs.databricks.com/en/dashboards/tutorials/manage-permissions.html) have four possible permissions: `CAN_READ`, `CAN_RUN`, `CAN_EDIT` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst dashboard = new databricks.Dashboard(\"dashboard\", {displayName: \"TF New Dashboard\"});\nconst dashboardUsage = new databricks.Permissions(\"dashboard_usage\", {\n    dashboardId: dashboard.id,\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", 
display_name=\"Engineering\")\ndashboard = databricks.Dashboard(\"dashboard\", display_name=\"TF New Dashboard\")\ndashboard_usage = databricks.Permissions(\"dashboard_usage\",\n    dashboard_id=dashboard.id,\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var dashboard = new Databricks.Dashboard(\"dashboard\", new()\n    {\n        DisplayName = \"TF New Dashboard\",\n    });\n\n    var dashboardUsage = new Databricks.Permissions(\"dashboard_usage\", new()\n    {\n        DashboardId = dashboard.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdashboard, err := databricks.NewDashboard(ctx, \"dashboard\", \u0026databricks.DashboardArgs{\n\t\t\tDisplayName: pulumi.String(\"TF New Dashboard\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"dashboard_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tDashboardId: dashboard.ID(),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Dashboard;\nimport com.pulumi.databricks.DashboardArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport 
java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var dashboard = new Dashboard(\"dashboard\", DashboardArgs.builder()\n            .displayName(\"TF New Dashboard\")\n            .build());\n\n        var dashboardUsage = new Permissions(\"dashboardUsage\", PermissionsArgs.builder()\n            .dashboardId(dashboard.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  dashboard:\n    type: databricks:Dashboard\n    properties:\n      displayName: TF New Dashboard\n  dashboardUsage:\n    type: databricks:Permissions\n    name: dashboard_usage\n    properties:\n      dashboardId: ${dashboard.id}\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Legacy SQL Dashboard usage\n\n[Legacy SQL dashboards](https://docs.databricks.com/sql/user/security/access-control/dashboard-acl.html) have three possible permissions: `CAN_VIEW`, `CAN_RUN` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst sqlDashboardUsage = new databricks.Permissions(\"sql_dashboard_usage\", {\n    sqlDashboardId: \"3244325\",\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nsql_dashboard_usage = databricks.Permissions(\"sql_dashboard_usage\",\n    sql_dashboard_id=\"3244325\",\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new 
Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var sqlDashboardUsage = new Databricks.Permissions(\"sql_dashboard_usage\", new()\n    {\n        SqlDashboardId = \"3244325\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"sql_dashboard_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlDashboardId: pulumi.String(\"3244325\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var sqlDashboardUsage = new Permissions(\"sqlDashboardUsage\", PermissionsArgs.builder()\n            .sqlDashboardId(\"3244325\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    
}\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  sqlDashboardUsage:\n    type: databricks:Permissions\n    name: sql_dashboard_usage\n    properties:\n      sqlDashboardId: '3244325'\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## SQL Query usage\n\n[SQL queries](https://docs.databricks.com/sql/user/security/access-control/query-acl.html) have three possible permissions: `CAN_VIEW`, `CAN_RUN` and `CAN_MANAGE`:\n\n\u003e If you do not define an \u003cspan pulumi-lang-nodejs=\"`accessControl`\" pulumi-lang-dotnet=\"`AccessControl`\" pulumi-lang-go=\"`accessControl`\" pulumi-lang-python=\"`access_control`\" pulumi-lang-yaml=\"`accessControl`\" pulumi-lang-java=\"`accessControl`\"\u003e`access_control`\u003c/span\u003e block granting `CAN_MANAGE` explicitly for the user calling this provider, the Databricks Pulumi provider will add the `CAN_MANAGE` permission for the caller. This is a failsafe to prevent situations where the caller is locked out from making changes to the targeted \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e resource when the backend API does not apply permission inheritance correctly.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst queryUsage = new databricks.Permissions(\"query_usage\", {\n    sqlQueryId: \"3244325\",\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nquery_usage = databricks.Permissions(\"query_usage\",\n    sql_query_id=\"3244325\",\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var queryUsage = new Databricks.Permissions(\"query_usage\", new()\n    {\n        SqlQueryId = \"3244325\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n            
    GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"query_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlQueryId: pulumi.String(\"3244325\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var queryUsage = new Permissions(\"queryUsage\", PermissionsArgs.builder()\n            .sqlQueryId(\"3244325\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  queryUsage:\n    type: databricks:Permissions\n    name: query_usage\n    properties:\n      sqlQueryId: '3244325'\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: 
CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## SQL Alert (AlertV2) usage\n\n[Alert V2](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html), the new version of SQL Alert, has four possible permission levels: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst appUsage = new databricks.Permissions(\"app_usage\", {\n    alertV2Id: \"12345\",\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\napp_usage = databricks.Permissions(\"app_usage\",\n    alert_v2_id=\"12345\",\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var appUsage = new Databricks.Permissions(\"app_usage\", new()\n    {\n        AlertV2Id = \"12345\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"app_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tAlertV2Id: pulumi.String(\"12345\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: 
pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var appUsage = new Permissions(\"appUsage\", PermissionsArgs.builder()\n            .alertV2Id(\"12345\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  appUsage:\n    type: databricks:Permissions\n    name: app_usage\n    properties:\n      alertV2Id: '12345'\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## SQL Alert (legacy) usage\n\n[SQL alerts](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html) have three possible permissions: `CAN_VIEW`, `CAN_RUN` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst auto = new databricks.Group(\"auto\", {displayName: \"Automation\"});\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst alertUsage = new databricks.Permissions(\"alert_usage\", {\n    sqlAlertId: \"3244325\",\n    accessControls: [\n        {\n            groupName: auto.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nauto = databricks.Group(\"auto\", display_name=\"Automation\")\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\nalert_usage = databricks.Permissions(\"alert_usage\",\n    sql_alert_id=\"3244325\",\n    access_controls=[\n        {\n            \"group_name\": auto.display_name,\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": 
\"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var auto = new Databricks.Group(\"auto\", new()\n    {\n        DisplayName = \"Automation\",\n    });\n\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var alertUsage = new Databricks.Permissions(\"alert_usage\", new()\n    {\n        SqlAlertId = \"3244325\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = auto.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tauto, err := databricks.NewGroup(ctx, \"auto\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"alert_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlAlertId: pulumi.String(\"3244325\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       auto.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var auto = new Group(\"auto\", GroupArgs.builder()\n            .displayName(\"Automation\")\n            .build());\n\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var alertUsage = new Permissions(\"alertUsage\", PermissionsArgs.builder()\n            .sqlAlertId(\"3244325\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(auto.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                
PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  auto:\n    type: databricks:Group\n    properties:\n      displayName: Automation\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  alertUsage:\n    type: databricks:Permissions\n    name: alert_usage\n    properties:\n      sqlAlertId: '3244325'\n      accessControls:\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Databricks Apps usage\n\n[Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst appUsage = new databricks.Permissions(\"app_usage\", {\n    appName: \"myapp\",\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\napp_usage = databricks.Permissions(\"app_usage\",\n    app_name=\"myapp\",\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var appUsage = new Databricks.Permissions(\"app_usage\", new()\n    {\n        AppName = \"myapp\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"app_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tAppName: pulumi.String(\"myapp\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: 
pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var appUsage = new Permissions(\"appUsage\", PermissionsArgs.builder()\n            .appName(\"myapp\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  appUsage:\n    type: databricks:Permissions\n    name: app_usage\n    properties:\n      appName: myapp\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Lakebase Database Instances usage\n\n[Databricks Lakebase](https://docs.databricks.com/aws/en/oltp/) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst appUsage = new databricks.Permissions(\"app_usage\", {\n    databaseInstanceName: \"my_database\",\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\napp_usage = databricks.Permissions(\"app_usage\",\n    database_instance_name=\"my_database\",\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var eng = new 
Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var appUsage = new Databricks.Permissions(\"app_usage\", new()\n    {\n        DatabaseInstanceName = \"my_database\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"app_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tDatabaseInstanceName: pulumi.String(\"my_database\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var appUsage = new Permissions(\"appUsage\", PermissionsArgs.builder()\n            .databaseInstanceName(\"my_database\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  appUsage:\n    type: databricks:Permissions\n    name: app_usage\n    properties:\n      databaseInstanceName: my_database\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser 
--\u003e\n\n## Lakebase Database Projects usage\n\n[Databricks Lakebase](https://docs.databricks.com/aws/en/oltp/) database projects have two possible permissions: `CAN_USE` and `CAN_MANAGE`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst eng = new databricks.Group(\"eng\", {displayName: \"Engineering\"});\nconst dbProjectUsage = new databricks.Permissions(\"db_project_usage\", {\n    databaseProjectName: \"my_project\",\n    accessControls: [\n        {\n            groupName: \"users\",\n            permissionLevel: \"CAN_USE\",\n        },\n        {\n            groupName: eng.displayName,\n            permissionLevel: \"CAN_MANAGE\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\neng = databricks.Group(\"eng\", display_name=\"Engineering\")\ndb_project_usage = databricks.Permissions(\"db_project_usage\",\n    database_project_name=\"my_project\",\n    access_controls=[\n        {\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_USE\",\n        },\n        {\n            \"group_name\": eng.display_name,\n            \"permission_level\": \"CAN_MANAGE\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var eng = new Databricks.Group(\"eng\", new()\n    {\n        DisplayName = \"Engineering\",\n    });\n\n    var dbProjectUsage = new Databricks.Permissions(\"db_project_usage\", new()\n    {\n        DatabaseProjectName = \"my_project\",\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_USE\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = eng.DisplayName,\n                PermissionLevel = \"CAN_MANAGE\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\teng, err := databricks.NewGroup(ctx, \"eng\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"Engineering\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPermissions(ctx, \"db_project_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tDatabaseProjectName: pulumi.String(\"my_project\"),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_USE\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       eng.DisplayName,\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport 
com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var eng = new Group(\"eng\", GroupArgs.builder()\n            .displayName(\"Engineering\")\n            .build());\n\n        var dbProjectUsage = new Permissions(\"dbProjectUsage\", PermissionsArgs.builder()\n            .databaseProjectName(\"my_project\")\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(\"users\")\n                    .permissionLevel(\"CAN_USE\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(eng.displayName())\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  eng:\n    type: databricks:Group\n    properties:\n      displayName: Engineering\n  dbProjectUsage:\n    type: databricks:Permissions\n    name: db_project_usage\n    properties:\n      databaseProjectName: my_project\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_USE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_MANAGE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Instance Profiles\n\nInstance Profiles are not managed by General Permissions API and therefore\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eshould be used to allow usage of specific AWS EC2 IAM roles to users or groups.\n\n## Secrets\n\nOne can control access to\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003ethrough \u003cspan pulumi-lang-nodejs=\"`initialManagePrincipal`\" pulumi-lang-dotnet=\"`InitialManagePrincipal`\" pulumi-lang-go=\"`initialManagePrincipal`\" pulumi-lang-python=\"`initial_manage_principal`\" pulumi-lang-yaml=\"`initialManagePrincipal`\" pulumi-lang-java=\"`initialManagePrincipal`\"\u003e`initial_manage_principal`\u003c/span\u003e argument on\u003cspan pulumi-lang-nodejs=\" databricks.SecretScope \" pulumi-lang-dotnet=\" databricks.SecretScope \" pulumi-lang-go=\" SecretScope \" pulumi-lang-python=\" SecretScope \" pulumi-lang-yaml=\" databricks.SecretScope \" pulumi-lang-java=\" databricks.SecretScope \"\u003e databricks.SecretScope \u003c/span\u003eor databricks_secret_acl, so that users (or service principals) can 
`READ`, `WRITE` or `MANAGE` entries within secret scope.\n\n## Tables, Views and Databases\n\nGeneral Permissions API does not apply to access control for tables and they have to be managed separately using the\u003cspan pulumi-lang-nodejs=\" databricks.SqlPermissions \" pulumi-lang-dotnet=\" databricks.SqlPermissions \" pulumi-lang-go=\" SqlPermissions \" pulumi-lang-python=\" SqlPermissions \" pulumi-lang-yaml=\" databricks.SqlPermissions \" pulumi-lang-java=\" databricks.SqlPermissions \"\u003e databricks.SqlPermissions \u003c/span\u003eresource, though you're encouraged to use Unity Catalog or migrate to it.\n\n## Data Access with Unity Catalog\n\nInitially in Unity Catalog all users have no access to data, which has to be later assigned through\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricks.Grant \" pulumi-lang-dotnet=\" databricks.Grant \" pulumi-lang-go=\" Grant \" pulumi-lang-python=\" Grant \" pulumi-lang-yaml=\" databricks.Grant \" pulumi-lang-java=\" databricks.Grant \"\u003e databricks.Grant \u003c/span\u003eresource.\n\n## Import\n\nThe resource permissions can be imported using the object id\n\n```sh\nterraform import databricks_permissions \u003cobject type\u003e/\u003cobject id\u003e\n```\n\n","properties":{"accessControls":{"type":"array","items":{"$ref":"#/types/databricks:index/PermissionsAccessControl:PermissionsAccessControl"}},"alertV2Id":{"type":"string"},"appName":{"type":"string"},"authorization":{"type":"string"},"clusterId":{"type":"string"},"clusterPolicyId":{"type":"string"},"dashboardId":{"type":"string"},"databaseInstanceName":{"type":"string"},"databaseProjectName":{"type":"string"},"directoryId":{"type":"string"},"directoryPath":{"type":"string"},"experimentId":{"type":"string"},"instancePoolId":{"type":"string"},"jobId":{"type":"string"},"notebookId":{"type":"string"},"notebookPath":{"type":"string"},"objectType":{"type":"string","description":"type of 
permissions.\n"},"pipelineId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/PermissionsProviderConfig:PermissionsProviderConfig"},"registeredModelId":{"type":"string"},"repoId":{"type":"string"},"repoPath":{"type":"string"},"servingEndpointId":{"type":"string"},"sqlAlertId":{"type":"string"},"sqlDashboardId":{"type":"string"},"sqlEndpointId":{"type":"string"},"sqlQueryId":{"type":"string"},"vectorSearchEndpointId":{"type":"string"},"workspaceFileId":{"type":"string"},"workspaceFilePath":{"type":"string"}},"required":["accessControls","objectType"],"inputProperties":{"accessControls":{"type":"array","items":{"$ref":"#/types/databricks:index/PermissionsAccessControl:PermissionsAccessControl"}},"alertV2Id":{"type":"string","willReplaceOnChanges":true},"appName":{"type":"string","willReplaceOnChanges":true},"authorization":{"type":"string","willReplaceOnChanges":true},"clusterId":{"type":"string","willReplaceOnChanges":true},"clusterPolicyId":{"type":"string","willReplaceOnChanges":true},"dashboardId":{"type":"string","willReplaceOnChanges":true},"databaseInstanceName":{"type":"string","willReplaceOnChanges":true},"databaseProjectName":{"type":"string","willReplaceOnChanges":true},"directoryId":{"type":"string","willReplaceOnChanges":true},"directoryPath":{"type":"string","willReplaceOnChanges":true},"experimentId":{"type":"string","willReplaceOnChanges":true},"instancePoolId":{"type":"string","willReplaceOnChanges":true},"jobId":{"type":"string","willReplaceOnChanges":true},"notebookId":{"type":"string","willReplaceOnChanges":true},"notebookPath":{"type":"string","willReplaceOnChanges":true},"objectType":{"type":"string","description":"type of permissions.\n"},"pipelineId":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/PermissionsProviderConfig:PermissionsProviderConfig"},"registeredModelId":{"type":"string","willReplaceOnChanges":true},"repoId":{"type":"string","willReplaceOnChanges":true},"repoPath":{"type":"string","willReplaceOnChanges":true},"servingEndpointId":{"type":"string","willReplaceOnChanges":true},"sqlAlertId":{"type":"string","willReplaceOnChanges":true},"sqlDashboardId":{"type":"string","willReplaceOnChanges":true},"sqlEndpointId":{"type":"string","willReplaceOnChanges":true},"sqlQueryId":{"type":"string","willReplaceOnChanges":true},"vectorSearchEndpointId":{"type":"string","willReplaceOnChanges":true},"workspaceFileId":{"type":"string","willReplaceOnChanges":true},"workspaceFilePath":{"type":"string","willReplaceOnChanges":true}},"requiredInputs":["accessControls"],"stateInputs":{"description":"Input properties used for looking up and filtering Permissions 
resources.\n","properties":{"accessControls":{"type":"array","items":{"$ref":"#/types/databricks:index/PermissionsAccessControl:PermissionsAccessControl"}},"alertV2Id":{"type":"string","willReplaceOnChanges":true},"appName":{"type":"string","willReplaceOnChanges":true},"authorization":{"type":"string","willReplaceOnChanges":true},"clusterId":{"type":"string","willReplaceOnChanges":true},"clusterPolicyId":{"type":"string","willReplaceOnChanges":true},"dashboardId":{"type":"string","willReplaceOnChanges":true},"databaseInstanceName":{"type":"string","willReplaceOnChanges":true},"databaseProjectName":{"type":"string","willReplaceOnChanges":true},"directoryId":{"type":"string","willReplaceOnChanges":true},"directoryPath":{"type":"string","willReplaceOnChanges":true},"experimentId":{"type":"string","willReplaceOnChanges":true},"instancePoolId":{"type":"string","willReplaceOnChanges":true},"jobId":{"type":"string","willReplaceOnChanges":true},"notebookId":{"type":"string","willReplaceOnChanges":true},"notebookPath":{"type":"string","willReplaceOnChanges":true},"objectType":{"type":"string","description":"type of permissions.\n"},"pipelineId":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/PermissionsProviderConfig:PermissionsProviderConfig"},"registeredModelId":{"type":"string","willReplaceOnChanges":true},"repoId":{"type":"string","willReplaceOnChanges":true},"repoPath":{"type":"string","willReplaceOnChanges":true},"servingEndpointId":{"type":"string","willReplaceOnChanges":true},"sqlAlertId":{"type":"string","willReplaceOnChanges":true},"sqlDashboardId":{"type":"string","willReplaceOnChanges":true},"sqlEndpointId":{"type":"string","willReplaceOnChanges":true},"sqlQueryId":{"type":"string","willReplaceOnChanges":true},"vectorSearchEndpointId":{"type":"string","willReplaceOnChanges":true},"workspaceFileId":{"type":"string","willReplaceOnChanges":true},"workspaceFilePath":{"type":"string","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/pipeline:Pipeline":{"description":"Use \u003cspan pulumi-lang-nodejs=\"`databricks.Pipeline`\" pulumi-lang-dotnet=\"`databricks.Pipeline`\" pulumi-lang-go=\"`Pipeline`\" pulumi-lang-python=\"`Pipeline`\" pulumi-lang-yaml=\"`databricks.Pipeline`\" pulumi-lang-java=\"`databricks.Pipeline`\"\u003e`databricks.Pipeline`\u003c/span\u003e to deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ldpDemo = new databricks.Notebook(\"ldp_demo\", {});\nconst ldpDemoRepo = new databricks.Repo(\"ldp_demo\", {});\nconst _this = new databricks.Pipeline(\"this\", {\n    name: \"Pipeline Name\",\n    catalog: \"main\",\n    schema: \"ldp_demo\",\n    configuration: {\n        key1: \"value1\",\n        key2: \"value2\",\n    },\n    clusters: [\n        {\n            label: \"default\",\n            numWorkers: 2,\n            customTags: {\n                cluster_type: \"default\",\n            },\n        },\n        {\n            label: \"maintenance\",\n            numWorkers: 1,\n            customTags: {\n                cluster_type: \"maintenance\",\n            },\n        },\n    ],\n    libraries: [\n        {\n            notebook: {\n                path: ldpDemo.id,\n            },\n        },\n        {\n            
file: {\n                path: pulumi.interpolate`${ldpDemoRepo.path}/pipeline.sql`,\n            },\n        },\n        {\n            glob: {\n                include: pulumi.interpolate`${ldpDemoRepo.path}/subfolder/**`,\n            },\n        },\n    ],\n    continuous: false,\n    notifications: [{\n        emailRecipients: [\n            \"user@domain.com\",\n            \"user1@domain.com\",\n        ],\n        alerts: [\n            \"on-update-failure\",\n            \"on-update-fatal-failure\",\n            \"on-update-success\",\n            \"on-flow-failure\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nldp_demo = databricks.Notebook(\"ldp_demo\")\nldp_demo_repo = databricks.Repo(\"ldp_demo\")\nthis = databricks.Pipeline(\"this\",\n    name=\"Pipeline Name\",\n    catalog=\"main\",\n    schema=\"ldp_demo\",\n    configuration={\n        \"key1\": \"value1\",\n        \"key2\": \"value2\",\n    },\n    clusters=[\n        {\n            \"label\": \"default\",\n            \"num_workers\": 2,\n            \"custom_tags\": {\n                \"cluster_type\": \"default\",\n            },\n        },\n        {\n            \"label\": \"maintenance\",\n            \"num_workers\": 1,\n            \"custom_tags\": {\n                \"cluster_type\": \"maintenance\",\n            },\n        },\n    ],\n    libraries=[\n        {\n            \"notebook\": {\n                \"path\": ldp_demo.id,\n            },\n        },\n        {\n            \"file\": {\n                \"path\": ldp_demo_repo.path.apply(lambda path: f\"{path}/pipeline.sql\"),\n            },\n        },\n        {\n            \"glob\": {\n                \"include\": ldp_demo_repo.path.apply(lambda path: f\"{path}/subfolder/**\"),\n            },\n        },\n    ],\n    continuous=False,\n    notifications=[{\n        \"email_recipients\": [\n            \"user@domain.com\",\n            \"user1@domain.com\",\n        ],\n        \"alerts\": [\n            \"on-update-failure\",\n            \"on-update-fatal-failure\",\n            \"on-update-success\",\n            \"on-flow-failure\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ldpDemo = new Databricks.Notebook(\"ldp_demo\");\n\n    var ldpDemoRepo = new Databricks.Repo(\"ldp_demo\");\n\n    var @this = new Databricks.Pipeline(\"this\", new()\n    {\n        Name = \"Pipeline Name\",\n        Catalog = \"main\",\n        Schema = \"ldp_demo\",\n        Configuration = \n        {\n            { \"key1\", \"value1\" },\n            { \"key2\", \"value2\" },\n        },\n        Clusters = new[]\n        {\n            new Databricks.Inputs.PipelineClusterArgs\n            {\n                Label = \"default\",\n                NumWorkers = 2,\n                CustomTags = \n                {\n                    { \"cluster_type\", \"default\" },\n                },\n            },\n            new Databricks.Inputs.PipelineClusterArgs\n            {\n                Label = \"maintenance\",\n                NumWorkers = 1,\n                CustomTags = \n                {\n                    { \"cluster_type\", \"maintenance\" },\n                },\n            },\n        },\n        Libraries = new[]\n        {\n            new Databricks.Inputs.PipelineLibraryArgs\n            {\n                Notebook = new 
Databricks.Inputs.PipelineLibraryNotebookArgs\n                {\n                    Path = ldpDemo.Id,\n                },\n            },\n            new Databricks.Inputs.PipelineLibraryArgs\n            {\n                File = new Databricks.Inputs.PipelineLibraryFileArgs\n                {\n                    Path = ldpDemoRepo.Path.Apply(path =\u003e $\"{path}/pipeline.sql\"),\n                },\n            },\n            new Databricks.Inputs.PipelineLibraryArgs\n            {\n                Glob = new Databricks.Inputs.PipelineLibraryGlobArgs\n                {\n                    Include = ldpDemoRepo.Path.Apply(path =\u003e $\"{path}/subfolder/**\"),\n                },\n            },\n        },\n        Continuous = false,\n        Notifications = new[]\n        {\n            new Databricks.Inputs.PipelineNotificationArgs\n            {\n                EmailRecipients = new[]\n                {\n                    \"user@domain.com\",\n                    \"user1@domain.com\",\n                },\n                Alerts = new[]\n                {\n                    \"on-update-failure\",\n                    \"on-update-fatal-failure\",\n                    \"on-update-success\",\n                    \"on-flow-failure\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tldpDemo, err := databricks.NewNotebook(ctx, \"ldp_demo\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tldpDemoRepo, err := databricks.NewRepo(ctx, \"ldp_demo\", nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPipeline(ctx, \"this\", \u0026databricks.PipelineArgs{\n\t\t\tName:    pulumi.String(\"Pipeline Name\"),\n\t\t\tCatalog: pulumi.String(\"main\"),\n\t\t\tSchema:  pulumi.String(\"ldp_demo\"),\n\t\t\tConfiguration: pulumi.StringMap{\n\t\t\t\t\"key1\": pulumi.String(\"value1\"),\n\t\t\t\t\"key2\": pulumi.String(\"value2\"),\n\t\t\t},\n\t\t\tClusters: databricks.PipelineClusterArray{\n\t\t\t\t\u0026databricks.PipelineClusterArgs{\n\t\t\t\t\tLabel:      pulumi.String(\"default\"),\n\t\t\t\t\tNumWorkers: pulumi.Int(2),\n\t\t\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\t\t\"cluster_type\": pulumi.String(\"default\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PipelineClusterArgs{\n\t\t\t\t\tLabel:      pulumi.String(\"maintenance\"),\n\t\t\t\t\tNumWorkers: pulumi.Int(1),\n\t\t\t\t\tCustomTags: pulumi.StringMap{\n\t\t\t\t\t\t\"cluster_type\": pulumi.String(\"maintenance\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tLibraries: databricks.PipelineLibraryArray{\n\t\t\t\t\u0026databricks.PipelineLibraryArgs{\n\t\t\t\t\tNotebook: \u0026databricks.PipelineLibraryNotebookArgs{\n\t\t\t\t\t\tPath: ldpDemo.ID(),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PipelineLibraryArgs{\n\t\t\t\t\tFile: \u0026databricks.PipelineLibraryFileArgs{\n\t\t\t\t\t\tPath: ldpDemoRepo.Path.ApplyT(func(path string) (string, error) {\n\t\t\t\t\t\t\treturn fmt.Sprintf(\"%v/pipeline.sql\", path), nil\n\t\t\t\t\t\t}).(pulumi.StringOutput),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PipelineLibraryArgs{\n\t\t\t\t\tGlob: \u0026databricks.PipelineLibraryGlobArgs{\n\t\t\t\t\t\tInclude: ldpDemoRepo.Path.ApplyT(func(path string) (string, error) {\n\t\t\t\t\t\t\treturn fmt.Sprintf(\"%v/subfolder/**\", path), 
nil\n\t\t\t\t\t\t}).(pulumi.StringOutput),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tContinuous: pulumi.Bool(false),\n\t\t\tNotifications: databricks.PipelineNotificationArray{\n\t\t\t\t\u0026databricks.PipelineNotificationArgs{\n\t\t\t\t\tEmailRecipients: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"user@domain.com\"),\n\t\t\t\t\t\tpulumi.String(\"user1@domain.com\"),\n\t\t\t\t\t},\n\t\t\t\t\tAlerts: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"on-update-failure\"),\n\t\t\t\t\t\tpulumi.String(\"on-update-fatal-failure\"),\n\t\t\t\t\t\tpulumi.String(\"on-update-success\"),\n\t\t\t\t\t\tpulumi.String(\"on-flow-failure\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Notebook;\nimport com.pulumi.databricks.Repo;\nimport com.pulumi.databricks.Pipeline;\nimport com.pulumi.databricks.PipelineArgs;\nimport com.pulumi.databricks.inputs.PipelineClusterArgs;\nimport com.pulumi.databricks.inputs.PipelineLibraryArgs;\nimport com.pulumi.databricks.inputs.PipelineLibraryNotebookArgs;\nimport com.pulumi.databricks.inputs.PipelineLibraryFileArgs;\nimport com.pulumi.databricks.inputs.PipelineLibraryGlobArgs;\nimport com.pulumi.databricks.inputs.PipelineNotificationArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ldpDemo = new Notebook(\"ldpDemo\");\n\n        var ldpDemoRepo = new Repo(\"ldpDemoRepo\");\n\n        var this_ = new Pipeline(\"this\", PipelineArgs.builder()\n            .name(\"Pipeline Name\")\n            .catalog(\"main\")\n            .schema(\"ldp_demo\")\n            .configuration(Map.ofEntries(\n                Map.entry(\"key1\", \"value1\"),\n                Map.entry(\"key2\", \"value2\")\n            ))\n            .clusters(            \n                PipelineClusterArgs.builder()\n                    .label(\"default\")\n                    .numWorkers(2)\n                    .customTags(Map.of(\"cluster_type\", \"default\"))\n                    .build(),\n                PipelineClusterArgs.builder()\n                    .label(\"maintenance\")\n                    .numWorkers(1)\n                    .customTags(Map.of(\"cluster_type\", \"maintenance\"))\n                    .build())\n            .libraries(            \n                PipelineLibraryArgs.builder()\n                    .notebook(PipelineLibraryNotebookArgs.builder()\n                        .path(ldpDemo.id())\n                        .build())\n                    .build(),\n                PipelineLibraryArgs.builder()\n                    .file(PipelineLibraryFileArgs.builder()\n                        .path(ldpDemoRepo.path().applyValue(_path -\u003e String.format(\"%s/pipeline.sql\", _path)))\n                        .build())\n                    .build(),\n                PipelineLibraryArgs.builder()\n                    .glob(PipelineLibraryGlobArgs.builder()\n                        .include(ldpDemoRepo.path().applyValue(_path -\u003e String.format(\"%s/subfolder/**\", _path)))\n                        .build())\n                    .build())\n            .continuous(false)\n         
   .notifications(PipelineNotificationArgs.builder()\n                .emailRecipients(                \n                    \"user@domain.com\",\n                    \"user1@domain.com\")\n                .alerts(                \n                    \"on-update-failure\",\n                    \"on-update-fatal-failure\",\n                    \"on-update-success\",\n                    \"on-flow-failure\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ldpDemo:\n    type: databricks:Notebook\n    name: ldp_demo\n  ldpDemoRepo:\n    type: databricks:Repo\n    name: ldp_demo\n  this:\n    type: databricks:Pipeline\n    properties:\n      name: Pipeline Name\n      catalog: main\n      schema: ldp_demo\n      configuration:\n        key1: value1\n        key2: value2\n      clusters:\n        - label: default\n          numWorkers: 2\n          customTags:\n            cluster_type: default\n        - label: maintenance\n          numWorkers: 1\n          customTags:\n            cluster_type: maintenance\n      libraries:\n        - notebook:\n            path: ${ldpDemo.id}\n        - file:\n            path: ${ldpDemoRepo.path}/pipeline.sql\n        - glob:\n            include: ${ldpDemoRepo.path}/subfolder/**\n      continuous: false\n      notifications:\n        - emailRecipients:\n            - user@domain.com\n            - user1@domain.com\n          alerts:\n            - on-update-failure\n            - on-update-fatal-failure\n            - on-update-success\n            - on-flow-failure\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getPipelines \" pulumi-lang-dotnet=\" databricks.getPipelines \" pulumi-lang-go=\" getPipelines \" pulumi-lang-python=\" get_pipelines \" pulumi-lang-yaml=\" databricks.getPipelines \" pulumi-lang-java=\" databricks.getPipelines \"\u003e databricks.getPipelines \u003c/span\u003eto retrieve [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt) data.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n\n","properties":{"allowDuplicateNames":{"type":"boolean","description":"Optional boolean flag. If false, deployment will fail if name conflicts with that of another pipeline. 
default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"budgetPolicyId":{"type":"string","description":"optional string specifying ID of the budget policy for this Lakeflow Declarative Pipeline.\n"},"catalog":{"type":"string","description":"The name of default catalog in Unity Catalog. *Change of this parameter forces recreation of the pipeline if you switch from \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e or vice versa.  If pipeline was already created with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e set, the value could be changed.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e).\n"},"cause":{"type":"string"},"channel":{"type":"string","description":"optional name of the release channel for Spark version used by Lakeflow Declarative Pipeline.  Supported values are: `CURRENT` (default) and `PREVIEW`.\n"},"clusterId":{"type":"string"},"clusters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineCluster:PipelineCluster"},"description":"blocks - Clusters to run the pipeline. If none is specified, pipelines will automatically select a default cluster configuration for the pipeline. *Please note that Lakeflow Declarative Pipeline clusters are supporting only subset of attributes as described in [documentation](https://docs.databricks.com/api/workspace/pipelines/create#clusters).*  Also, note that \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e block is extended with the \u003cspan pulumi-lang-nodejs=\"`mode`\" pulumi-lang-dotnet=\"`Mode`\" pulumi-lang-go=\"`mode`\" pulumi-lang-python=\"`mode`\" pulumi-lang-yaml=\"`mode`\" pulumi-lang-java=\"`mode`\"\u003e`mode`\u003c/span\u003e parameter that controls the autoscaling algorithm (possible values are `ENHANCED` for new, enhanced autoscaling algorithm, or `LEGACY` for old algorithm).\n"},"configuration":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional list of values to apply to the entire pipeline. Elements must be formatted as key:value pairs.\n"},"continuous":{"type":"boolean","description":"A flag indicating whether to run the pipeline continuously. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"creatorUserName":{"type":"string"},"deployment":{"$ref":"#/types/databricks:index/PipelineDeployment:PipelineDeployment","description":"Deployment type of this pipeline. Supports following attributes:\n"},"development":{"type":"boolean","description":"A flag indicating whether to run the pipeline in development mode. The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"edition":{"type":"string","description":"optional name of the [product edition](https://docs.databricks.com/aws/en/dlt/configure-pipeline#choose-a-product-edition). Supported values are: `CORE`, `PRO`, `ADVANCED` (default).  Not required when \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e is set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"environment":{"$ref":"#/types/databricks:index/PipelineEnvironment:PipelineEnvironment"},"eventLog":{"$ref":"#/types/databricks:index/PipelineEventLog:PipelineEventLog","description":"an optional block specifying a table where LDP Event Log will be stored.  Consists of the following fields:\n"},"expectedLastModified":{"type":"integer"},"filters":{"$ref":"#/types/databricks:index/PipelineFilters:PipelineFilters","description":"Filters on which Pipeline packages to include in the deployed graph.  This block consists of following attributes:\n"},"gatewayDefinition":{"$ref":"#/types/databricks:index/PipelineGatewayDefinition:PipelineGatewayDefinition","description":"The definition of a gateway pipeline to support CDC. Consists of following attributes:\n"},"health":{"type":"string"},"ingestionDefinition":{"$ref":"#/types/databricks:index/PipelineIngestionDefinition:PipelineIngestionDefinition"},"lastModified":{"type":"integer"},"latestUpdates":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLatestUpdate:PipelineLatestUpdate"}},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLibrary:PipelineLibrary"},"description":"blocks - Specifies pipeline code.\n"},"name":{"type":"string","description":"A user-friendly name for this pipeline. The name can be used to identify pipeline jobs in the UI.\n"},"notifications":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineNotification:PipelineNotification"}},"photon":{"type":"boolean","description":"A flag indicating whether to use Photon engine. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"providerConfig":{"$ref":"#/types/databricks:index/PipelineProviderConfig:PipelineProviderConfig"},"restartWindow":{"$ref":"#/types/databricks:index/PipelineRestartWindow:PipelineRestartWindow"},"rootPath":{"type":"string","description":"An optional string specifying the root path for this pipeline. This is used as the root directory when editing the pipeline in the Databricks user interface and it is added to `sys.path` when executing Python sources during pipeline execution.\n"},"runAs":{"$ref":"#/types/databricks:index/PipelineRunAs:PipelineRunAs","description":"The user or the service principal the pipeline runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runAsUserName":{"type":"string"},"schema":{"type":"string","description":"The default schema (database) where tables are read from or published to. The presence of this attribute implies that the pipeline is in direct publishing mode.\n"},"serverless":{"type":"boolean","description":"An optional flag indicating if serverless compute should be used for this Lakeflow Declarative Pipeline.  Requires \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e to be set, as it could be used only with Unity Catalog.\n"},"state":{"type":"string"},"storage":{"type":"string","description":"A location on cloud storage where output data and metadata required for pipeline execution are stored. By default, tables are stored in a subdirectory of this location. *Change of this parameter forces recreation of the pipeline.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e).\n"},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of tags associated with the pipeline. These are forwarded to the cluster as cluster tags, and are therefore subject to the same limitations. A maximum of 25 tags can be added to the pipeline.\n"},"target":{"type":"string","description":"The name of a database (in either the Hive metastore or in a UC catalog) for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.\n"},"trigger":{"$ref":"#/types/databricks:index/PipelineTrigger:PipelineTrigger"},"url":{"type":"string","description":"URL of the Lakeflow Declarative Pipeline on the given workspace.\n"},"usagePolicyId":{"type":"string"}},"required":["cause","clusterId","creatorUserName","health","lastModified","latestUpdates","name","runAsUserName","state","url"],"inputProperties":{"allowDuplicateNames":{"type":"boolean","description":"Optional boolean flag. If false, deployment will fail if name conflicts with that of another pipeline. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"budgetPolicyId":{"type":"string","description":"Optional string specifying the ID of the budget policy for this Lakeflow Declarative Pipeline.\n"},"catalog":{"type":"string","description":"The name of the default catalog in Unity Catalog. *Change of this parameter forces recreation of the pipeline if you switch from \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e or vice versa. If the pipeline was already created with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e set, the value can be changed.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e).\n"},"cause":{"type":"string"},"channel":{"type":"string","description":"Optional name of the release channel for the Spark version used by the Lakeflow Declarative Pipeline. Supported values are: `CURRENT` (default) and `PREVIEW`.\n"},"clusterId":{"type":"string"},"clusters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineCluster:PipelineCluster"},"description":"Blocks defining the clusters that run the pipeline. If none is specified, pipelines will automatically select a default cluster configuration for the pipeline. *Please note that Lakeflow Declarative Pipeline clusters support only a subset of attributes, as described in the [documentation](https://docs.databricks.com/api/workspace/pipelines/create#clusters).* Also, note that the \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e block is extended with the \u003cspan pulumi-lang-nodejs=\"`mode`\" pulumi-lang-dotnet=\"`Mode`\" pulumi-lang-go=\"`mode`\" pulumi-lang-python=\"`mode`\" pulumi-lang-yaml=\"`mode`\" pulumi-lang-java=\"`mode`\"\u003e`mode`\u003c/span\u003e parameter that controls the autoscaling algorithm (possible values are `ENHANCED` for the new, enhanced autoscaling algorithm, or `LEGACY` for the old algorithm).\n"},"configuration":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional map of values to apply to the entire pipeline. Elements must be formatted as key:value pairs.\n"},"continuous":{"type":"boolean","description":"A flag indicating whether to run the pipeline continuously. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"creatorUserName":{"type":"string"},"deployment":{"$ref":"#/types/databricks:index/PipelineDeployment:PipelineDeployment","description":"Deployment type of this pipeline. Supports the following attributes:\n"},"development":{"type":"boolean","description":"A flag indicating whether to run the pipeline in development mode. The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"edition":{"type":"string","description":"Optional name of the [product edition](https://docs.databricks.com/aws/en/dlt/configure-pipeline#choose-a-product-edition). Supported values are: `CORE`, `PRO`, `ADVANCED` (default). Not required when \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e is set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"environment":{"$ref":"#/types/databricks:index/PipelineEnvironment:PipelineEnvironment"},"eventLog":{"$ref":"#/types/databricks:index/PipelineEventLog:PipelineEventLog","description":"An optional block specifying a table where the LDP Event Log will be stored. Consists of the following fields:\n"},"expectedLastModified":{"type":"integer"},"filters":{"$ref":"#/types/databricks:index/PipelineFilters:PipelineFilters","description":"Filters on which Pipeline packages to include in the deployed graph. This block consists of the following attributes:\n"},"gatewayDefinition":{"$ref":"#/types/databricks:index/PipelineGatewayDefinition:PipelineGatewayDefinition","description":"The definition of a gateway pipeline to support CDC. Consists of the following attributes:\n"},"health":{"type":"string"},"ingestionDefinition":{"$ref":"#/types/databricks:index/PipelineIngestionDefinition:PipelineIngestionDefinition"},"lastModified":{"type":"integer"},"latestUpdates":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLatestUpdate:PipelineLatestUpdate"}},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLibrary:PipelineLibrary"},"description":"Blocks specifying the pipeline code.\n"},"name":{"type":"string","description":"A user-friendly name for this pipeline. The name can be used to identify pipeline jobs in the UI.\n"},"notifications":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineNotification:PipelineNotification"}},"photon":{"type":"boolean","description":"A flag indicating whether to use the Photon engine. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"providerConfig":{"$ref":"#/types/databricks:index/PipelineProviderConfig:PipelineProviderConfig"},"restartWindow":{"$ref":"#/types/databricks:index/PipelineRestartWindow:PipelineRestartWindow"},"rootPath":{"type":"string","description":"An optional string specifying the root path for this pipeline. This is used as the root directory when editing the pipeline in the Databricks user interface and it is added to `sys.path` when executing Python sources during pipeline execution.\n"},"runAs":{"$ref":"#/types/databricks:index/PipelineRunAs:PipelineRunAs","description":"The user or the service principal the pipeline runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runAsUserName":{"type":"string"},"schema":{"type":"string","description":"The default schema (database) where tables are read from or published to. The presence of this attribute implies that the pipeline is in direct publishing mode.\n"},"serverless":{"type":"boolean","description":"An optional flag indicating if serverless compute should be used for this Lakeflow Declarative Pipeline.  Requires \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e to be set, as it could be used only with Unity Catalog.\n"},"state":{"type":"string"},"storage":{"type":"string","description":"A location on cloud storage where output data and metadata required for pipeline execution are stored. By default, tables are stored in a subdirectory of this location. *Change of this parameter forces recreation of the pipeline.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e).\n","willReplaceOnChanges":true},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of tags associated with the pipeline. These are forwarded to the cluster as cluster tags, and are therefore subject to the same limitations. A maximum of 25 tags can be added to the pipeline.\n"},"target":{"type":"string","description":"The name of a database (in either the Hive metastore or in a UC catalog) for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.\n"},"trigger":{"$ref":"#/types/databricks:index/PipelineTrigger:PipelineTrigger"},"url":{"type":"string","description":"URL of the Lakeflow Declarative Pipeline on the given workspace.\n"},"usagePolicyId":{"type":"string"}},"stateInputs":{"description":"Input properties used for looking up and filtering Pipeline resources.\n","properties":{"allowDuplicateNames":{"type":"boolean","description":"Optional boolean flag. If false, deployment will fail if name conflicts with that of another pipeline. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"budgetPolicyId":{"type":"string","description":"Optional string specifying the ID of the budget policy for this Lakeflow Declarative Pipeline.\n"},"catalog":{"type":"string","description":"The name of the default catalog in Unity Catalog. *Change of this parameter forces recreation of the pipeline if you switch from \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e or vice versa. If the pipeline was already created with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e set, the value can be changed.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`storage`\" pulumi-lang-dotnet=\"`Storage`\" pulumi-lang-go=\"`storage`\" pulumi-lang-python=\"`storage`\" pulumi-lang-yaml=\"`storage`\" pulumi-lang-java=\"`storage`\"\u003e`storage`\u003c/span\u003e).\n"},"cause":{"type":"string"},"channel":{"type":"string","description":"Optional name of the release channel for the Spark version used by the Lakeflow Declarative Pipeline. Supported values are: `CURRENT` (default) and `PREVIEW`.\n"},"clusterId":{"type":"string"},"clusters":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineCluster:PipelineCluster"},"description":"Blocks defining the clusters that run the pipeline. If none is specified, pipelines will automatically select a default cluster configuration for the pipeline. *Please note that Lakeflow Declarative Pipeline clusters support only a subset of attributes, as described in the [documentation](https://docs.databricks.com/api/workspace/pipelines/create#clusters).* Also, note that the \u003cspan pulumi-lang-nodejs=\"`autoscale`\" pulumi-lang-dotnet=\"`Autoscale`\" pulumi-lang-go=\"`autoscale`\" pulumi-lang-python=\"`autoscale`\" pulumi-lang-yaml=\"`autoscale`\" pulumi-lang-java=\"`autoscale`\"\u003e`autoscale`\u003c/span\u003e block is extended with the \u003cspan pulumi-lang-nodejs=\"`mode`\" pulumi-lang-dotnet=\"`Mode`\" pulumi-lang-go=\"`mode`\" pulumi-lang-python=\"`mode`\" pulumi-lang-yaml=\"`mode`\" pulumi-lang-java=\"`mode`\"\u003e`mode`\u003c/span\u003e parameter that controls the autoscaling algorithm (possible values are `ENHANCED` for the new, enhanced autoscaling algorithm, or `LEGACY` for the old algorithm).\n"},"configuration":{"type":"object","additionalProperties":{"type":"string"},"description":"An optional map of values to apply to the entire pipeline. Elements must be formatted as key:value pairs.\n"},"continuous":{"type":"boolean","description":"A flag indicating whether to run the pipeline continuously. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"creatorUserName":{"type":"string"},"deployment":{"$ref":"#/types/databricks:index/PipelineDeployment:PipelineDeployment","description":"Deployment type of this pipeline. Supports the following attributes:\n"},"development":{"type":"boolean","description":"A flag indicating whether to run the pipeline in development mode. The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"edition":{"type":"string","description":"Optional name of the [product edition](https://docs.databricks.com/aws/en/dlt/configure-pipeline#choose-a-product-edition). Supported values are: `CORE`, `PRO`, `ADVANCED` (default). Not required when \u003cspan pulumi-lang-nodejs=\"`serverless`\" pulumi-lang-dotnet=\"`Serverless`\" pulumi-lang-go=\"`serverless`\" pulumi-lang-python=\"`serverless`\" pulumi-lang-yaml=\"`serverless`\" pulumi-lang-java=\"`serverless`\"\u003e`serverless`\u003c/span\u003e is set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e.\n"},"environment":{"$ref":"#/types/databricks:index/PipelineEnvironment:PipelineEnvironment"},"eventLog":{"$ref":"#/types/databricks:index/PipelineEventLog:PipelineEventLog","description":"An optional block specifying a table where the LDP Event Log will be stored. Consists of the following fields:\n"},"expectedLastModified":{"type":"integer"},"filters":{"$ref":"#/types/databricks:index/PipelineFilters:PipelineFilters","description":"Filters on which Pipeline packages to include in the deployed graph. This block consists of the following attributes:\n"},"gatewayDefinition":{"$ref":"#/types/databricks:index/PipelineGatewayDefinition:PipelineGatewayDefinition","description":"The definition of a gateway pipeline to support CDC. Consists of the following attributes:\n"},"health":{"type":"string"},"ingestionDefinition":{"$ref":"#/types/databricks:index/PipelineIngestionDefinition:PipelineIngestionDefinition"},"lastModified":{"type":"integer"},"latestUpdates":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLatestUpdate:PipelineLatestUpdate"}},"libraries":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineLibrary:PipelineLibrary"},"description":"Blocks specifying the pipeline code.\n"},"name":{"type":"string","description":"A user-friendly name for this pipeline. The name can be used to identify pipeline jobs in the UI.\n"},"notifications":{"type":"array","items":{"$ref":"#/types/databricks:index/PipelineNotification:PipelineNotification"}},"photon":{"type":"boolean","description":"A flag indicating whether to use the Photon engine. 
The default value is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"providerConfig":{"$ref":"#/types/databricks:index/PipelineProviderConfig:PipelineProviderConfig"},"restartWindow":{"$ref":"#/types/databricks:index/PipelineRestartWindow:PipelineRestartWindow"},"rootPath":{"type":"string","description":"An optional string specifying the root path for this pipeline. This is used as the root directory when editing the pipeline in the Databricks user interface and it is added to `sys.path` when executing Python sources during pipeline execution.\n"},"runAs":{"$ref":"#/types/databricks:index/PipelineRunAs:PipelineRunAs","description":"The user or the service principal the pipeline runs as. See\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003eConfiguration Block below.\n"},"runAsUserName":{"type":"string"},"schema":{"type":"string","description":"The default schema (database) where tables are read from or published to. The presence of this attribute implies that the pipeline is in direct publishing mode.\n"},"serverless":{"type":"boolean","description":"An optional flag indicating if serverless compute should be used for this Lakeflow Declarative Pipeline.  Requires \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e to be set, as it could be used only with Unity Catalog.\n"},"state":{"type":"string"},"storage":{"type":"string","description":"A location on cloud storage where output data and metadata required for pipeline execution are stored. By default, tables are stored in a subdirectory of this location. *Change of this parameter forces recreation of the pipeline.* (Conflicts with \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e).\n","willReplaceOnChanges":true},"tags":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of tags associated with the pipeline. These are forwarded to the cluster as cluster tags, and are therefore subject to the same limitations. A maximum of 25 tags can be added to the pipeline.\n"},"target":{"type":"string","description":"The name of a database (in either the Hive metastore or in a UC catalog) for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.\n"},"trigger":{"$ref":"#/types/databricks:index/PipelineTrigger:PipelineTrigger"},"url":{"type":"string","description":"URL of the Lakeflow Declarative Pipeline on the given workspace.\n"},"usagePolicyId":{"type":"string"}},"type":"object"}},"databricks:index/policyInfo:PolicyInfo":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nAttribute-Based Access Control (ABAC) policies in Unity Catalog provide high leverage governance for enforcing compliance policies. 
With ABAC policies, access is controlled in a hierarchical and scalable manner, based on data attributes rather than specific resources, enabling more flexible and comprehensive access control.\n\nABAC policies in Unity Catalog support conditions on governance tags and the user identity. Callers must have the `MANAGE` privilege on a securable to view, create, update, or delete ABAC policies.\n\n## Example Usage\n\n### Row Filter Policy\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst piiRowFilter = new databricks.PolicyInfo(\"pii_row_filter\", {\n    onSecurableType: \"CATALOG\",\n    onSecurableFullname: \"main\",\n    name: \"pii_data_policy\",\n    policyType: \"POLICY_TYPE_ROW_FILTER\",\n    forSecurableType: \"TABLE\",\n    toPrincipals: [\"account users\"],\n    whenCondition: \"hasTag('pii')\",\n    matchColumns: [{\n        condition: \"hasTag('pii')\",\n        alias: \"pii_col\",\n    }],\n    rowFilter: {\n        functionName: \"main.filters.mask_pii_rows\",\n        usings: [{\n            alias: \"pii_col\",\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npii_row_filter = databricks.PolicyInfo(\"pii_row_filter\",\n    on_securable_type=\"CATALOG\",\n    on_securable_fullname=\"main\",\n    name=\"pii_data_policy\",\n    policy_type=\"POLICY_TYPE_ROW_FILTER\",\n    for_securable_type=\"TABLE\",\n    to_principals=[\"account users\"],\n    when_condition=\"hasTag('pii')\",\n    match_columns=[{\n        \"condition\": \"hasTag('pii')\",\n        \"alias\": \"pii_col\",\n    }],\n    row_filter={\n        \"function_name\": \"main.filters.mask_pii_rows\",\n        \"usings\": [{\n            \"alias\": \"pii_col\",\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var piiRowFilter = new Databricks.PolicyInfo(\"pii_row_filter\", new()\n    {\n        OnSecurableType = \"CATALOG\",\n        OnSecurableFullname = \"main\",\n        Name = \"pii_data_policy\",\n        PolicyType = \"POLICY_TYPE_ROW_FILTER\",\n        ForSecurableType = \"TABLE\",\n        ToPrincipals = new[]\n        {\n            \"account users\",\n        },\n        WhenCondition = \"hasTag('pii')\",\n        MatchColumns = new[]\n        {\n            new Databricks.Inputs.PolicyInfoMatchColumnArgs\n            {\n                Condition = \"hasTag('pii')\",\n                Alias = \"pii_col\",\n            },\n        },\n        RowFilter = new Databricks.Inputs.PolicyInfoRowFilterArgs\n        {\n            FunctionName = \"main.filters.mask_pii_rows\",\n            Usings = new[]\n            {\n                new Databricks.Inputs.PolicyInfoRowFilterUsingArgs\n                {\n                    Alias = \"pii_col\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPolicyInfo(ctx, \"pii_row_filter\", \u0026databricks.PolicyInfoArgs{\n\t\t\tOnSecurableType:     pulumi.String(\"CATALOG\"),\n\t\t\tOnSecurableFullname: pulumi.String(\"main\"),\n\t\t\tName:                pulumi.String(\"pii_data_policy\"),\n\t\t\tPolicyType:     
     pulumi.String(\"POLICY_TYPE_ROW_FILTER\"),\n\t\t\tForSecurableType:    pulumi.String(\"TABLE\"),\n\t\t\tToPrincipals: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"account users\"),\n\t\t\t},\n\t\t\tWhenCondition: pulumi.String(\"hasTag('pii')\"),\n\t\t\tMatchColumns: databricks.PolicyInfoMatchColumnArray{\n\t\t\t\t\u0026databricks.PolicyInfoMatchColumnArgs{\n\t\t\t\t\tCondition: pulumi.String(\"hasTag('pii')\"),\n\t\t\t\t\tAlias:     pulumi.String(\"pii_col\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tRowFilter: \u0026databricks.PolicyInfoRowFilterArgs{\n\t\t\t\tFunctionName: pulumi.String(\"main.filters.mask_pii_rows\"),\n\t\t\t\tUsings: databricks.PolicyInfoRowFilterUsingArray{\n\t\t\t\t\t\u0026databricks.PolicyInfoRowFilterUsingArgs{\n\t\t\t\t\t\tAlias: pulumi.String(\"pii_col\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PolicyInfo;\nimport com.pulumi.databricks.PolicyInfoArgs;\nimport com.pulumi.databricks.inputs.PolicyInfoMatchColumnArgs;\nimport com.pulumi.databricks.inputs.PolicyInfoRowFilterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var piiRowFilter = new PolicyInfo(\"piiRowFilter\", PolicyInfoArgs.builder()\n            .onSecurableType(\"CATALOG\")\n            .onSecurableFullname(\"main\")\n            .name(\"pii_data_policy\")\n            .policyType(\"POLICY_TYPE_ROW_FILTER\")\n            .forSecurableType(\"TABLE\")\n            .toPrincipals(\"account users\")\n            .whenCondition(\"hasTag('pii')\")\n            .matchColumns(PolicyInfoMatchColumnArgs.builder()\n                .condition(\"hasTag('pii')\")\n                .alias(\"pii_col\")\n                .build())\n            .rowFilter(PolicyInfoRowFilterArgs.builder()\n                .functionName(\"main.filters.mask_pii_rows\")\n                .usings(PolicyInfoRowFilterUsingArgs.builder()\n                    .alias(\"pii_col\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  piiRowFilter:\n    type: databricks:PolicyInfo\n    name: pii_row_filter\n    properties:\n      onSecurableType: CATALOG\n      onSecurableFullname: main\n      name: pii_data_policy\n      policyType: POLICY_TYPE_ROW_FILTER\n      forSecurableType: TABLE\n      toPrincipals:\n        - account users\n      whenCondition: hasTag('pii')\n      matchColumns:\n        - condition: hasTag('pii')\n          alias: pii_col\n      rowFilter:\n        functionName: main.filters.mask_pii_rows\n        usings:\n          - alias: pii_col\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Column Mask Policy\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sensitiveColumnMask = new databricks.PolicyInfo(\"sensitive_column_mask\", {\n    onSecurableType: \"SCHEMA\",\n    onSecurableFullname: \"main.finance\",\n    name: \"sensitive_data_mask\",\n    policyType: \"POLICY_TYPE_COLUMN_MASK\",\n    forSecurableType: \"TABLE\",\n    toPrincipals: 
[\"account users\"],\n    exceptPrincipals: [\"finance_admins\"],\n    whenCondition: \"hasTag('pii')\",\n    matchColumns: [{\n        condition: \"hasTag('pii')\",\n        alias: \"sensitive_col\",\n    }],\n    columnMask: {\n        functionName: \"main.masks.redact_sensitive\",\n        onColumn: \"sensitive_col\",\n        usings: [{\n            constant: \"4\",\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsensitive_column_mask = databricks.PolicyInfo(\"sensitive_column_mask\",\n    on_securable_type=\"SCHEMA\",\n    on_securable_fullname=\"main.finance\",\n    name=\"sensitive_data_mask\",\n    policy_type=\"POLICY_TYPE_COLUMN_MASK\",\n    for_securable_type=\"TABLE\",\n    to_principals=[\"account users\"],\n    except_principals=[\"finance_admins\"],\n    when_condition=\"hasTag('pii')\",\n    match_columns=[{\n        \"condition\": \"hasTag('pii')\",\n        \"alias\": \"sensitive_col\",\n    }],\n    column_mask={\n        \"function_name\": \"main.masks.redact_sensitive\",\n        \"on_column\": \"sensitive_col\",\n        \"usings\": [{\n            \"constant\": \"4\",\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sensitiveColumnMask = new Databricks.PolicyInfo(\"sensitive_column_mask\", new()\n    {\n        OnSecurableType = \"SCHEMA\",\n        OnSecurableFullname = \"main.finance\",\n        Name = \"sensitive_data_mask\",\n        PolicyType = \"POLICY_TYPE_COLUMN_MASK\",\n        ForSecurableType = \"TABLE\",\n        ToPrincipals = new[]\n        {\n            \"account users\",\n        },\n        ExceptPrincipals = new[]\n        {\n            \"finance_admins\",\n        },\n        WhenCondition = \"hasTag('pii')\",\n        MatchColumns = new[]\n        {\n            new Databricks.Inputs.PolicyInfoMatchColumnArgs\n            {\n                Condition = \"hasTag('pii')\",\n                Alias = \"sensitive_col\",\n            },\n        },\n        ColumnMask = new Databricks.Inputs.PolicyInfoColumnMaskArgs\n        {\n            FunctionName = \"main.masks.redact_sensitive\",\n            OnColumn = \"sensitive_col\",\n            Usings = new[]\n            {\n                new Databricks.Inputs.PolicyInfoColumnMaskUsingArgs\n                {\n                    Constant = \"4\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPolicyInfo(ctx, \"sensitive_column_mask\", \u0026databricks.PolicyInfoArgs{\n\t\t\tOnSecurableType:     pulumi.String(\"SCHEMA\"),\n\t\t\tOnSecurableFullname: pulumi.String(\"main.finance\"),\n\t\t\tName:                pulumi.String(\"sensitive_data_mask\"),\n\t\t\tPolicyType:          pulumi.String(\"POLICY_TYPE_COLUMN_MASK\"),\n\t\t\tForSecurableType:    pulumi.String(\"TABLE\"),\n\t\t\tToPrincipals: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"account users\"),\n\t\t\t},\n\t\t\tExceptPrincipals: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"finance_admins\"),\n\t\t\t},\n\t\t\tWhenCondition: pulumi.String(\"hasTag('pii')\"),\n\t\t\tMatchColumns: 
databricks.PolicyInfoMatchColumnArray{\n\t\t\t\t\u0026databricks.PolicyInfoMatchColumnArgs{\n\t\t\t\t\tCondition: pulumi.String(\"hasTag('pii')\"),\n\t\t\t\t\tAlias:     pulumi.String(\"sensitive_col\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tColumnMask: \u0026databricks.PolicyInfoColumnMaskArgs{\n\t\t\t\tFunctionName: pulumi.String(\"main.masks.redact_sensitive\"),\n\t\t\t\tOnColumn:     pulumi.String(\"sensitive_col\"),\n\t\t\t\tUsings: databricks.PolicyInfoColumnMaskUsingArray{\n\t\t\t\t\t\u0026databricks.PolicyInfoColumnMaskUsingArgs{\n\t\t\t\t\t\tConstant: pulumi.String(\"4\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PolicyInfo;\nimport com.pulumi.databricks.PolicyInfoArgs;\nimport com.pulumi.databricks.inputs.PolicyInfoMatchColumnArgs;\nimport com.pulumi.databricks.inputs.PolicyInfoColumnMaskArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sensitiveColumnMask = new PolicyInfo(\"sensitiveColumnMask\", PolicyInfoArgs.builder()\n            .onSecurableType(\"SCHEMA\")\n            .onSecurableFullname(\"main.finance\")\n            .name(\"sensitive_data_mask\")\n            .policyType(\"POLICY_TYPE_COLUMN_MASK\")\n            .forSecurableType(\"TABLE\")\n            .toPrincipals(\"account users\")\n            .exceptPrincipals(\"finance_admins\")\n            .whenCondition(\"hasTag('pii')\")\n            .matchColumns(PolicyInfoMatchColumnArgs.builder()\n                .condition(\"hasTag('pii')\")\n                .alias(\"sensitive_col\")\n                .build())\n            .columnMask(PolicyInfoColumnMaskArgs.builder()\n                .functionName(\"main.masks.redact_sensitive\")\n                .onColumn(\"sensitive_col\")\n                .usings(PolicyInfoColumnMaskUsingArgs.builder()\n                    .constant(\"4\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sensitiveColumnMask:\n    type: databricks:PolicyInfo\n    name: sensitive_column_mask\n    properties:\n      onSecurableType: SCHEMA\n      onSecurableFullname: main.finance\n      name: sensitive_data_mask\n      policyType: POLICY_TYPE_COLUMN_MASK\n      forSecurableType: TABLE\n      toPrincipals:\n        - account users\n      exceptPrincipals:\n        - finance_admins\n      whenCondition: hasTag('pii')\n      matchColumns:\n        - condition: hasTag('pii')\n          alias: sensitive_col\n      columnMask:\n        functionName: main.masks.redact_sensitive\n        onColumn: sensitive_col\n        usings:\n          - constant: '4'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"columnMask":{"$ref":"#/types/databricks:index/PolicyInfoColumnMask:PolicyInfoColumnMask","description":"Options for column mask policies. 
Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_COLUMN_MASK`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"comment":{"type":"string","description":"Optional description of the policy\n"},"createdAt":{"type":"integer","description":"(integer) - Time at which the policy was created, in epoch milliseconds. Output only\n"},"createdBy":{"type":"string","description":"(string) - Username of the user who created the policy. Output only\n"},"exceptPrincipals":{"type":"array","items":{"type":"string"},"description":"Optional list of user or group names that should be excluded from the policy\n"},"forSecurableType":{"type":"string","description":"Type of securables that the policy should take effect on.\nOnly `TABLE` is supported at this moment.\nRequired on create and optional on update. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"matchColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/PolicyInfoMatchColumn:PolicyInfoMatchColumn"},"description":"Optional list of condition expressions used to match table columns.\nOnly valid when \u003cspan pulumi-lang-nodejs=\"`forSecurableType`\" pulumi-lang-dotnet=\"`ForSecurableType`\" pulumi-lang-go=\"`forSecurableType`\" pulumi-lang-python=\"`for_securable_type`\" pulumi-lang-yaml=\"`forSecurableType`\" pulumi-lang-java=\"`forSecurableType`\"\u003e`for_securable_type`\u003c/span\u003e is `TABLE`.\nWhen specified, the policy only applies to tables whose columns satisfy all match conditions\n"},"name":{"type":"string","description":"Name of the policy. Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n"},"onSecurableFullname":{"type":"string","description":"Full name of the securable on which the policy is defined.\nRequired on create\n"},"onSecurableType":{"type":"string","description":"Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"policyType":{"type":"string","description":"Type of the policy. Required on create. Possible values are: `POLICY_TYPE_COLUMN_MASK`, `POLICY_TYPE_ROW_FILTER`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PolicyInfoProviderConfig:PolicyInfoProviderConfig","description":"Configure the provider for management through account provider.\n"},"rowFilter":{"$ref":"#/types/databricks:index/PolicyInfoRowFilter:PolicyInfoRowFilter","description":"Options for row filter policies. 
Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_ROW_FILTER`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"toPrincipals":{"type":"array","items":{"type":"string"},"description":"List of user or group names that the policy applies to.\nRequired on create and optional on update\n"},"updatedAt":{"type":"integer","description":"(integer) - Time at which the policy was last modified, in epoch milliseconds. Output only\n"},"updatedBy":{"type":"string","description":"(string) - Username of the user who last modified the policy. Output only\n"},"whenCondition":{"type":"string","description":"Optional condition when the policy should take effect\n"}},"required":["createdAt","createdBy","forSecurableType","name","policyType","toPrincipals","updatedAt","updatedBy"],"inputProperties":{"columnMask":{"$ref":"#/types/databricks:index/PolicyInfoColumnMask:PolicyInfoColumnMask","description":"Options for column mask policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_COLUMN_MASK`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"comment":{"type":"string","description":"Optional description of the policy\n"},"exceptPrincipals":{"type":"array","items":{"type":"string"},"description":"Optional list of user or group names that should be excluded from the policy\n"},"forSecurableType":{"type":"string","description":"Type of securables that the policy should take effect on.\nOnly `TABLE` is supported at this moment.\nRequired on create and optional on update. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"matchColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/PolicyInfoMatchColumn:PolicyInfoMatchColumn"},"description":"Optional list of condition expressions used to match table columns.\nOnly valid when \u003cspan pulumi-lang-nodejs=\"`forSecurableType`\" pulumi-lang-dotnet=\"`ForSecurableType`\" pulumi-lang-go=\"`forSecurableType`\" pulumi-lang-python=\"`for_securable_type`\" pulumi-lang-yaml=\"`forSecurableType`\" pulumi-lang-java=\"`forSecurableType`\"\u003e`for_securable_type`\u003c/span\u003e is `TABLE`.\nWhen specified, the policy only applies to tables whose columns satisfy all match conditions\n"},"name":{"type":"string","description":"Name of the policy. 
Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n"},"onSecurableFullname":{"type":"string","description":"Full name of the securable on which the policy is defined.\nRequired on create\n"},"onSecurableType":{"type":"string","description":"Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"policyType":{"type":"string","description":"Type of the policy. Required on create. Possible values are: `POLICY_TYPE_COLUMN_MASK`, `POLICY_TYPE_ROW_FILTER`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PolicyInfoProviderConfig:PolicyInfoProviderConfig","description":"Configure the provider for management through account provider.\n"},"rowFilter":{"$ref":"#/types/databricks:index/PolicyInfoRowFilter:PolicyInfoRowFilter","description":"Options for row filter policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_ROW_FILTER`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"toPrincipals":{"type":"array","items":{"type":"string"},"description":"List of user or group names that the policy applies to.\nRequired on create and optional on update\n"},"whenCondition":{"type":"string","description":"Optional condition when the policy should take effect\n"}},"requiredInputs":["forSecurableType","policyType","toPrincipals"],"stateInputs":{"description":"Input properties used for looking up and filtering PolicyInfo resources.\n","properties":{"columnMask":{"$ref":"#/types/databricks:index/PolicyInfoColumnMask:PolicyInfoColumnMask","description":"Options for column mask policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_COLUMN_MASK`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"comment":{"type":"string","description":"Optional description of the policy\n"},"createdAt":{"type":"integer","description":"(integer) - Time at which the policy was created, in epoch milliseconds. Output only\n"},"createdBy":{"type":"string","description":"(string) - Username of the user who created the policy. Output only\n"},"exceptPrincipals":{"type":"array","items":{"type":"string"},"description":"Optional list of user or group names that should be excluded from the policy\n"},"forSecurableType":{"type":"string","description":"Type of securables that the policy should take effect on.\nOnly `TABLE` is supported at this moment.\nRequired on create and optional on update. 
Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"matchColumns":{"type":"array","items":{"$ref":"#/types/databricks:index/PolicyInfoMatchColumn:PolicyInfoMatchColumn"},"description":"Optional list of condition expressions used to match table columns.\nOnly valid when \u003cspan pulumi-lang-nodejs=\"`forSecurableType`\" pulumi-lang-dotnet=\"`ForSecurableType`\" pulumi-lang-go=\"`forSecurableType`\" pulumi-lang-python=\"`for_securable_type`\" pulumi-lang-yaml=\"`forSecurableType`\" pulumi-lang-java=\"`forSecurableType`\"\u003e`for_securable_type`\u003c/span\u003e is `TABLE`.\nWhen specified, the policy only applies to tables whose columns satisfy all match conditions\n"},"name":{"type":"string","description":"Name of the policy. Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n"},"onSecurableFullname":{"type":"string","description":"Full name of the securable on which the policy is defined.\nRequired on create\n"},"onSecurableType":{"type":"string","description":"Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"policyType":{"type":"string","description":"Type of the policy. Required on create. Possible values are: `POLICY_TYPE_COLUMN_MASK`, `POLICY_TYPE_ROW_FILTER`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PolicyInfoProviderConfig:PolicyInfoProviderConfig","description":"Configure the provider for management through account provider.\n"},"rowFilter":{"$ref":"#/types/databricks:index/PolicyInfoRowFilter:PolicyInfoRowFilter","description":"Options for row filter policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_ROW_FILTER`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"toPrincipals":{"type":"array","items":{"type":"string"},"description":"List of user or group names that the policy applies to.\nRequired on create and optional on update\n"},"updatedAt":{"type":"integer","description":"(integer) - Time at which the policy was last modified, in epoch milliseconds. Output only\n"},"updatedBy":{"type":"string","description":"(string) - Username of the user who last modified the policy. 
Output only\n"},"whenCondition":{"type":"string","description":"Optional condition when the policy should take effect\n"}},"type":"object"}},"databricks:index/postgresBranch:PostgresBranch":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n## Example Usage\n\n### Basic Branch Creation\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PostgresProject(\"this\", {\n    projectId: \"my-project\",\n    spec: {\n        pgVersion: 17,\n        displayName: \"My Project\",\n    },\n});\nconst dev = new databricks.PostgresBranch(\"dev\", {\n    branchId: \"dev-branch\",\n    parent: _this.name,\n    spec: {\n        noExpiry: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PostgresProject(\"this\",\n    project_id=\"my-project\",\n    spec={\n        \"pg_version\": 17,\n        \"display_name\": \"My Project\",\n    })\ndev = databricks.PostgresBranch(\"dev\",\n    branch_id=\"dev-branch\",\n    parent=this.name,\n    spec={\n        \"no_expiry\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.PostgresProject(\"this\", new()\n    {\n        ProjectId = \"my-project\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n            PgVersion = 17,\n            DisplayName = \"My Project\",\n        },\n    });\n\n    var dev = new Databricks.PostgresBranch(\"dev\", new()\n    {\n        BranchId = \"dev-branch\",\n        Parent = @this.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            NoExpiry = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewPostgresProject(ctx, \"this\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"my-project\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:   pulumi.Int(17),\n\t\t\t\tDisplayName: pulumi.String(\"My Project\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPostgresBranch(ctx, \"dev\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"dev-branch\"),\n\t\t\tParent:   this.Name,\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tNoExpiry: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App 
{\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PostgresProject(\"this\", PostgresProjectArgs.builder()\n            .projectId(\"my-project\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(17)\n                .displayName(\"My Project\")\n                .build())\n            .build());\n\n        var dev = new PostgresBranch(\"dev\", PostgresBranchArgs.builder()\n            .branchId(\"dev-branch\")\n            .parent(this_.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                .noExpiry(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PostgresProject\n    properties:\n      projectId: my-project\n      spec:\n        pgVersion: 17\n        displayName: My Project\n  dev:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: dev-branch\n      parent: ${this.name}\n      spec:\n        noExpiry: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Protected Branch\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst production = new databricks.PostgresBranch(\"production\", {\n    branchId: \"production\",\n    parent: _this.name,\n    spec: {\n        isProtected: true,\n        noExpiry: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nproduction = databricks.PostgresBranch(\"production\",\n    branch_id=\"production\",\n    parent=this[\"name\"],\n    spec={\n        \"is_protected\": True,\n        \"no_expiry\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var production = new Databricks.PostgresBranch(\"production\", new()\n    {\n        BranchId = \"production\",\n        Parent = @this.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            IsProtected = true,\n            NoExpiry = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresBranch(ctx, \"production\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"production\"),\n\t\t\tParent:   pulumi.Any(this.Name),\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tIsProtected: pulumi.Bool(true),\n\t\t\t\tNoExpiry:    pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var production = new 
PostgresBranch(\"production\", PostgresBranchArgs.builder()\n            .branchId(\"production\")\n            .parent(this_.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                .isProtected(true)\n                .noExpiry(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  production:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: production\n      parent: ${this.name}\n      spec:\n        isProtected: true\n        noExpiry: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Branch with Expiration (TTL)\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst temporary = new databricks.PostgresBranch(\"temporary\", {\n    branchId: \"temp-feature-test\",\n    parent: _this.name,\n    spec: {\n        ttl: \"604800s\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntemporary = databricks.PostgresBranch(\"temporary\",\n    branch_id=\"temp-feature-test\",\n    parent=this[\"name\"],\n    spec={\n        \"ttl\": \"604800s\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var temporary = new Databricks.PostgresBranch(\"temporary\", new()\n    {\n        BranchId = \"temp-feature-test\",\n        Parent = @this.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            Ttl = \"604800s\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresBranch(ctx, \"temporary\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"temp-feature-test\"),\n\t\t\tParent:   pulumi.Any(this.Name),\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tTtl: pulumi.String(\"604800s\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var temporary = new PostgresBranch(\"temporary\", PostgresBranchArgs.builder()\n            .branchId(\"temp-feature-test\")\n            .parent(this_.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                .ttl(\"604800s\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  temporary:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: temp-feature-test\n      parent: ${this.name}\n      spec:\n        ttl: 604800s\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"branchId":{"type":"string","description":"The ID to use for the Branch. 
This becomes the final component of the branch's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`development`\" pulumi-lang-dotnet=\"`Development`\" pulumi-lang-go=\"`development`\" pulumi-lang-python=\"`development`\" pulumi-lang-yaml=\"`development`\" pulumi-lang-java=\"`development`\"\u003e`development`\u003c/span\u003e becomes `projects/my-app/branches/development`\n"},"createTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the branch.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"parent":{"type":"string","description":"The project containing this branch (API resource hierarchy).\nFormat: projects/{project_id}\n\nNote: This field indicates where the branch exists in the resource hierarchy.\nFor point-in-time branching from another branch, see `status.source_branch`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresBranchProviderConfig:PostgresBranchProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresBranchSpec:PostgresBranchSpec","description":"The spec contains the branch configuration\n"},"status":{"$ref":"#/types/databricks:index/PostgresBranchStatus:PostgresBranchStatus","description":"(BranchStatus) - The current status of a Branch\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the branch\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was last updated\n"}},"required":["branchId","createTime","name","parent","spec","status","uid","updateTime"],"inputProperties":{"branchId":{"type":"string","description":"The ID to use for the Branch. This becomes the final component of the branch's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`development`\" pulumi-lang-dotnet=\"`Development`\" pulumi-lang-go=\"`development`\" pulumi-lang-python=\"`development`\" pulumi-lang-yaml=\"`development`\" pulumi-lang-java=\"`development`\"\u003e`development`\u003c/span\u003e becomes `projects/my-app/branches/development`\n"},"parent":{"type":"string","description":"The project containing this branch (API resource hierarchy).\nFormat: projects/{project_id}\n\nNote: This field indicates where the branch exists in the resource hierarchy.\nFor point-in-time branching from another branch, see `status.source_branch`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresBranchProviderConfig:PostgresBranchProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresBranchSpec:PostgresBranchSpec","description":"The spec contains the branch configuration\n"}},"requiredInputs":["branchId","parent"],"stateInputs":{"description":"Input properties used for looking up and filtering PostgresBranch resources.\n","properties":{"branchId":{"type":"string","description":"The ID to use for the Branch. 
This becomes the final component of the branch's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`development`\" pulumi-lang-dotnet=\"`Development`\" pulumi-lang-go=\"`development`\" pulumi-lang-python=\"`development`\" pulumi-lang-yaml=\"`development`\" pulumi-lang-java=\"`development`\"\u003e`development`\u003c/span\u003e becomes `projects/my-app/branches/development`\n"},"createTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the branch.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"parent":{"type":"string","description":"The project containing this branch (API resource hierarchy).\nFormat: projects/{project_id}\n\nNote: This field indicates where the branch exists in the resource hierarchy.\nFor point-in-time branching from another branch, see `status.source_branch`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresBranchProviderConfig:PostgresBranchProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresBranchSpec:PostgresBranchSpec","description":"The spec contains the branch configuration\n"},"status":{"$ref":"#/types/databricks:index/PostgresBranchStatus:PostgresBranchStatus","description":"(BranchStatus) - The current status of a Branch\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the branch\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the branch was last updated\n"}},"type":"object"}},"databricks:index/postgresEndpoint:PostgresEndpoint":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n## Example Usage\n\n### Basic Read-Write Endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PostgresProject(\"this\", {\n    projectId: \"my-project\",\n    spec: {\n        pgVersion: 17,\n        displayName: \"My Project\",\n    },\n});\nconst dev = new databricks.PostgresBranch(\"dev\", {\n    branchId: \"dev-branch\",\n    parent: _this.name,\n    spec: {\n        noExpiry: true,\n    },\n});\nconst primary = new databricks.PostgresEndpoint(\"primary\", {\n    endpointId: \"primary\",\n    parent: dev.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_WRITE\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PostgresProject(\"this\",\n    project_id=\"my-project\",\n    spec={\n        \"pg_version\": 17,\n        \"display_name\": \"My Project\",\n    })\ndev = databricks.PostgresBranch(\"dev\",\n    branch_id=\"dev-branch\",\n    parent=this.name,\n    spec={\n        \"no_expiry\": True,\n    })\nprimary = databricks.PostgresEndpoint(\"primary\",\n    endpoint_id=\"primary\",\n    parent=dev.name,\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_WRITE\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new 
Databricks.PostgresProject(\"this\", new()\n    {\n        ProjectId = \"my-project\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n            PgVersion = 17,\n            DisplayName = \"My Project\",\n        },\n    });\n\n    var dev = new Databricks.PostgresBranch(\"dev\", new()\n    {\n        BranchId = \"dev-branch\",\n        Parent = @this.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            NoExpiry = true,\n        },\n    });\n\n    var primary = new Databricks.PostgresEndpoint(\"primary\", new()\n    {\n        EndpointId = \"primary\",\n        Parent = dev.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_WRITE\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewPostgresProject(ctx, \"this\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"my-project\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:   pulumi.Int(17),\n\t\t\t\tDisplayName: pulumi.String(\"My Project\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tdev, err := databricks.NewPostgresBranch(ctx, \"dev\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"dev-branch\"),\n\t\t\tParent:   this.Name,\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tNoExpiry: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPostgresEndpoint(ctx, \"primary\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"primary\"),\n\t\t\tParent:     dev.Name,\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType: pulumi.String(\"ENDPOINT_TYPE_READ_WRITE\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PostgresProject(\"this\", PostgresProjectArgs.builder()\n            .projectId(\"my-project\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(17)\n                .displayName(\"My Project\")\n                .build())\n            .build());\n\n        var dev = new PostgresBranch(\"dev\", PostgresBranchArgs.builder()\n            .branchId(\"dev-branch\")\n            .parent(this_.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                
.noExpiry(true)\n                .build())\n            .build());\n\n        var primary = new PostgresEndpoint(\"primary\", PostgresEndpointArgs.builder()\n            .endpointId(\"primary\")\n            .parent(dev.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_WRITE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PostgresProject\n    properties:\n      projectId: my-project\n      spec:\n        pgVersion: 17\n        displayName: My Project\n  dev:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: dev-branch\n      parent: ${this.name}\n      spec:\n        noExpiry: true\n  primary:\n    type: databricks:PostgresEndpoint\n    properties:\n      endpointId: primary\n      parent: ${dev.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_WRITE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Read-Only Endpoint with Autoscaling\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst readReplica = new databricks.PostgresEndpoint(\"read_replica\", {\n    endpointId: \"read-replica-1\",\n    parent: dev.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_ONLY\",\n        autoscalingLimitMinCu: 0.5,\n        autoscalingLimitMaxCu: 4,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nread_replica = databricks.PostgresEndpoint(\"read_replica\",\n    endpoint_id=\"read-replica-1\",\n    parent=dev[\"name\"],\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_ONLY\",\n        \"autoscaling_limit_min_cu\": 0.5,\n        \"autoscaling_limit_max_cu\": 4,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var readReplica = new Databricks.PostgresEndpoint(\"read_replica\", new()\n    {\n        EndpointId = \"read-replica-1\",\n        Parent = dev.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_ONLY\",\n            AutoscalingLimitMinCu = 0.5,\n            AutoscalingLimitMaxCu = 4,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresEndpoint(ctx, \"read_replica\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"read-replica-1\"),\n\t\t\tParent:     pulumi.Any(dev.Name),\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType:          pulumi.String(\"ENDPOINT_TYPE_READ_ONLY\"),\n\t\t\t\tAutoscalingLimitMinCu: pulumi.Float64(0.5),\n\t\t\t\tAutoscalingLimitMaxCu: pulumi.Float64(4),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport 
java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var readReplica = new PostgresEndpoint(\"readReplica\", PostgresEndpointArgs.builder()\n            .endpointId(\"read-replica-1\")\n            .parent(dev.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_ONLY\")\n                .autoscalingLimitMinCu(0.5)\n                .autoscalingLimitMaxCu(4.0)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  readReplica:\n    type: databricks:PostgresEndpoint\n    name: read_replica\n    properties:\n      endpointId: read-replica-1\n      parent: ${dev.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_ONLY\n        autoscalingLimitMinCu: 0.5\n        autoscalingLimitMaxCu: 4\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Endpoint with Custom Autoscaling and Suspension\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst analytics = new databricks.PostgresEndpoint(\"analytics\", {\n    endpointId: \"analytics\",\n    parent: dev.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_ONLY\",\n        autoscalingLimitMinCu: 1,\n        autoscalingLimitMaxCu: 8,\n        suspendTimeoutDuration: \"600s\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nanalytics = databricks.PostgresEndpoint(\"analytics\",\n    endpoint_id=\"analytics\",\n    parent=dev[\"name\"],\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_ONLY\",\n        \"autoscaling_limit_min_cu\": 1,\n        \"autoscaling_limit_max_cu\": 8,\n        \"suspend_timeout_duration\": \"600s\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var analytics = new Databricks.PostgresEndpoint(\"analytics\", new()\n    {\n        EndpointId = \"analytics\",\n        Parent = dev.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_ONLY\",\n            AutoscalingLimitMinCu = 1,\n            AutoscalingLimitMaxCu = 8,\n            SuspendTimeoutDuration = \"600s\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresEndpoint(ctx, \"analytics\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"analytics\"),\n\t\t\tParent:     pulumi.Any(dev.Name),\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType:           pulumi.String(\"ENDPOINT_TYPE_READ_ONLY\"),\n\t\t\t\tAutoscalingLimitMinCu:  pulumi.Float64(1),\n\t\t\t\tAutoscalingLimitMaxCu:  pulumi.Float64(8),\n\t\t\t\tSuspendTimeoutDuration: pulumi.String(\"600s\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport 
com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var analytics = new PostgresEndpoint(\"analytics\", PostgresEndpointArgs.builder()\n            .endpointId(\"analytics\")\n            .parent(dev.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_ONLY\")\n                .autoscalingLimitMinCu(1.0)\n                .autoscalingLimitMaxCu(8.0)\n                .suspendTimeoutDuration(\"600s\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  analytics:\n    type: databricks:PostgresEndpoint\n    properties:\n      endpointId: analytics\n      parent: ${dev.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_ONLY\n        autoscalingLimitMinCu: 1\n        autoscalingLimitMaxCu: 8\n        suspendTimeoutDuration: 600s\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Disabled Endpoint\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst maintenance = new databricks.PostgresEndpoint(\"maintenance\", {\n    endpointId: \"primary\",\n    parent: dev.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_WRITE\",\n        disabled: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmaintenance = databricks.PostgresEndpoint(\"maintenance\",\n    endpoint_id=\"primary\",\n    parent=dev[\"name\"],\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_WRITE\",\n        \"disabled\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var maintenance = new Databricks.PostgresEndpoint(\"maintenance\", new()\n    {\n        EndpointId = \"primary\",\n        Parent = dev.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_WRITE\",\n            Disabled = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresEndpoint(ctx, \"maintenance\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"primary\"),\n\t\t\tParent:     pulumi.Any(dev.Name),\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType: pulumi.String(\"ENDPOINT_TYPE_READ_WRITE\"),\n\t\t\t\tDisabled:     pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport 
java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var maintenance = new PostgresEndpoint(\"maintenance\", PostgresEndpointArgs.builder()\n            .endpointId(\"primary\")\n            .parent(dev.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_WRITE\")\n                .disabled(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  maintenance:\n    type: databricks:PostgresEndpoint\n    properties:\n      endpointId: primary\n      parent: ${dev.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_WRITE\n        disabled: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Endpoint with No Suspension\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst alwaysOn = new databricks.PostgresEndpoint(\"always_on\", {\n    endpointId: \"always-on\",\n    parent: dev.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_WRITE\",\n        noSuspension: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nalways_on = databricks.PostgresEndpoint(\"always_on\",\n    endpoint_id=\"always-on\",\n    parent=dev[\"name\"],\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_WRITE\",\n        \"no_suspension\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var alwaysOn = new Databricks.PostgresEndpoint(\"always_on\", new()\n    {\n        EndpointId = \"always-on\",\n        Parent = dev.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_WRITE\",\n            NoSuspension = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresEndpoint(ctx, \"always_on\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"always-on\"),\n\t\t\tParent:     pulumi.Any(dev.Name),\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType: pulumi.String(\"ENDPOINT_TYPE_READ_WRITE\"),\n\t\t\t\tNoSuspension: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var alwaysOn = new PostgresEndpoint(\"alwaysOn\", PostgresEndpointArgs.builder()\n            .endpointId(\"always-on\")\n            
.parent(dev.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_WRITE\")\n                .noSuspension(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  alwaysOn:\n    type: databricks:PostgresEndpoint\n    name: always_on\n    properties:\n      endpointId: always-on\n      parent: ${dev.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_WRITE\n        noSuspension: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Complete Example\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst prod = new databricks.PostgresProject(\"prod\", {\n    projectId: \"production\",\n    spec: {\n        pgVersion: 17,\n        displayName: \"Production Workloads\",\n        historyRetentionDuration: \"2592000s\",\n        defaultEndpointSettings: {\n            autoscalingLimitMinCu: 1,\n            autoscalingLimitMaxCu: 8,\n            suspendTimeoutDuration: \"300s\",\n        },\n    },\n});\nconst main = new databricks.PostgresBranch(\"main\", {\n    branchId: \"main\",\n    parent: prod.name,\n    spec: {\n        noExpiry: true,\n    },\n});\nconst primary = new databricks.PostgresEndpoint(\"primary\", {\n    endpointId: \"primary\",\n    parent: main.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_WRITE\",\n        autoscalingLimitMinCu: 1,\n        autoscalingLimitMaxCu: 9,\n        noSuspension: true,\n    },\n});\nconst readReplica = new databricks.PostgresEndpoint(\"read_replica\", {\n    endpointId: \"read-replica\",\n    parent: main.name,\n    spec: {\n        endpointType: \"ENDPOINT_TYPE_READ_ONLY\",\n        autoscalingLimitMinCu: 0.5,\n        autoscalingLimitMaxCu: 8,\n        suspendTimeoutDuration: \"600s\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nprod = databricks.PostgresProject(\"prod\",\n    project_id=\"production\",\n    spec={\n        \"pg_version\": 17,\n        \"display_name\": \"Production Workloads\",\n        \"history_retention_duration\": \"2592000s\",\n        \"default_endpoint_settings\": {\n            \"autoscaling_limit_min_cu\": 1,\n            \"autoscaling_limit_max_cu\": 8,\n            \"suspend_timeout_duration\": \"300s\",\n        },\n    })\nmain = databricks.PostgresBranch(\"main\",\n    branch_id=\"main\",\n    parent=prod.name,\n    spec={\n        \"no_expiry\": True,\n    })\nprimary = databricks.PostgresEndpoint(\"primary\",\n    endpoint_id=\"primary\",\n    parent=main.name,\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_WRITE\",\n        \"autoscaling_limit_min_cu\": 1,\n        \"autoscaling_limit_max_cu\": 9,\n        \"no_suspension\": True,\n    })\nread_replica = databricks.PostgresEndpoint(\"read_replica\",\n    endpoint_id=\"read-replica\",\n    parent=main.name,\n    spec={\n        \"endpoint_type\": \"ENDPOINT_TYPE_READ_ONLY\",\n        \"autoscaling_limit_min_cu\": 0.5,\n        \"autoscaling_limit_max_cu\": 8,\n        \"suspend_timeout_duration\": \"600s\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var prod = new Databricks.PostgresProject(\"prod\", new()\n    {\n        ProjectId = \"production\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n     
       PgVersion = 17,\n            DisplayName = \"Production Workloads\",\n            HistoryRetentionDuration = \"2592000s\",\n            DefaultEndpointSettings = new Databricks.Inputs.PostgresProjectSpecDefaultEndpointSettingsArgs\n            {\n                AutoscalingLimitMinCu = 1,\n                AutoscalingLimitMaxCu = 8,\n                SuspendTimeoutDuration = \"300s\",\n            },\n        },\n    });\n\n    var main = new Databricks.PostgresBranch(\"main\", new()\n    {\n        BranchId = \"main\",\n        Parent = prod.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            NoExpiry = true,\n        },\n    });\n\n    var primary = new Databricks.PostgresEndpoint(\"primary\", new()\n    {\n        EndpointId = \"primary\",\n        Parent = main.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_WRITE\",\n            AutoscalingLimitMinCu = 1,\n            AutoscalingLimitMaxCu = 9,\n            NoSuspension = true,\n        },\n    });\n\n    var readReplica = new Databricks.PostgresEndpoint(\"read_replica\", new()\n    {\n        EndpointId = \"read-replica\",\n        Parent = main.Name,\n        Spec = new Databricks.Inputs.PostgresEndpointSpecArgs\n        {\n            EndpointType = \"ENDPOINT_TYPE_READ_ONLY\",\n            AutoscalingLimitMinCu = 0.5,\n            AutoscalingLimitMaxCu = 8,\n            SuspendTimeoutDuration = \"600s\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tprod, err := databricks.NewPostgresProject(ctx, \"prod\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"production\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:                pulumi.Int(17),\n\t\t\t\tDisplayName:              pulumi.String(\"Production Workloads\"),\n\t\t\t\tHistoryRetentionDuration: pulumi.String(\"2592000s\"),\n\t\t\t\tDefaultEndpointSettings: \u0026databricks.PostgresProjectSpecDefaultEndpointSettingsArgs{\n\t\t\t\t\tAutoscalingLimitMinCu:  pulumi.Float64(1),\n\t\t\t\t\tAutoscalingLimitMaxCu:  pulumi.Float64(8),\n\t\t\t\t\tSuspendTimeoutDuration: pulumi.String(\"300s\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmain, err := databricks.NewPostgresBranch(ctx, \"main\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"main\"),\n\t\t\tParent:   prod.Name,\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tNoExpiry: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPostgresEndpoint(ctx, \"primary\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"primary\"),\n\t\t\tParent:     main.Name,\n\t\t\tSpec: \u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType:          pulumi.String(\"ENDPOINT_TYPE_READ_WRITE\"),\n\t\t\t\tAutoscalingLimitMinCu: pulumi.Float64(1),\n\t\t\t\tAutoscalingLimitMaxCu: pulumi.Float64(9),\n\t\t\t\tNoSuspension:          pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPostgresEndpoint(ctx, \"read_replica\", \u0026databricks.PostgresEndpointArgs{\n\t\t\tEndpointId: pulumi.String(\"read-replica\"),\n\t\t\tParent:     main.Name,\n\t\t\tSpec: 
\u0026databricks.PostgresEndpointSpecArgs{\n\t\t\t\tEndpointType:           pulumi.String(\"ENDPOINT_TYPE_READ_ONLY\"),\n\t\t\t\tAutoscalingLimitMinCu:  pulumi.Float64(0.5),\n\t\t\t\tAutoscalingLimitMaxCu:  pulumi.Float64(8),\n\t\t\t\tSuspendTimeoutDuration: pulumi.String(\"600s\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecDefaultEndpointSettingsArgs;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport com.pulumi.databricks.PostgresEndpoint;\nimport com.pulumi.databricks.PostgresEndpointArgs;\nimport com.pulumi.databricks.inputs.PostgresEndpointSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var prod = new PostgresProject(\"prod\", PostgresProjectArgs.builder()\n            .projectId(\"production\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(17)\n                .displayName(\"Production Workloads\")\n                .historyRetentionDuration(\"2592000s\")\n                .defaultEndpointSettings(PostgresProjectSpecDefaultEndpointSettingsArgs.builder()\n                    .autoscalingLimitMinCu(1.0)\n                    .autoscalingLimitMaxCu(8.0)\n                    .suspendTimeoutDuration(\"300s\")\n                    .build())\n                .build())\n            .build());\n\n        var main = new PostgresBranch(\"main\", PostgresBranchArgs.builder()\n            .branchId(\"main\")\n            .parent(prod.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                .noExpiry(true)\n                .build())\n            .build());\n\n        var primary = new PostgresEndpoint(\"primary\", PostgresEndpointArgs.builder()\n            .endpointId(\"primary\")\n            .parent(main.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_WRITE\")\n                .autoscalingLimitMinCu(1.0)\n                .autoscalingLimitMaxCu(9.0)\n                .noSuspension(true)\n                .build())\n            .build());\n\n        var readReplica = new PostgresEndpoint(\"readReplica\", PostgresEndpointArgs.builder()\n            .endpointId(\"read-replica\")\n            .parent(main.name())\n            .spec(PostgresEndpointSpecArgs.builder()\n                .endpointType(\"ENDPOINT_TYPE_READ_ONLY\")\n                .autoscalingLimitMinCu(0.5)\n                .autoscalingLimitMaxCu(8.0)\n                .suspendTimeoutDuration(\"600s\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  prod:\n    type: databricks:PostgresProject\n    properties:\n      projectId: production\n      spec:\n        pgVersion: 17\n        displayName: Production Workloads\n        historyRetentionDuration: 2592000s\n        defaultEndpointSettings:\n          
autoscalingLimitMinCu: 1\n          autoscalingLimitMaxCu: 8\n          suspendTimeoutDuration: 300s\n  main:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: main\n      parent: ${prod.name}\n      spec:\n        noExpiry: true\n  primary:\n    type: databricks:PostgresEndpoint\n    properties:\n      endpointId: primary\n      parent: ${main.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_WRITE\n        autoscalingLimitMinCu: 1\n        autoscalingLimitMaxCu: 9\n        noSuspension: true\n  readReplica:\n    type: databricks:PostgresEndpoint\n    name: read_replica\n    properties:\n      endpointId: read-replica\n      parent: ${main.name}\n      spec:\n        endpointType: ENDPOINT_TYPE_READ_ONLY\n        autoscalingLimitMinCu: 0.5\n        autoscalingLimitMaxCu: 8\n        suspendTimeoutDuration: 600s\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was created\n"},"endpointId":{"type":"string","description":"The ID to use for the Endpoint. This becomes the final component of the endpoint's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`primary`\" pulumi-lang-dotnet=\"`Primary`\" pulumi-lang-go=\"`primary`\" pulumi-lang-python=\"`primary`\" pulumi-lang-yaml=\"`primary`\" pulumi-lang-java=\"`primary`\"\u003e`primary`\u003c/span\u003e becomes `projects/my-app/branches/development/endpoints/primary`\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the endpoint.\nFormat: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}\n"},"parent":{"type":"string","description":"The branch containing this endpoint (API resource hierarchy).\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresEndpointProviderConfig:PostgresEndpointProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresEndpointSpec:PostgresEndpointSpec","description":"The spec contains the compute endpoint configuration, including autoscaling limits, suspend timeout, and disabled state\n"},"status":{"$ref":"#/types/databricks:index/PostgresEndpointStatus:PostgresEndpointStatus","description":"(EndpointStatus) - Current operational status of the compute endpoint\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the endpoint\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was last updated\n"}},"required":["createTime","endpointId","name","parent","spec","status","uid","updateTime"],"inputProperties":{"endpointId":{"type":"string","description":"The ID to use for the Endpoint. 
This becomes the final component of the endpoint's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`primary`\" pulumi-lang-dotnet=\"`Primary`\" pulumi-lang-go=\"`primary`\" pulumi-lang-python=\"`primary`\" pulumi-lang-yaml=\"`primary`\" pulumi-lang-java=\"`primary`\"\u003e`primary`\u003c/span\u003e becomes `projects/my-app/branches/development/endpoints/primary`\n"},"parent":{"type":"string","description":"The branch containing this endpoint (API resource hierarchy).\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresEndpointProviderConfig:PostgresEndpointProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresEndpointSpec:PostgresEndpointSpec","description":"The spec contains the compute endpoint configuration, including autoscaling limits, suspend timeout, and disabled state\n"}},"requiredInputs":["endpointId","parent"],"stateInputs":{"description":"Input properties used for looking up and filtering PostgresEndpoint resources.\n","properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was created\n"},"endpointId":{"type":"string","description":"The ID to use for the Endpoint. This becomes the final component of the endpoint's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, \u003cspan pulumi-lang-nodejs=\"`primary`\" pulumi-lang-dotnet=\"`Primary`\" pulumi-lang-go=\"`primary`\" pulumi-lang-python=\"`primary`\" pulumi-lang-yaml=\"`primary`\" pulumi-lang-java=\"`primary`\"\u003e`primary`\u003c/span\u003e becomes `projects/my-app/branches/development/endpoints/primary`\n"},"name":{"type":"string","description":"(string) - Output only. 
The full resource path of the endpoint.\nFormat: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}\n"},"parent":{"type":"string","description":"The branch containing this endpoint (API resource hierarchy).\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresEndpointProviderConfig:PostgresEndpointProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresEndpointSpec:PostgresEndpointSpec","description":"The spec contains the compute endpoint configuration, including autoscaling limits, suspend timeout, and disabled state\n"},"status":{"$ref":"#/types/databricks:index/PostgresEndpointStatus:PostgresEndpointStatus","description":"(EndpointStatus) - Current operational status of the compute endpoint\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the endpoint\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the compute endpoint was last updated\n"}},"type":"object"}},"databricks:index/postgresProject:PostgresProject":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n## Example Usage\n\n### Basic Project Creation\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PostgresProject(\"this\", {\n    projectId: \"my-project\",\n    spec: {\n        pgVersion: 17,\n        displayName: \"My Application Project\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PostgresProject(\"this\",\n    project_id=\"my-project\",\n    spec={\n        \"pg_version\": 17,\n        \"display_name\": \"My Application Project\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.PostgresProject(\"this\", new()\n    {\n        ProjectId = \"my-project\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n            PgVersion = 17,\n            DisplayName = \"My Application Project\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresProject(ctx, \"this\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"my-project\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:   pulumi.Int(17),\n\t\t\t\tDisplayName: pulumi.String(\"My Application Project\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static 
void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PostgresProject(\"this\", PostgresProjectArgs.builder()\n            .projectId(\"my-project\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(17)\n                .displayName(\"My Application Project\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PostgresProject\n    properties:\n      projectId: my-project\n      spec:\n        pgVersion: 17\n        displayName: My Application Project\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Project with Custom Settings\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PostgresProject(\"this\", {\n    projectId: \"analytics-project\",\n    spec: {\n        pgVersion: 16,\n        displayName: \"Analytics Workloads\",\n        historyRetentionDuration: \"1209600s\",\n        defaultEndpointSettings: {\n            autoscalingLimitMinCu: 1,\n            autoscalingLimitMaxCu: 8,\n            suspendTimeoutDuration: \"300s\",\n        },\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PostgresProject(\"this\",\n    project_id=\"analytics-project\",\n    spec={\n        \"pg_version\": 16,\n        \"display_name\": \"Analytics Workloads\",\n        \"history_retention_duration\": \"1209600s\",\n        \"default_endpoint_settings\": {\n            \"autoscaling_limit_min_cu\": 1,\n            \"autoscaling_limit_max_cu\": 8,\n            \"suspend_timeout_duration\": \"300s\",\n        },\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.PostgresProject(\"this\", new()\n    {\n        ProjectId = \"analytics-project\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n            PgVersion = 16,\n            DisplayName = \"Analytics Workloads\",\n            HistoryRetentionDuration = \"1209600s\",\n            DefaultEndpointSettings = new Databricks.Inputs.PostgresProjectSpecDefaultEndpointSettingsArgs\n            {\n                AutoscalingLimitMinCu = 1,\n                AutoscalingLimitMaxCu = 8,\n                SuspendTimeoutDuration = \"300s\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPostgresProject(ctx, \"this\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"analytics-project\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:                pulumi.Int(16),\n\t\t\t\tDisplayName:              pulumi.String(\"Analytics Workloads\"),\n\t\t\t\tHistoryRetentionDuration: pulumi.String(\"1209600s\"),\n\t\t\t\tDefaultEndpointSettings: \u0026databricks.PostgresProjectSpecDefaultEndpointSettingsArgs{\n\t\t\t\t\tAutoscalingLimitMinCu:  pulumi.Float64(1),\n\t\t\t\t\tAutoscalingLimitMaxCu:  pulumi.Float64(8),\n\t\t\t\t\tSuspendTimeoutDuration: pulumi.String(\"300s\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil 
{\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecDefaultEndpointSettingsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PostgresProject(\"this\", PostgresProjectArgs.builder()\n            .projectId(\"analytics-project\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(16)\n                .displayName(\"Analytics Workloads\")\n                .historyRetentionDuration(\"1209600s\")\n                .defaultEndpointSettings(PostgresProjectSpecDefaultEndpointSettingsArgs.builder()\n                    .autoscalingLimitMinCu(1.0)\n                    .autoscalingLimitMaxCu(8.0)\n                    .suspendTimeoutDuration(\"300s\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PostgresProject\n    properties:\n      projectId: analytics-project\n      spec:\n        pgVersion: 16\n        displayName: Analytics Workloads\n        historyRetentionDuration: 1209600s\n        defaultEndpointSettings:\n          autoscalingLimitMinCu: 1\n          autoscalingLimitMaxCu: 8\n          suspendTimeoutDuration: 300s\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Referencing in Other Resources\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.PostgresProject(\"this\", {\n    projectId: \"my-project\",\n    spec: {\n        pgVersion: 17,\n        displayName: \"My Project\",\n    },\n});\nconst dev = new databricks.PostgresBranch(\"dev\", {\n    branchId: \"dev-branch\",\n    parent: _this.name,\n    spec: {\n        noExpiry: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.PostgresProject(\"this\",\n    project_id=\"my-project\",\n    spec={\n        \"pg_version\": 17,\n        \"display_name\": \"My Project\",\n    })\ndev = databricks.PostgresBranch(\"dev\",\n    branch_id=\"dev-branch\",\n    parent=this.name,\n    spec={\n        \"no_expiry\": True,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.PostgresProject(\"this\", new()\n    {\n        ProjectId = \"my-project\",\n        Spec = new Databricks.Inputs.PostgresProjectSpecArgs\n        {\n            PgVersion = 17,\n            DisplayName = \"My Project\",\n        },\n    });\n\n    var dev = new Databricks.PostgresBranch(\"dev\", new()\n    {\n        BranchId = \"dev-branch\",\n        Parent = @this.Name,\n        Spec = new Databricks.Inputs.PostgresBranchSpecArgs\n        {\n            NoExpiry = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewPostgresProject(ctx, \"this\", \u0026databricks.PostgresProjectArgs{\n\t\t\tProjectId: pulumi.String(\"my-project\"),\n\t\t\tSpec: \u0026databricks.PostgresProjectSpecArgs{\n\t\t\t\tPgVersion:   pulumi.Int(17),\n\t\t\t\tDisplayName: pulumi.String(\"My Project\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewPostgresBranch(ctx, \"dev\", \u0026databricks.PostgresBranchArgs{\n\t\t\tBranchId: pulumi.String(\"dev-branch\"),\n\t\t\tParent:   this.Name,\n\t\t\tSpec: \u0026databricks.PostgresBranchSpecArgs{\n\t\t\t\tNoExpiry: pulumi.Bool(true),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.PostgresProject;\nimport com.pulumi.databricks.PostgresProjectArgs;\nimport com.pulumi.databricks.inputs.PostgresProjectSpecArgs;\nimport com.pulumi.databricks.PostgresBranch;\nimport com.pulumi.databricks.PostgresBranchArgs;\nimport com.pulumi.databricks.inputs.PostgresBranchSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new PostgresProject(\"this\", PostgresProjectArgs.builder()\n            .projectId(\"my-project\")\n            .spec(PostgresProjectSpecArgs.builder()\n                .pgVersion(17)\n                .displayName(\"My Project\")\n                .build())\n            .build());\n\n        var dev = new PostgresBranch(\"dev\", PostgresBranchArgs.builder()\n            .branchId(\"dev-branch\")\n            .parent(this_.name())\n            .spec(PostgresBranchSpecArgs.builder()\n                .noExpiry(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:PostgresProject\n    properties:\n      projectId: my-project\n      spec:\n        pgVersion: 17\n        displayName: My Project\n  dev:\n    type: databricks:PostgresBranch\n    properties:\n      branchId: dev-branch\n      parent: ${this.name}\n      spec:\n        noExpiry: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the project was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the project.\nFormat: projects/{project_id}\n"},"projectId":{"type":"string","description":"The ID to use for the Project. 
This becomes the final component of the project's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, `my-app` becomes `projects/my-app`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresProjectProviderConfig:PostgresProjectProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresProjectSpec:PostgresProjectSpec","description":"The spec contains the project configuration, including display_name,\u003cspan pulumi-lang-nodejs=\" pgVersion \" pulumi-lang-dotnet=\" PgVersion \" pulumi-lang-go=\" pgVersion \" pulumi-lang-python=\" pg_version \" pulumi-lang-yaml=\" pgVersion \" pulumi-lang-java=\" pgVersion \"\u003e pg_version \u003c/span\u003e(Postgres version), history_retention_duration, and default_endpoint_settings\n"},"status":{"$ref":"#/types/databricks:index/PostgresProjectStatus:PostgresProjectStatus","description":"(ProjectStatus) - The current status of a Project\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the project\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the project was last updated\n"}},"required":["createTime","name","projectId","spec","status","uid","updateTime"],"inputProperties":{"projectId":{"type":"string","description":"The ID to use for the Project. This becomes the final component of the project's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, `my-app` becomes `projects/my-app`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresProjectProviderConfig:PostgresProjectProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresProjectSpec:PostgresProjectSpec","description":"The spec contains the project configuration, including display_name,\u003cspan pulumi-lang-nodejs=\" pgVersion \" pulumi-lang-dotnet=\" PgVersion \" pulumi-lang-go=\" pgVersion \" pulumi-lang-python=\" pg_version \" pulumi-lang-yaml=\" pgVersion \" pulumi-lang-java=\" pgVersion \"\u003e pg_version \u003c/span\u003e(Postgres version), history_retention_duration, and default_endpoint_settings\n"}},"requiredInputs":["projectId"],"stateInputs":{"description":"Input properties used for looking up and filtering PostgresProject resources.\n","properties":{"createTime":{"type":"string","description":"(string) - A timestamp indicating when the project was created\n"},"name":{"type":"string","description":"(string) - Output only. The full resource path of the project.\nFormat: projects/{project_id}\n"},"projectId":{"type":"string","description":"The ID to use for the Project. 
This becomes the final component of the project's resource name.\nThe ID is required and must be 1-63 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.\nFor example, `my-app` becomes `projects/my-app`\n"},"providerConfig":{"$ref":"#/types/databricks:index/PostgresProjectProviderConfig:PostgresProjectProviderConfig","description":"Configure the provider for management through account provider.\n"},"spec":{"$ref":"#/types/databricks:index/PostgresProjectSpec:PostgresProjectSpec","description":"The spec contains the project configuration, including display_name,\u003cspan pulumi-lang-nodejs=\" pgVersion \" pulumi-lang-dotnet=\" PgVersion \" pulumi-lang-go=\" pgVersion \" pulumi-lang-python=\" pg_version \" pulumi-lang-yaml=\" pgVersion \" pulumi-lang-java=\" pgVersion \"\u003e pg_version \u003c/span\u003e(Postgres version), history_retention_duration, and default_endpoint_settings\n"},"status":{"$ref":"#/types/databricks:index/PostgresProjectStatus:PostgresProjectStatus","description":"(ProjectStatus) - The current status of a Project\n"},"uid":{"type":"string","description":"(string) - System-generated unique ID for the project\n"},"updateTime":{"type":"string","description":"(string) - A timestamp indicating when the project was last updated\n"}},"type":"object"}},"databricks:index/qualityMonitor:QualityMonitor":{"description":"This resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.QualityMonitor`\" pulumi-lang-dotnet=\"`databricks.QualityMonitor`\" pulumi-lang-go=\"`QualityMonitor`\" pulumi-lang-python=\"`QualityMonitor`\" pulumi-lang-yaml=\"`databricks.QualityMonitor`\" pulumi-lang-java=\"`databricks.QualityMonitor`\"\u003e`databricks.QualityMonitor`\u003c/span\u003e is attached to a\u003cspan pulumi-lang-nodejs=\" databricks.SqlTable \" pulumi-lang-dotnet=\" databricks.SqlTable \" pulumi-lang-go=\" SqlTable \" pulumi-lang-python=\" SqlTable \" pulumi-lang-yaml=\" databricks.SqlTable \" pulumi-lang-java=\" databricks.SqlTable \"\u003e databricks.SqlTable \u003c/span\u003eand can be of type timeseries, snapshot or inference.\n\n## Plugin Framework Migration\n\nThe quality monitor resource has been migrated from sdkv2 to the plugin framework. If you encounter any problem with this resource and suspect it is due to the migration, you can fall back to sdkv2 by setting the environment variable in the following way `export USE_SDK_V2_RESOURCES=\u003cspan pulumi-lang-nodejs=\"\"databricks.QualityMonitor\"\" pulumi-lang-dotnet=\"\"databricks.QualityMonitor\"\" pulumi-lang-go=\"\"QualityMonitor\"\" pulumi-lang-python=\"\"QualityMonitor\"\" pulumi-lang-yaml=\"\"databricks.QualityMonitor\"\" pulumi-lang-java=\"\"databricks.QualityMonitor\"\"\u003e\"databricks.QualityMonitor\"\u003c/span\u003e`.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n       
 kind: \"various\",\n    },\n});\nconst myTestTable = new databricks.SqlTable(\"myTestTable\", {\n    catalogName: \"main\",\n    schemaName: things.name,\n    name: \"bar\",\n    tableType: \"MANAGED\",\n    dataSourceFormat: \"DELTA\",\n    columns: [{\n        name: \"timestamp\",\n        type: \"int\",\n    }],\n});\nconst testTimeseriesMonitor = new databricks.QualityMonitor(\"testTimeseriesMonitor\", {\n    tableName: pulumi.interpolate`${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: pulumi.interpolate`/Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}`,\n    outputSchemaName: pulumi.interpolate`${sandbox.name}.${things.name}`,\n    timeSeries: {\n        granularities: [\"1 hour\"],\n        timestampCol: \"timestamp\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.id,\n    name=\"things\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nmy_test_table = databricks.SqlTable(\"myTestTable\",\n    catalog_name=\"main\",\n    schema_name=things.name,\n    name=\"bar\",\n    table_type=\"MANAGED\",\n    data_source_format=\"DELTA\",\n    columns=[{\n        \"name\": \"timestamp\",\n        \"type\": \"int\",\n    }])\ntest_timeseries_monitor = databricks.QualityMonitor(\"testTimeseriesMonitor\",\n    table_name=pulumi.Output.all(\n        sandboxName=sandbox.name,\n        thingsName=things.name,\n        myTestTableName=my_test_table.name\n).apply(lambda resolved_outputs: f\"{resolved_outputs['sandboxName']}.{resolved_outputs['thingsName']}.{resolved_outputs['myTestTableName']}\")\n,\n    assets_dir=my_test_table.name.apply(lambda name: f\"/Shared/provider-test/databricks_quality_monitoring/{name}\"),\n    output_schema_name=pulumi.Output.all(\n        sandboxName=sandbox.name,\n        thingsName=things.name\n).apply(lambda resolved_outputs: f\"{resolved_outputs['sandboxName']}.{resolved_outputs['thingsName']}\")\n,\n    time_series={\n        \"granularities\": [\"1 hour\"],\n        \"timestamp_col\": \"timestamp\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this database is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var myTestTable = new Databricks.SqlTable(\"myTestTable\", new()\n    {\n        CatalogName = \"main\",\n        SchemaName = things.Name,\n        Name = \"bar\",\n        TableType = \"MANAGED\",\n        DataSourceFormat = \"DELTA\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"timestamp\",\n                Type = \"int\",\n            },\n        },\n    });\n\n    var 
testTimeseriesMonitor = new Databricks.QualityMonitor(\"testTimeseriesMonitor\", new()\n    {\n        TableName = Output.Tuple(sandbox.Name, things.Name, myTestTable.Name).Apply(values =\u003e\n        {\n            var sandboxName = values.Item1;\n            var thingsName = values.Item2;\n            var myTestTableName = values.Item3;\n            return $\"{sandboxName}.{thingsName}.{myTestTableName}\";\n        }),\n        AssetsDir = myTestTable.Name.Apply(name =\u003e $\"/Shared/provider-test/databricks_quality_monitoring/{name}\"),\n        OutputSchemaName = Output.Tuple(sandbox.Name, things.Name).Apply(values =\u003e\n        {\n            var sandboxName = values.Item1;\n            var thingsName = values.Item2;\n            return $\"{sandboxName}.{thingsName}\";\n        }),\n        TimeSeries = new Databricks.Inputs.QualityMonitorTimeSeriesArgs\n        {\n            Granularities = new[]\n            {\n                \"1 hour\",\n            },\n            TimestampCol = \"timestamp\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyTestTable, err := databricks.NewSqlTable(ctx, \"myTestTable\", \u0026databricks.SqlTableArgs{\n\t\t\tCatalogName:      pulumi.String(\"main\"),\n\t\t\tSchemaName:       things.Name,\n\t\t\tName:             pulumi.String(\"bar\"),\n\t\t\tTableType:        pulumi.String(\"MANAGED\"),\n\t\t\tDataSourceFormat: pulumi.String(\"DELTA\"),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName: pulumi.String(\"timestamp\"),\n\t\t\t\t\tType: pulumi.String(\"int\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewQualityMonitor(ctx, \"testTimeseriesMonitor\", \u0026databricks.QualityMonitorArgs{\n\t\t\tTableName: pulumi.All(sandbox.Name, things.Name, myTestTable.Name).ApplyT(func(_args []interface{}) (string, error) {\n\t\t\t\tsandboxName := _args[0].(string)\n\t\t\t\tthingsName := _args[1].(string)\n\t\t\t\tmyTestTableName := _args[2].(string)\n\t\t\t\treturn fmt.Sprintf(\"%v.%v.%v\", sandboxName, thingsName, myTestTableName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tAssetsDir: myTestTable.Name.ApplyT(func(name string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"/Shared/provider-test/databricks_quality_monitoring/%v\", name), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tOutputSchemaName: pulumi.All(sandbox.Name, things.Name).ApplyT(func(_args []interface{}) (string, error) {\n\t\t\t\tsandboxName := _args[0].(string)\n\t\t\t\tthingsName := _args[1].(string)\n\t\t\t\treturn 
fmt.Sprintf(\"%v.%v\", sandboxName, thingsName), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tTimeSeries: \u0026databricks.QualityMonitorTimeSeriesArgs{\n\t\t\t\tGranularities: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"1 hour\"),\n\t\t\t\t},\n\t\t\t\tTimestampCol: pulumi.String(\"timestamp\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport com.pulumi.databricks.QualityMonitor;\nimport com.pulumi.databricks.QualityMonitorArgs;\nimport com.pulumi.databricks.inputs.QualityMonitorTimeSeriesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var myTestTable = new SqlTable(\"myTestTable\", SqlTableArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(things.name())\n            .name(\"bar\")\n            .tableType(\"MANAGED\")\n            .dataSourceFormat(\"DELTA\")\n            .columns(SqlTableColumnArgs.builder()\n                .name(\"timestamp\")\n                .type(\"int\")\n                .build())\n            .build());\n\n        var testTimeseriesMonitor = new QualityMonitor(\"testTimeseriesMonitor\", QualityMonitorArgs.builder()\n            .tableName(Output.tuple(sandbox.name(), things.name(), myTestTable.name()).applyValue(values -\u003e {\n                var sandboxName = values.t1;\n                var thingsName = values.t2;\n                var myTestTableName = values.t3;\n                return String.format(\"%s.%s.%s\", sandboxName,thingsName,myTestTableName);\n            }))\n            .assetsDir(myTestTable.name().applyValue(_name -\u003e String.format(\"/Shared/provider-test/databricks_quality_monitoring/%s\", _name)))\n            .outputSchemaName(Output.tuple(sandbox.name(), things.name()).applyValue(values -\u003e {\n                var sandboxName = values.t1;\n                var thingsName = values.t2;\n                return String.format(\"%s.%s\", sandboxName,thingsName);\n            }))\n            .timeSeries(QualityMonitorTimeSeriesArgs.builder()\n                .granularities(\"1 hour\")\n                .timestampCol(\"timestamp\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by 
terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n  myTestTable:\n    type: databricks:SqlTable\n    properties:\n      catalogName: main\n      schemaName: ${things.name}\n      name: bar\n      tableType: MANAGED\n      dataSourceFormat: DELTA\n      columns:\n        - name: timestamp\n          type: int\n  testTimeseriesMonitor:\n    type: databricks:QualityMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n      assetsDir: /Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      timeSeries:\n        granularities:\n          - 1 hour\n        timestampCol: timestamp\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Inference Monitor\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst testMonitorInference = new databricks.QualityMonitor(\"testMonitorInference\", {\n    tableName: `${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: `/Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}`,\n    outputSchemaName: `${sandbox.name}.${things.name}`,\n    inferenceLog: {\n        granularities: [\"1 hour\"],\n        timestampCol: \"timestamp\",\n        predictionCol: \"prediction\",\n        modelIdCol: \"model_id\",\n        problemType: \"PROBLEM_TYPE_REGRESSION\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest_monitor_inference = databricks.QualityMonitor(\"testMonitorInference\",\n    table_name=f\"{sandbox['name']}.{things['name']}.{my_test_table['name']}\",\n    assets_dir=f\"/Shared/provider-test/databricks_quality_monitoring/{my_test_table['name']}\",\n    output_schema_name=f\"{sandbox['name']}.{things['name']}\",\n    inference_log={\n        \"granularities\": [\"1 hour\"],\n        \"timestamp_col\": \"timestamp\",\n        \"prediction_col\": \"prediction\",\n        \"model_id_col\": \"model_id\",\n        \"problem_type\": \"PROBLEM_TYPE_REGRESSION\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var testMonitorInference = new Databricks.QualityMonitor(\"testMonitorInference\", new()\n    {\n        TableName = $\"{sandbox.Name}.{things.Name}.{myTestTable.Name}\",\n        AssetsDir = $\"/Shared/provider-test/databricks_quality_monitoring/{myTestTable.Name}\",\n        OutputSchemaName = $\"{sandbox.Name}.{things.Name}\",\n        InferenceLog = new Databricks.Inputs.QualityMonitorInferenceLogArgs\n        {\n            Granularities = new[]\n            {\n                \"1 hour\",\n            },\n            TimestampCol = \"timestamp\",\n            PredictionCol = \"prediction\",\n            ModelIdCol = \"model_id\",\n            ProblemType = \"PROBLEM_TYPE_REGRESSION\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewQualityMonitor(ctx, \"testMonitorInference\", 
\u0026databricks.QualityMonitorArgs{\n\t\t\tTableName:        pulumi.Sprintf(\"%v.%v.%v\", sandbox.Name, things.Name, myTestTable.Name),\n\t\t\tAssetsDir:        pulumi.Sprintf(\"/Shared/provider-test/databricks_quality_monitoring/%v\", myTestTable.Name),\n\t\t\tOutputSchemaName: pulumi.Sprintf(\"%v.%v\", sandbox.Name, things.Name),\n\t\t\tInferenceLog: \u0026databricks.QualityMonitorInferenceLogArgs{\n\t\t\t\tGranularities: pulumi.StringArray{\n\t\t\t\t\tpulumi.String(\"1 hour\"),\n\t\t\t\t},\n\t\t\t\tTimestampCol:  pulumi.String(\"timestamp\"),\n\t\t\t\tPredictionCol: pulumi.String(\"prediction\"),\n\t\t\t\tModelIdCol:    pulumi.String(\"model_id\"),\n\t\t\t\tProblemType:   pulumi.String(\"PROBLEM_TYPE_REGRESSION\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.QualityMonitor;\nimport com.pulumi.databricks.QualityMonitorArgs;\nimport com.pulumi.databricks.inputs.QualityMonitorInferenceLogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var testMonitorInference = new QualityMonitor(\"testMonitorInference\", QualityMonitorArgs.builder()\n            .tableName(String.format(\"%s.%s.%s\", sandbox.name(),things.name(),myTestTable.name()))\n            .assetsDir(String.format(\"/Shared/provider-test/databricks_quality_monitoring/%s\", myTestTable.name()))\n            .outputSchemaName(String.format(\"%s.%s\", sandbox.name(),things.name()))\n            .inferenceLog(QualityMonitorInferenceLogArgs.builder()\n                .granularities(\"1 hour\")\n                .timestampCol(\"timestamp\")\n                .predictionCol(\"prediction\")\n                .modelIdCol(\"model_id\")\n                .problemType(\"PROBLEM_TYPE_REGRESSION\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  testMonitorInference:\n    type: databricks:QualityMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n      assetsDir: /Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      inferenceLog:\n        granularities:\n          - 1 hour\n        timestampCol: timestamp\n        predictionCol: prediction\n        modelIdCol: model_id\n        problemType: PROBLEM_TYPE_REGRESSION\n```\n\u003c!--End PulumiCodeChooser --\u003e\n### Snapshot Monitor\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst testMonitorInference = new databricks.QualityMonitor(\"testMonitorInference\", {\n    tableName: `${sandbox.name}.${things.name}.${myTestTable.name}`,\n    assetsDir: `/Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}`,\n    outputSchemaName: `${sandbox.name}.${things.name}`,\n    snapshot: {},\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest_monitor_inference = databricks.QualityMonitor(\"testMonitorInference\",\n    table_name=f\"{sandbox['name']}.{things['name']}.{my_test_table['name']}\",\n    
assets_dir=f\"/Shared/provider-test/databricks_quality_monitoring/{my_test_table['name']}\",\n    output_schema_name=f\"{sandbox['name']}.{things['name']}\",\n    snapshot={})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var testMonitorInference = new Databricks.QualityMonitor(\"testMonitorInference\", new()\n    {\n        TableName = $\"{sandbox.Name}.{things.Name}.{myTestTable.Name}\",\n        AssetsDir = $\"/Shared/provider-test/databricks_quality_monitoring/{myTestTable.Name}\",\n        OutputSchemaName = $\"{sandbox.Name}.{things.Name}\",\n        Snapshot = null,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewQualityMonitor(ctx, \"testMonitorInference\", \u0026databricks.QualityMonitorArgs{\n\t\t\tTableName:        pulumi.Sprintf(\"%v.%v.%v\", sandbox.Name, things.Name, myTestTable.Name),\n\t\t\tAssetsDir:        pulumi.Sprintf(\"/Shared/provider-test/databricks_quality_monitoring/%v\", myTestTable.Name),\n\t\t\tOutputSchemaName: pulumi.Sprintf(\"%v.%v\", sandbox.Name, things.Name),\n\t\t\tSnapshot:         \u0026databricks.QualityMonitorSnapshotArgs{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.QualityMonitor;\nimport com.pulumi.databricks.QualityMonitorArgs;\nimport com.pulumi.databricks.inputs.QualityMonitorSnapshotArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var testMonitorInference = new QualityMonitor(\"testMonitorInference\", QualityMonitorArgs.builder()\n            .tableName(String.format(\"%s.%s.%s\", sandbox.name(),things.name(),myTestTable.name()))\n            .assetsDir(String.format(\"/Shared/provider-test/databricks_quality_monitoring/%s\", myTestTable.name()))\n            .outputSchemaName(String.format(\"%s.%s\", sandbox.name(),things.name()))\n            .snapshot(QualityMonitorSnapshotArgs.builder()\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  testMonitorInference:\n    type: databricks:QualityMonitor\n    properties:\n      tableName: ${sandbox.name}.${things.name}.${myTestTable.name}\n      assetsDir: /Shared/provider-test/databricks_quality_monitoring/${myTestTable.name}\n      outputSchemaName: ${sandbox.name}.${things.name}\n      snapshot: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog\n\" pulumi-lang-dotnet=\" databricks.Catalog\n\" pulumi-lang-go=\" Catalog\n\" pulumi-lang-python=\" Catalog\n\" pulumi-lang-yaml=\" databricks.Catalog\n\" pulumi-lang-java=\" databricks.Catalog\n\"\u003e databricks.Catalog\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.Schema\n\" pulumi-lang-dotnet=\" databricks.Schema\n\" pulumi-lang-go=\" Schema\n\" 
pulumi-lang-python=\" Schema\n\" pulumi-lang-yaml=\" databricks.Schema\n\" pulumi-lang-java=\" databricks.Schema\n\"\u003e databricks.Schema\n\u003c/span\u003e*\u003cspan pulumi-lang-nodejs=\" databricks.SqlTable\n\" pulumi-lang-dotnet=\" databricks.SqlTable\n\" pulumi-lang-go=\" SqlTable\n\" pulumi-lang-python=\" SqlTable\n\" pulumi-lang-yaml=\" databricks.SqlTable\n\" pulumi-lang-java=\" databricks.SqlTable\n\"\u003e databricks.SqlTable\n\u003c/span\u003e\n","properties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (Eg. Dashboard and Metric Tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed from.Columns in the monitored table should also be present in the baseline\ntable.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorCustomMetric:QualityMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dashboardId":{"type":"string","description":"The ID of the generated dashboard.\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/QualityMonitorDataClassificationConfig:QualityMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"driftMetricsTableName":{"type":"string","description":"The full name of the drift metrics table. Format: __catalog_name__.__schema_name__.__table_name__.\n"},"inferenceLog":{"$ref":"#/types/databricks:index/QualityMonitorInferenceLog:QualityMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"monitorId":{"type":"string","description":"ID of this monitor is the same as the full table name of the format `{catalog}.{schema_name}.{table_name}`\n"},"monitorVersion":{"type":"integer","description":"The version of the monitor config (e.g. 1,2,3). If negative, the monitor may be corrupted\n"},"notifications":{"$ref":"#/types/databricks:index/QualityMonitorNotifications:QualityMonitorNotifications","description":"The notification settings for the monitor.  The following optional blocks are supported, each consisting of the single string array field with name \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"profileMetricsTableName":{"type":"string","description":"The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorProviderConfig:QualityMonitorProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schedule":{"$ref":"#/types/databricks:index/QualityMonitorSchedule:QualityMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  
This block consists of the following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.  (Can't be updated after creation).\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/QualityMonitorSnapshot:QualityMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"status":{"type":"string","description":"Status of the Monitor\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor to. It is of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/QualityMonitorTimeSeries:QualityMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.  (Can't be updated after creation)\n"}},"required":["assetsDir","dashboardId","driftMetricsTableName","monitorId","monitorVersion","outputSchemaName","profileMetricsTableName","status","tableName"],"inputProperties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (e.g. dashboards and metric tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed. Columns in the monitored table should also be present in the baseline table.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorCustomMetric:QualityMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/QualityMonitorDataClassificationConfig:QualityMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"inferenceLog":{"$ref":"#/types/databricks:index/QualityMonitorInferenceLog:QualityMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"monitorId":{"type":"string","description":"The ID of this monitor is the same as the full table name, in the format `{catalog}.{schema_name}.{table_name}`\n"},"notifications":{"$ref":"#/types/databricks:index/QualityMonitorNotifications:QualityMonitorNotifications","description":"The notification settings for the monitor.  The following optional blocks are supported, each consisting of a single string array field named \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorProviderConfig:QualityMonitorProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schedule":{"$ref":"#/types/databricks:index/QualityMonitorSchedule:QualityMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  This block consists of the following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.  (Can't be updated after creation).\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/QualityMonitorSnapshot:QualityMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor to. It is of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/QualityMonitorTimeSeries:QualityMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.  (Can't be updated after creation)\n"}},"requiredInputs":["assetsDir","outputSchemaName","tableName"],"stateInputs":{"description":"Input properties used for looking up and filtering QualityMonitor resources.\n","properties":{"assetsDir":{"type":"string","description":"The directory to store the monitoring assets (e.g. dashboards and metric tables)\n"},"baselineTableName":{"type":"string","description":"Name of the baseline table from which drift metrics are computed. Columns in the monitored table should also be present in the baseline table.\n"},"customMetrics":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorCustomMetric:QualityMonitorCustomMetric"},"description":"Custom metrics to compute on the monitored table. These can be aggregate metrics, derived metrics (from already computed aggregate metrics), or drift metrics (comparing metrics across time windows).\n"},"dashboardId":{"type":"string","description":"The ID of the generated dashboard.\n"},"dataClassificationConfig":{"$ref":"#/types/databricks:index/QualityMonitorDataClassificationConfig:QualityMonitorDataClassificationConfig","description":"The data classification config for the monitor\n"},"driftMetricsTableName":{"type":"string","description":"The full name of the drift metrics table. 
Format: __catalog_name__.__schema_name__.__table_name__.\n"},"inferenceLog":{"$ref":"#/types/databricks:index/QualityMonitorInferenceLog:QualityMonitorInferenceLog","description":"Configuration for the inference log monitor\n"},"latestMonitorFailureMsg":{"type":"string"},"monitorId":{"type":"string","description":"ID of this monitor is the same as the full table name of the format `{catalog}.{schema_name}.{table_name}`\n"},"monitorVersion":{"type":"integer","description":"The version of the monitor config (e.g. 1,2,3). If negative, the monitor may be corrupted\n"},"notifications":{"$ref":"#/types/databricks:index/QualityMonitorNotifications:QualityMonitorNotifications","description":"The notification settings for the monitor.  The following optional blocks are supported, each consisting of the single string array field with name \u003cspan pulumi-lang-nodejs=\"`emailAddresses`\" pulumi-lang-dotnet=\"`EmailAddresses`\" pulumi-lang-go=\"`emailAddresses`\" pulumi-lang-python=\"`email_addresses`\" pulumi-lang-yaml=\"`emailAddresses`\" pulumi-lang-java=\"`emailAddresses`\"\u003e`email_addresses`\u003c/span\u003e containing a list of emails to notify:\n"},"outputSchemaName":{"type":"string","description":"Schema where output metric tables are created\n"},"profileMetricsTableName":{"type":"string","description":"The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorProviderConfig:QualityMonitorProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schedule":{"$ref":"#/types/databricks:index/QualityMonitorSchedule:QualityMonitorSchedule","description":"The schedule for automatically updating and refreshing metric tables.  This block consists of following fields:\n"},"skipBuiltinDashboard":{"type":"boolean","description":"Whether to skip creating a default dashboard summarizing data quality metrics.  (Can't be updated after creation).\n"},"slicingExprs":{"type":"array","items":{"type":"string"},"description":"List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.\n"},"snapshot":{"$ref":"#/types/databricks:index/QualityMonitorSnapshot:QualityMonitorSnapshot","description":"Configuration for monitoring snapshot tables.\n"},"status":{"type":"string","description":"Status of the Monitor\n"},"tableName":{"type":"string","description":"The full name of the table to attach the monitor too. Its of the format {catalog}.{schema}.{tableName}\n"},"timeSeries":{"$ref":"#/types/databricks:index/QualityMonitorTimeSeries:QualityMonitorTimeSeries","description":"Configuration for monitoring timeseries tables.\n"},"warehouseId":{"type":"string","description":"Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used.  (Can't be updated after creation)\n"}},"type":"object"}},"databricks:index/qualityMonitorV2:QualityMonitorV2":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n\u003e **Deprecated** This resource is deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`databricks.DataQualityMonitor`\" pulumi-lang-dotnet=\"`databricks.DataQualityMonitor`\" pulumi-lang-go=\"`DataQualityMonitor`\" pulumi-lang-python=\"`DataQualityMonitor`\" pulumi-lang-yaml=\"`databricks.DataQualityMonitor`\" pulumi-lang-java=\"`databricks.DataQualityMonitor`\"\u003e`databricks.DataQualityMonitor`\u003c/span\u003e instead.\n\nUsers with MANAGE Schema can use quality monitor v2 to set up data quality monitoring checks for UC objects; currently, only schemas are supported.\n\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n\n## Example Usage\n\n\u003e **Deprecated** This resource is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`databricks.DataQualityMonitor`\" pulumi-lang-dotnet=\"`databricks.DataQualityMonitor`\" pulumi-lang-go=\"`DataQualityMonitor`\" pulumi-lang-python=\"`DataQualityMonitor`\" pulumi-lang-yaml=\"`databricks.DataQualityMonitor`\" pulumi-lang-java=\"`databricks.DataQualityMonitor`\"\u003e`databricks.DataQualityMonitor`\u003c/span\u003e instead.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Schema(\"this\", {\n    catalogName: \"my_catalog\",\n    name: \"my_schema\",\n});\nconst thisQualityMonitorV2 = new databricks.QualityMonitorV2(\"this\", {\n    objectType: \"schema\",\n    objectId: _this.schemaId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Schema(\"this\",\n    catalog_name=\"my_catalog\",\n    name=\"my_schema\")\nthis_quality_monitor_v2 = databricks.QualityMonitorV2(\"this\",\n    object_type=\"schema\",\n    object_id=this.schema_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Schema(\"this\", new()\n    {\n        CatalogName = \"my_catalog\",\n        Name = \"my_schema\",\n    });\n\n    var thisQualityMonitorV2 = new Databricks.QualityMonitorV2(\"this\", new()\n    {\n        ObjectType = \"schema\",\n        ObjectId = @this.SchemaId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewSchema(ctx, \"this\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: pulumi.String(\"my_catalog\"),\n\t\t\tName:        pulumi.String(\"my_schema\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewQualityMonitorV2(ctx, \"this\", \u0026databricks.QualityMonitorV2Args{\n\t\t\tObjectType: pulumi.String(\"schema\"),\n\t\t\tObjectId:   this.SchemaId,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.QualityMonitorV2;\nimport com.pulumi.databricks.QualityMonitorV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    
}\n\n    public static void stack(Context ctx) {\n        var this_ = new Schema(\"this\", SchemaArgs.builder()\n            .catalogName(\"my_catalog\")\n            .name(\"my_schema\")\n            .build());\n\n        var thisQualityMonitorV2 = new QualityMonitorV2(\"thisQualityMonitorV2\", QualityMonitorV2Args.builder()\n            .objectType(\"schema\")\n            .objectId(this_.schemaId())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Schema\n    properties:\n      catalogName: my_catalog\n      name: my_schema\n  thisQualityMonitorV2:\n    type: databricks:QualityMonitorV2\n    name: this\n    properties:\n      objectType: schema\n      objectId: ${this.schemaId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/QualityMonitorV2AnomalyDetectionConfig:QualityMonitorV2AnomalyDetectionConfig","description":"(AnomalyDetectionConfig)\n"},"objectId":{"type":"string","description":"The uuid of the request object. For example, schema id\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: schema\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorV2ProviderConfig:QualityMonitorV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"validityCheckConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfiguration:QualityMonitorV2ValidityCheckConfiguration"},"description":"Validity check configurations for anomaly detection\n"}},"required":["anomalyDetectionConfig","objectId","objectType","validityCheckConfigurations"],"inputProperties":{"objectId":{"type":"string","description":"The uuid of the request object. For example, schema id\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: schema\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorV2ProviderConfig:QualityMonitorV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"validityCheckConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfiguration:QualityMonitorV2ValidityCheckConfiguration"},"description":"Validity check configurations for anomaly detection\n"}},"requiredInputs":["objectId","objectType"],"stateInputs":{"description":"Input properties used for looking up and filtering QualityMonitorV2 resources.\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/QualityMonitorV2AnomalyDetectionConfig:QualityMonitorV2AnomalyDetectionConfig","description":"(AnomalyDetectionConfig)\n"},"objectId":{"type":"string","description":"The uuid of the request object. For example, schema id\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: schema\n"},"providerConfig":{"$ref":"#/types/databricks:index/QualityMonitorV2ProviderConfig:QualityMonitorV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"validityCheckConfigurations":{"type":"array","items":{"$ref":"#/types/databricks:index/QualityMonitorV2ValidityCheckConfiguration:QualityMonitorV2ValidityCheckConfiguration"},"description":"Validity check configurations for anomaly detection\n"}},"type":"object"}},"databricks:index/query:Query":{"description":"This resource allows you to manage [Databricks SQL Queries](https://docs.databricks.com/en/sql/user/queries/index.html).  It supersedes\u003cspan pulumi-lang-nodejs=\" databricks.SqlQuery \" pulumi-lang-dotnet=\" databricks.SqlQuery \" pulumi-lang-go=\" SqlQuery \" pulumi-lang-python=\" SqlQuery \" pulumi-lang-yaml=\" databricks.SqlQuery \" pulumi-lang-java=\" databricks.SqlQuery \"\u003e databricks.SqlQuery \u003c/span\u003eresource - see migration guide below for more details.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sharedDir = new databricks.Directory(\"shared_dir\", {path: \"/Shared/Queries\"});\n// This will be replaced with new databricks_query resource\nconst _this = new databricks.Query(\"this\", {\n    warehouseId: example.id,\n    displayName: \"My Query Name\",\n    queryText: \"SELECT 42 as value\",\n    parentPath: sharedDir.path,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nshared_dir = databricks.Directory(\"shared_dir\", path=\"/Shared/Queries\")\n# This will be replaced with new databricks_query resource\nthis = databricks.Query(\"this\",\n    warehouse_id=example[\"id\"],\n    display_name=\"My Query Name\",\n    query_text=\"SELECT 42 as value\",\n    parent_path=shared_dir.path)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sharedDir = new Databricks.Directory(\"shared_dir\", new()\n    {\n        Path = \"/Shared/Queries\",\n    });\n\n    // This will be replaced with new databricks_query resource\n    var @this = new Databricks.Query(\"this\", new()\n    {\n        WarehouseId = example.Id,\n        DisplayName = \"My Query Name\",\n        QueryText = \"SELECT 42 as value\",\n        ParentPath = sharedDir.Path,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsharedDir, err := databricks.NewDirectory(ctx, \"shared_dir\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Shared/Queries\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// This will be replaced with new databricks_query resource\n\t\t_, err = databricks.NewQuery(ctx, \"this\", \u0026databricks.QueryArgs{\n\t\t\tWarehouseId: pulumi.Any(example.Id),\n\t\t\tDisplayName: pulumi.String(\"My Query Name\"),\n\t\t\tQueryText:   pulumi.String(\"SELECT 42 as value\"),\n\t\t\tParentPath:  sharedDir.Path,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport 
com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.Query;\nimport com.pulumi.databricks.QueryArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sharedDir = new Directory(\"sharedDir\", DirectoryArgs.builder()\n            .path(\"/Shared/Queries\")\n            .build());\n\n        // This will be replaced with new databricks_query resource\n        var this_ = new Query(\"this\", QueryArgs.builder()\n            .warehouseId(example.id())\n            .displayName(\"My Query Name\")\n            .queryText(\"SELECT 42 as value\")\n            .parentPath(sharedDir.path())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedDir:\n    type: databricks:Directory\n    name: shared_dir\n    properties:\n      path: /Shared/Queries\n  # This will be replaced with new databricks_query resource\n  this:\n    type: databricks:Query\n    properties:\n      warehouseId: ${example.id}\n      displayName: My Query Name\n      queryText: SELECT 42 as value\n      parentPath: ${sharedDir.path}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Migrating from \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e resource\n\nUnder the hood, the new resource uses the same data as the \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e, but exposed via different API. This means that we can migrate existing queries without recreating them.  
This operation is done in few steps:\n\n* Record the ID of existing \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e, for example, by executing the `terraform state show databricks_sql_query.query` command.\n* Create the code for the new implementation performing following changes:\n  * the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e attribute is now named \u003cspan pulumi-lang-nodejs=\"`displayName`\" pulumi-lang-dotnet=\"`DisplayName`\" pulumi-lang-go=\"`displayName`\" pulumi-lang-python=\"`display_name`\" pulumi-lang-yaml=\"`displayName`\" pulumi-lang-java=\"`displayName`\"\u003e`display_name`\u003c/span\u003e\n  * the \u003cspan pulumi-lang-nodejs=\"`parent`\" pulumi-lang-dotnet=\"`Parent`\" pulumi-lang-go=\"`parent`\" pulumi-lang-python=\"`parent`\" pulumi-lang-yaml=\"`parent`\" pulumi-lang-java=\"`parent`\"\u003e`parent`\u003c/span\u003e (if exists) is renamed to \u003cspan pulumi-lang-nodejs=\"`parentPath`\" pulumi-lang-dotnet=\"`ParentPath`\" pulumi-lang-go=\"`parentPath`\" pulumi-lang-python=\"`parent_path`\" pulumi-lang-yaml=\"`parentPath`\" pulumi-lang-java=\"`parentPath`\"\u003e`parent_path`\u003c/span\u003e attribute, and should be converted from `folders/object_id` to the actual path.\n  * Blocks that specify values in the \u003cspan pulumi-lang-nodejs=\"`parameter`\" pulumi-lang-dotnet=\"`Parameter`\" pulumi-lang-go=\"`parameter`\" pulumi-lang-python=\"`parameter`\" pulumi-lang-yaml=\"`parameter`\" pulumi-lang-java=\"`parameter`\"\u003e`parameter`\u003c/span\u003e block were renamed (see above).\n  \nFor example, if we have the original \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e defined as:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst query = new databricks.SqlQuery(\"query\", {\n    dataSourceId: example.dataSourceId,\n    query: \"select 42 as value\",\n    name: \"My Query\",\n    parent: `folders/${sharedDir.objectId}`,\n    parameters: [{\n        name: \"p1\",\n        title: \"Title for p1\",\n        text: {\n            value: \"default\",\n        },\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nquery = databricks.SqlQuery(\"query\",\n    data_source_id=example[\"dataSourceId\"],\n    query=\"select 42 as value\",\n    name=\"My Query\",\n    parent=f\"folders/{shared_dir['objectId']}\",\n    parameters=[{\n        \"name\": \"p1\",\n        \"title\": \"Title for p1\",\n        \"text\": {\n            \"value\": \"default\",\n        },\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var query = new Databricks.SqlQuery(\"query\", new()\n    {\n        DataSourceId = example.DataSourceId,\n        Query = 
\"select 42 as value\",\n        Name = \"My Query\",\n        Parent = $\"folders/{sharedDir.ObjectId}\",\n        Parameters = new[]\n        {\n            new Databricks.Inputs.SqlQueryParameterArgs\n            {\n                Name = \"p1\",\n                Title = \"Title for p1\",\n                Text = new Databricks.Inputs.SqlQueryParameterTextArgs\n                {\n                    Value = \"default\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlQuery(ctx, \"query\", \u0026databricks.SqlQueryArgs{\n\t\t\tDataSourceId: pulumi.Any(example.DataSourceId),\n\t\t\tQuery:        pulumi.String(\"select 42 as value\"),\n\t\t\tName:         pulumi.String(\"My Query\"),\n\t\t\tParent:       pulumi.Sprintf(\"folders/%v\", sharedDir.ObjectId),\n\t\t\tParameters: databricks.SqlQueryParameterArray{\n\t\t\t\t\u0026databricks.SqlQueryParameterArgs{\n\t\t\t\t\tName:  pulumi.String(\"p1\"),\n\t\t\t\t\tTitle: pulumi.String(\"Title for p1\"),\n\t\t\t\t\tText: \u0026databricks.SqlQueryParameterTextArgs{\n\t\t\t\t\t\tValue: pulumi.String(\"default\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlQuery;\nimport com.pulumi.databricks.SqlQueryArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterTextArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var query = new SqlQuery(\"query\", SqlQueryArgs.builder()\n            .dataSourceId(example.dataSourceId())\n            .query(\"select 42 as value\")\n            .name(\"My Query\")\n            .parent(String.format(\"folders/%s\", sharedDir.objectId()))\n            .parameters(SqlQueryParameterArgs.builder()\n                .name(\"p1\")\n                .title(\"Title for p1\")\n                .text(SqlQueryParameterTextArgs.builder()\n                    .value(\"default\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  query:\n    type: databricks:SqlQuery\n    properties:\n      dataSourceId: ${example.dataSourceId}\n      query: select 42 as value\n      name: My Query\n      parent: folders/${sharedDir.objectId}\n      parameters:\n        - name: p1\n          title: Title for p1\n          text:\n            value: default\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nwe'll have a new resource defined as:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst query = new databricks.Query(\"query\", {\n    warehouseId: example.id,\n    queryText: \"select 42 as value\",\n    displayName: \"My Query\",\n    parentPath: sharedDir.path,\n    parameters: [{\n        name: \"p1\",\n        title: \"Title for p1\",\n        
textValue: {\n            value: \"default\",\n        },\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nquery = databricks.Query(\"query\",\n    warehouse_id=example[\"id\"],\n    query_text=\"select 42 as value\",\n    display_name=\"My Query\",\n    parent_path=shared_dir[\"path\"],\n    parameters=[{\n        \"name\": \"p1\",\n        \"title\": \"Title for p1\",\n        \"text_value\": {\n            \"value\": \"default\",\n        },\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var query = new Databricks.Query(\"query\", new()\n    {\n        WarehouseId = example.Id,\n        QueryText = \"select 42 as value\",\n        DisplayName = \"My Query\",\n        ParentPath = sharedDir.Path,\n        Parameters = new[]\n        {\n            new Databricks.Inputs.QueryParameterArgs\n            {\n                Name = \"p1\",\n                Title = \"Title for p1\",\n                TextValue = new Databricks.Inputs.QueryParameterTextValueArgs\n                {\n                    Value = \"default\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewQuery(ctx, \"query\", \u0026databricks.QueryArgs{\n\t\t\tWarehouseId: pulumi.Any(example.Id),\n\t\t\tQueryText:   pulumi.String(\"select 42 as value\"),\n\t\t\tDisplayName: pulumi.String(\"My Query\"),\n\t\t\tParentPath:  pulumi.Any(sharedDir.Path),\n\t\t\tParameters: databricks.QueryParameterArray{\n\t\t\t\t\u0026databricks.QueryParameterArgs{\n\t\t\t\t\tName:  pulumi.String(\"p1\"),\n\t\t\t\t\tTitle: pulumi.String(\"Title for p1\"),\n\t\t\t\t\tTextValue: \u0026databricks.QueryParameterTextValueArgs{\n\t\t\t\t\t\tValue: pulumi.String(\"default\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Query;\nimport com.pulumi.databricks.QueryArgs;\nimport com.pulumi.databricks.inputs.QueryParameterArgs;\nimport com.pulumi.databricks.inputs.QueryParameterTextValueArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var query = new Query(\"query\", QueryArgs.builder()\n            .warehouseId(example.id())\n            .queryText(\"select 42 as value\")\n            .displayName(\"My Query\")\n            .parentPath(sharedDir.path())\n            .parameters(QueryParameterArgs.builder()\n                .name(\"p1\")\n                .title(\"Title for p1\")\n                .textValue(QueryParameterTextValueArgs.builder()\n                    .value(\"default\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  query:\n    type: databricks:Query\n    properties:\n      warehouseId: ${example.id}\n      queryText: select 42 as value\n      
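# note: 'name' is now 'displayName' and 'parent' is now 'parentPath'\n      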
displayName: My Query\n      parentPath: ${sharedDir.path}\n      parameters:\n        - name: p1\n          title: Title for p1\n          textValue:\n            value: default\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\u003cspan pulumi-lang-nodejs=\"\ndatabricks.Permissions \" pulumi-lang-dotnet=\"\ndatabricks.Permissions \" pulumi-lang-go=\"\nPermissions \" pulumi-lang-python=\"\nPermissions \" pulumi-lang-yaml=\"\ndatabricks.Permissions \" pulumi-lang-java=\"\ndatabricks.Permissions \"\u003e\ndatabricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Edit*, *Run* or *View* individual queries.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst queryUsage = new databricks.Permissions(\"query_usage\", {\n    sqlQueryId: query.id,\n    accessControls: [{\n        groupName: \"users\",\n        permissionLevel: \"CAN_RUN\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nquery_usage = databricks.Permissions(\"query_usage\",\n    sql_query_id=query[\"id\"],\n    access_controls=[{\n        \"group_name\": \"users\",\n        \"permission_level\": \"CAN_RUN\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var queryUsage = new Databricks.Permissions(\"query_usage\", new()\n    {\n        SqlQueryId = query.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = \"users\",\n                PermissionLevel = \"CAN_RUN\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissions(ctx, \"query_usage\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlQueryId: pulumi.Any(query.Id),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var queryUsage = new Permissions(\"queryUsage\", PermissionsArgs.builder()\n            .sqlQueryId(query.id())\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .groupName(\"users\")\n                .permissionLevel(\"CAN_RUN\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  queryUsage:\n    type: databricks:Permissions\n    
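# grant the users group run-only (CAN_RUN) access to the query defined above\n    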
name: query_usage\n    properties:\n      sqlQueryId: ${query.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_RUN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Alert \" pulumi-lang-dotnet=\" databricks.Alert \" pulumi-lang-go=\" Alert \" pulumi-lang-python=\" Alert \" pulumi-lang-yaml=\" databricks.Alert \" pulumi-lang-java=\" databricks.Alert \"\u003e databricks.Alert \u003c/span\u003eto manage [Databricks SQL Alerts](https://docs.databricks.com/en/sql/user/alerts/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage [Databricks SQL Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workpace](https://docs.databricks.com/workspace/workspace-objects.html).\n\n","properties":{"applyAutoLimit":{"type":"boolean","description":"Whether to apply a 1000 row limit to the query result.\n"},"catalog":{"type":"string","description":"Name of the catalog where this query will be executed.\n"},"createTime":{"type":"string","description":"The timestamp string indicating when the query was created.\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"displayName":{"type":"string","description":"Name of the query.\n"},"lastModifierUserName":{"type":"string","description":"Username of the user who last saved changes to this query.\n"},"lifecycleState":{"type":"string","description":"The workspace state of the query. Used for tracking trashed status. (Possible values are `ACTIVE` or `TRASHED`).\n"},"ownerUserName":{"type":"string","description":"Query owner's username.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/QueryParameter:QueryParameter"},"description":"Query parameter definition.  Consists of following attributes (one of `*_value` is required):\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the query. The default is the user's home folder.  If changed, the query will be recreated.\n"},"providerConfig":{"$ref":"#/types/databricks:index/QueryProviderConfig:QueryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryText":{"type":"string","description":"Text of SQL query.\n"},"runAsMode":{"type":"string","description":"Sets the \"Run as\" role for the object.  
Should be one of `OWNER`, `VIEWER`.\n"},"schema":{"type":"string","description":"Name of the schema where this query will be executed.\n"},"tags":{"type":"array","items":{"type":"string"},"description":"Tags that will be added to the query.\n"},"updateTime":{"type":"string","description":"The timestamp string indicating when the query was updated.\n"},"warehouseId":{"type":"string","description":"ID of a SQL warehouse which will be used to execute this query.\n"}},"required":["createTime","displayName","lastModifierUserName","lifecycleState","queryText","updateTime","warehouseId"],"inputProperties":{"applyAutoLimit":{"type":"boolean","description":"Whether to apply a 1000 row limit to the query result.\n"},"catalog":{"type":"string","description":"Name of the catalog where this query will be executed.\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"displayName":{"type":"string","description":"Name of the query.\n"},"ownerUserName":{"type":"string","description":"Query owner's username.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/QueryParameter:QueryParameter"},"description":"Query parameter definition.  Consists of following attributes (one of `*_value` is required):\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the query. The default is the user's home folder.  If changed, the query will be recreated.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/QueryProviderConfig:QueryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryText":{"type":"string","description":"Text of SQL query.\n"},"runAsMode":{"type":"string","description":"Sets the \"Run as\" role for the object.  Should be one of `OWNER`, `VIEWER`.\n"},"schema":{"type":"string","description":"Name of the schema where this query will be executed.\n"},"tags":{"type":"array","items":{"type":"string"},"description":"Tags that will be added to the query.\n"},"warehouseId":{"type":"string","description":"ID of a SQL warehouse which will be used to execute this query.\n"}},"requiredInputs":["displayName","queryText","warehouseId"],"stateInputs":{"description":"Input properties used for looking up and filtering Query resources.\n","properties":{"applyAutoLimit":{"type":"boolean","description":"Whether to apply a 1000 row limit to the query result.\n"},"catalog":{"type":"string","description":"Name of the catalog where this query will be executed.\n"},"createTime":{"type":"string","description":"The timestamp string indicating when the query was created.\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"displayName":{"type":"string","description":"Name of the query.\n"},"lastModifierUserName":{"type":"string","description":"Username of the user who last saved changes to this query.\n"},"lifecycleState":{"type":"string","description":"The workspace state of the query. Used for tracking trashed status. (Possible values are `ACTIVE` or `TRASHED`).\n"},"ownerUserName":{"type":"string","description":"Query owner's username.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/QueryParameter:QueryParameter"},"description":"Query parameter definition.  
Consists of following attributes (one of `*_value` is required):\n"},"parentPath":{"type":"string","description":"The path to a workspace folder containing the query. The default is the user's home folder.  If changed, the query will be recreated.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/QueryProviderConfig:QueryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryText":{"type":"string","description":"Text of SQL query.\n"},"runAsMode":{"type":"string","description":"Sets the \"Run as\" role for the object.  Should be one of `OWNER`, `VIEWER`.\n"},"schema":{"type":"string","description":"Name of the schema where this query will be executed.\n"},"tags":{"type":"array","items":{"type":"string"},"description":"Tags that will be added to the query.\n"},"updateTime":{"type":"string","description":"The timestamp string indicating when the query was updated.\n"},"warehouseId":{"type":"string","description":"ID of a SQL warehouse which will be used to execute this query.\n"}},"type":"object"}},"databricks:index/recipient:Recipient":{"description":"\u003e This resource can only be used with a workspace-level provider!\n\nIn Delta Sharing, a recipient is an entity that receives shares from a provider. In Unity Catalog, a share is a securable object that represents an organization and associates it with a credential or secure sharing identifier that allows that organization to access one or more shares.\n\nAs a data provider (sharer), you can define multiple recipients for any given Unity Catalog metastore, but if you want to share data from multiple metastores with a particular user or group of users, you must define the recipient separately for each metastore. A recipient can have access to multiple shares.\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.Recipient`\" pulumi-lang-dotnet=\"`databricks.Recipient`\" pulumi-lang-go=\"`Recipient`\" pulumi-lang-python=\"`Recipient`\" pulumi-lang-yaml=\"`databricks.Recipient`\" pulumi-lang-java=\"`databricks.Recipient`\"\u003e`databricks.Recipient`\u003c/span\u003e is contained within\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eand can have permissions to `SELECT` from a list of shares.\n\n## Example Usage\n\n### Databricks Sharing with non databricks recipient\n\nSetting \u003cspan pulumi-lang-nodejs=\"`authenticationType`\" pulumi-lang-dotnet=\"`AuthenticationType`\" pulumi-lang-go=\"`authenticationType`\" pulumi-lang-python=\"`authentication_type`\" pulumi-lang-yaml=\"`authenticationType`\" pulumi-lang-java=\"`authenticationType`\"\u003e`authentication_type`\u003c/span\u003e type to `TOKEN` creates a temporary url to download a credentials file. This is used to\nauthenticate to the sharing server to access data. 
This is for when the recipient is not using Databricks.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as random from \"@pulumi/random\";\n\nconst db2opensharecode = new random.index.Password(\"db2opensharecode\", {\n    length: 16,\n    special: true,\n});\nconst current = databricks.getCurrentUser({});\nconst db2open = new databricks.Recipient(\"db2open\", {\n    name: current.then(current =\u003e `${current.alphanumeric}-recipient`),\n    comment: \"Made by Pulumi\",\n    authenticationType: \"TOKEN\",\n    sharingCode: db2opensharecode.result,\n    ipAccessList: {\n        allowedIpAddresses: [],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_random as random\n\ndb2opensharecode = random.index.Password(\"db2opensharecode\",\n    length=16,\n    special=True)\ncurrent = databricks.get_current_user()\ndb2open = databricks.Recipient(\"db2open\",\n    name=f\"{current.alphanumeric}-recipient\",\n    comment=\"Made by Pulumi\",\n    authentication_type=\"TOKEN\",\n    sharing_code=db2opensharecode[\"result\"],\n    ip_access_list={\n        \"allowed_ip_addresses\": [],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Random = Pulumi.Random;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var db2opensharecode = new Random.Index.Password(\"db2opensharecode\", new()\n    {\n        Length = 16,\n        Special = true,\n    });\n\n    var current = Databricks.GetCurrentUser.Invoke();\n\n    var db2open = new Databricks.Recipient(\"db2open\", new()\n    {\n        Name = $\"{current.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)}-recipient\",\n        Comment = \"Made by Pulumi\",\n        AuthenticationType = \"TOKEN\",\n        SharingCode = db2opensharecode.Result,\n        IpAccessList = new Databricks.Inputs.RecipientIpAccessListArgs\n        {\n            AllowedIpAddresses = new() { },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-random/sdk/v4/go/random\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tdb2opensharecode, err := random.NewPassword(ctx, \"db2opensharecode\", \u0026random.PasswordArgs{\n\t\t\tLength:  16,\n\t\t\tSpecial: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcurrent, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewRecipient(ctx, \"db2open\", \u0026databricks.RecipientArgs{\n\t\t\tName:               pulumi.Sprintf(\"%v-recipient\", current.Alphanumeric),\n\t\t\tComment:            pulumi.String(\"Made by Pulumi\"),\n\t\t\tAuthenticationType: pulumi.String(\"TOKEN\"),\n\t\t\tSharingCode:        db2opensharecode.Result,\n\t\t\tIpAccessList: \u0026databricks.RecipientIpAccessListArgs{\n\t\t\t\tAllowedIpAddresses: pulumi.StringArray{},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.random.Password;\nimport com.pulumi.random.PasswordArgs;\nimport 
com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.Recipient;\nimport com.pulumi.databricks.RecipientArgs;\nimport com.pulumi.databricks.inputs.RecipientIpAccessListArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var db2opensharecode = new Password(\"db2opensharecode\", PasswordArgs.builder()\n            .length(16)\n            .special(true)\n            .build());\n\n        final var current = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var db2open = new Recipient(\"db2open\", RecipientArgs.builder()\n            .name(String.format(\"%s-recipient\", current.alphanumeric()))\n            .comment(\"Made by Pulumi\")\n            .authenticationType(\"TOKEN\")\n            .sharingCode(db2opensharecode.result())\n            .ipAccessList(RecipientIpAccessListArgs.builder()\n                .allowedIpAddresses()\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  db2opensharecode:\n    type: random:Password\n    properties:\n      length: 16\n      special: true\n  db2open:\n    type: databricks:Recipient\n    properties:\n      name: ${current.alphanumeric}-recipient\n      comment: Made by Pulumi\n      authenticationType: TOKEN\n      sharingCode: ${db2opensharecode.result}\n      ipAccessList:\n        allowedIpAddresses: []\nvariables:\n  current:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Databricks to Databricks Sharing\n\nSetting \u003cspan pulumi-lang-nodejs=\"`authenticationType`\" pulumi-lang-dotnet=\"`AuthenticationType`\" pulumi-lang-go=\"`authenticationType`\" pulumi-lang-python=\"`authentication_type`\" pulumi-lang-yaml=\"`authenticationType`\" pulumi-lang-java=\"`authenticationType`\"\u003e`authentication_type`\u003c/span\u003e type to `DATABRICKS` allows you to automatically create a provider for a recipient who\nis using Databricks. To do this they would need to provide the global metastore id that you will be sharing with. 
The\nglobal metastore id follows the format: `\u003ccloud\u003e:\u003cregion\u003e:\u003cguid\u003e`\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst current = databricks.getCurrentUser({});\nconst recipientMetastore = new databricks.Metastore(\"recipient_metastore\", {\n    name: \"recipient\",\n    storageRoot: std.format({\n        input: \"abfss://%s@%s.dfs.core.windows.net/\",\n        args: [\n            unityCatalog.name,\n            unityCatalogAzurermStorageAccount.name,\n        ],\n    }).then(invoke =\u003e invoke.result),\n    deltaSharingScope: \"INTERNAL\",\n    deltaSharingRecipientTokenLifetimeInSeconds: 60000000,\n    forceDestroy: true,\n});\nconst db2db = new databricks.Recipient(\"db2db\", {\n    name: current.then(current =\u003e `${current.alphanumeric}-recipient`),\n    comment: \"Made by Pulumi\",\n    authenticationType: \"DATABRICKS\",\n    dataRecipientGlobalMetastoreId: recipientMetastore.globalMetastoreId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\ncurrent = databricks.get_current_user()\nrecipient_metastore = databricks.Metastore(\"recipient_metastore\",\n    name=\"recipient\",\n    storage_root=std.format(input=\"abfss://%s@%s.dfs.core.windows.net/\",\n        args=[\n            unity_catalog[\"name\"],\n            unity_catalog_azurerm_storage_account[\"name\"],\n        ]).result,\n    delta_sharing_scope=\"INTERNAL\",\n    delta_sharing_recipient_token_lifetime_in_seconds=60000000,\n    force_destroy=True)\ndb2db = databricks.Recipient(\"db2db\",\n    name=f\"{current.alphanumeric}-recipient\",\n    comment=\"Made by Pulumi\",\n    authentication_type=\"DATABRICKS\",\n    data_recipient_global_metastore_id=recipient_metastore.global_metastore_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var current = Databricks.GetCurrentUser.Invoke();\n\n    var recipientMetastore = new Databricks.Metastore(\"recipient_metastore\", new()\n    {\n        Name = \"recipient\",\n        StorageRoot = Std.Format.Invoke(new()\n        {\n            Input = \"abfss://%s@%s.dfs.core.windows.net/\",\n            Args = new[]\n            {\n                unityCatalog.Name,\n                unityCatalogAzurermStorageAccount.Name,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        DeltaSharingScope = \"INTERNAL\",\n        DeltaSharingRecipientTokenLifetimeInSeconds = 60000000,\n        ForceDestroy = true,\n    });\n\n    var db2db = new Databricks.Recipient(\"db2db\", new()\n    {\n        Name = $\"{current.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)}-recipient\",\n        Comment = \"Made by Pulumi\",\n        AuthenticationType = \"DATABRICKS\",\n        DataRecipientGlobalMetastoreId = recipientMetastore.GlobalMetastoreId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcurrent, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"abfss://%s@%s.dfs.core.windows.net/\",\n\t\t\tArgs: []interface{}{\n\t\t\t\tunityCatalog.Name,\n\t\t\t\tunityCatalogAzurermStorageAccount.Name,\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\trecipientMetastore, err := databricks.NewMetastore(ctx, \"recipient_metastore\", \u0026databricks.MetastoreArgs{\n\t\t\tName:              pulumi.String(\"recipient\"),\n\t\t\tStorageRoot:       pulumi.String(invokeFormat.Result),\n\t\t\tDeltaSharingScope: pulumi.String(\"INTERNAL\"),\n\t\t\tDeltaSharingRecipientTokenLifetimeInSeconds: pulumi.Int(60000000),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewRecipient(ctx, \"db2db\", \u0026databricks.RecipientArgs{\n\t\t\tName:                           pulumi.Sprintf(\"%v-recipient\", current.Alphanumeric),\n\t\t\tComment:                        pulumi.String(\"Made by Pulumi\"),\n\t\t\tAuthenticationType:             pulumi.String(\"DATABRICKS\"),\n\t\t\tDataRecipientGlobalMetastoreId: recipientMetastore.GlobalMetastoreId,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport com.pulumi.databricks.Recipient;\nimport com.pulumi.databricks.RecipientArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var current = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var recipientMetastore = new Metastore(\"recipientMetastore\", MetastoreArgs.builder()\n            .name(\"recipient\")\n            .storageRoot(StdFunctions.format(FormatArgs.builder()\n                .input(\"abfss://%s@%s.dfs.core.windows.net/\")\n                .args(                \n                    unityCatalog.name(),\n                    unityCatalogAzurermStorageAccount.name())\n                .build()).result())\n            .deltaSharingScope(\"INTERNAL\")\n            .deltaSharingRecipientTokenLifetimeInSeconds(60000000)\n            .forceDestroy(true)\n            .build());\n\n        var db2db = new Recipient(\"db2db\", RecipientArgs.builder()\n            .name(String.format(\"%s-recipient\", current.alphanumeric()))\n            .comment(\"Made by Pulumi\")\n            .authenticationType(\"DATABRICKS\")\n            .dataRecipientGlobalMetastoreId(recipientMetastore.globalMetastoreId())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  recipientMetastore:\n    type: databricks:Metastore\n    name: recipient_metastore\n    properties:\n      name: recipient\n      storageRoot:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: abfss://%s@%s.dfs.core.windows.net/\n            args:\n              - ${unityCatalog.name}\n              - 
${unityCatalogAzurermStorageAccount.name}\n          return: result\n      deltaSharingScope: INTERNAL\n      deltaSharingRecipientTokenLifetimeInSeconds: '60000000'\n      forceDestroy: true\n  db2db:\n    type: databricks:Recipient\n    properties:\n      name: ${current.alphanumeric}-recipient\n      comment: Made by Pulumi\n      authenticationType: DATABRICKS\n      dataRecipientGlobalMetastoreId: ${recipientMetastore.globalMetastoreId}\nvariables:\n  current:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003eto create Delta Sharing shares.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage Delta Sharing permissions.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getShares \" pulumi-lang-dotnet=\" databricks.getShares \" pulumi-lang-go=\" getShares \" pulumi-lang-python=\" get_shares \" pulumi-lang-yaml=\" databricks.getShares \" pulumi-lang-java=\" databricks.getShares \"\u003e databricks.getShares \u003c/span\u003eto read existing Delta Sharing shares.\n\n","properties":{"activated":{"type":"boolean"},"activationUrl":{"type":"string","description":"Full activation URL to retrieve the access token. It will be empty if the token is already retrieved.\n"},"authenticationType":{"type":"string","description":"The delta sharing authentication type. Valid values are `TOKEN` and `DATABRICKS`.\n"},"cloud":{"type":"string","description":"Cloud vendor of the recipient's Unity Catalog Metstore. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"comment":{"type":"string","description":"Description about the recipient.\n"},"createdAt":{"type":"integer","description":"Time at which this recipient was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of recipient creator.\n"},"dataRecipientGlobalMetastoreId":{"type":"string","description":"Required when \u003cspan pulumi-lang-nodejs=\"`authenticationType`\" pulumi-lang-dotnet=\"`AuthenticationType`\" pulumi-lang-go=\"`authenticationType`\" pulumi-lang-python=\"`authentication_type`\" pulumi-lang-yaml=\"`authenticationType`\" pulumi-lang-java=\"`authenticationType`\"\u003e`authentication_type`\u003c/span\u003e is `DATABRICKS`.\n"},"expirationTime":{"type":"integer","description":"Expiration timestamp of the token in epoch milliseconds.\n"},"ipAccessList":{"$ref":"#/types/databricks:index/RecipientIpAccessList:RecipientIpAccessList","description":"Recipient IP access list.\n"},"metastoreId":{"type":"string","description":"Unique identifier of recipient's Unity Catalog metastore. 
This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"name":{"type":"string","description":"Name of recipient. Change forces creation of a new resource.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the recipient owner.\n"},"propertiesKvpairs":{"$ref":"#/types/databricks:index/RecipientPropertiesKvpairs:RecipientPropertiesKvpairs","description":"Recipient properties - object consisting of following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/RecipientProviderConfig:RecipientProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"region":{"type":"string","description":"Cloud region of the recipient's Unity Catalog Metstore. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"sharingCode":{"type":"string","description":"The one-time sharing code provided by the data recipient.\n","secret":true},"tokens":{"type":"array","items":{"$ref":"#/types/databricks:index/RecipientToken:RecipientToken"},"description":"List of Recipient Tokens. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis TOKEN. Each list element is an object with following attributes:\n"},"updatedAt":{"type":"integer","description":"Time at which this recipient was updated, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of recipient Token updater.\n"}},"required":["activated","activationUrl","authenticationType","cloud","createdAt","createdBy","metastoreId","name","region","tokens","updatedAt","updatedBy"],"inputProperties":{"authenticationType":{"type":"string","description":"The delta sharing authentication type. 
Valid values are `TOKEN` and `DATABRICKS`.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"Description about the recipient.\n"},"dataRecipientGlobalMetastoreId":{"type":"string","description":"Required when \u003cspan pulumi-lang-nodejs=\"`authenticationType`\" pulumi-lang-dotnet=\"`AuthenticationType`\" pulumi-lang-go=\"`authenticationType`\" pulumi-lang-python=\"`authentication_type`\" pulumi-lang-yaml=\"`authenticationType`\" pulumi-lang-java=\"`authenticationType`\"\u003e`authentication_type`\u003c/span\u003e is `DATABRICKS`.\n","willReplaceOnChanges":true},"expirationTime":{"type":"integer","description":"Expiration timestamp of the token in epoch milliseconds.\n"},"ipAccessList":{"$ref":"#/types/databricks:index/RecipientIpAccessList:RecipientIpAccessList","description":"Recipient IP access list.\n"},"name":{"type":"string","description":"Name of recipient. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the recipient owner.\n"},"propertiesKvpairs":{"$ref":"#/types/databricks:index/RecipientPropertiesKvpairs:RecipientPropertiesKvpairs","description":"Recipient properties - object consisting of following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/RecipientProviderConfig:RecipientProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"sharingCode":{"type":"string","description":"The one-time sharing code provided by the data recipient.\n","secret":true,"willReplaceOnChanges":true},"tokens":{"type":"array","items":{"$ref":"#/types/databricks:index/RecipientToken:RecipientToken"},"description":"List of Recipient Tokens. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis TOKEN. Each list element is an object with following attributes:\n"}},"requiredInputs":["authenticationType"],"stateInputs":{"description":"Input properties used for looking up and filtering Recipient resources.\n","properties":{"activated":{"type":"boolean"},"activationUrl":{"type":"string","description":"Full activation URL to retrieve the access token. It will be empty if the token is already retrieved.\n"},"authenticationType":{"type":"string","description":"The delta sharing authentication type. Valid values are `TOKEN` and `DATABRICKS`.\n","willReplaceOnChanges":true},"cloud":{"type":"string","description":"Cloud vendor of the recipient's Unity Catalog Metstore. 
This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"comment":{"type":"string","description":"Description about the recipient.\n"},"createdAt":{"type":"integer","description":"Time at which this recipient was created, in epoch milliseconds.\n"},"createdBy":{"type":"string","description":"Username of recipient creator.\n"},"dataRecipientGlobalMetastoreId":{"type":"string","description":"Required when \u003cspan pulumi-lang-nodejs=\"`authenticationType`\" pulumi-lang-dotnet=\"`AuthenticationType`\" pulumi-lang-go=\"`authenticationType`\" pulumi-lang-python=\"`authentication_type`\" pulumi-lang-yaml=\"`authenticationType`\" pulumi-lang-java=\"`authenticationType`\"\u003e`authentication_type`\u003c/span\u003e is `DATABRICKS`.\n","willReplaceOnChanges":true},"expirationTime":{"type":"integer","description":"Expiration timestamp of the token in epoch milliseconds.\n"},"ipAccessList":{"$ref":"#/types/databricks:index/RecipientIpAccessList:RecipientIpAccessList","description":"Recipient IP access list.\n"},"metastoreId":{"type":"string","description":"Unique identifier of recipient's Unity Catalog metastore. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"name":{"type":"string","description":"Name of recipient. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the recipient owner.\n"},"propertiesKvpairs":{"$ref":"#/types/databricks:index/RecipientPropertiesKvpairs:RecipientPropertiesKvpairs","description":"Recipient properties - object consisting of following fields:\n"},"providerConfig":{"$ref":"#/types/databricks:index/RecipientProviderConfig:RecipientProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"region":{"type":"string","description":"Cloud region of the recipient's Unity Catalog Metstore. This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis `DATABRICKS`.\n"},"sharingCode":{"type":"string","description":"The one-time sharing code provided by the data recipient.\n","secret":true,"willReplaceOnChanges":true},"tokens":{"type":"array","items":{"$ref":"#/types/databricks:index/RecipientToken:RecipientToken"},"description":"List of Recipient Tokens. 
This field is only present when the\u003cspan pulumi-lang-nodejs=\" authenticationType \" pulumi-lang-dotnet=\" AuthenticationType \" pulumi-lang-go=\" authenticationType \" pulumi-lang-python=\" authentication_type \" pulumi-lang-yaml=\" authenticationType \" pulumi-lang-java=\" authenticationType \"\u003e authentication_type \u003c/span\u003eis TOKEN. Each list element is an object with following attributes:\n"},"updatedAt":{"type":"integer","description":"Time at which this recipient was updated, in epoch milliseconds.\n"},"updatedBy":{"type":"string","description":"Username of recipient Token updater.\n"}},"type":"object"}},"databricks:index/registeredModel:RegisteredModel":{"description":"This resource allows you to create [Models in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.RegisteredModel(\"this\", {\n    name: \"my_model\",\n    catalogName: \"main\",\n    schemaName: \"default\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.RegisteredModel(\"this\",\n    name=\"my_model\",\n    catalog_name=\"main\",\n    schema_name=\"default\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.RegisteredModel(\"this\", new()\n    {\n        Name = \"my_model\",\n        CatalogName = \"main\",\n        SchemaName = \"default\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewRegisteredModel(ctx, \"this\", \u0026databricks.RegisteredModelArgs{\n\t\t\tName:        pulumi.String(\"my_model\"),\n\t\t\tCatalogName: pulumi.String(\"main\"),\n\t\t\tSchemaName:  pulumi.String(\"default\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.RegisteredModel;\nimport com.pulumi.databricks.RegisteredModelArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new RegisteredModel(\"this\", RegisteredModelArgs.builder()\n            .name(\"my_model\")\n            .catalogName(\"main\")\n            .schemaName(\"default\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:RegisteredModel\n    properties:\n      name: my_model\n      catalogName: main\n      schemaName: default\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" 
databricks.Grants \"\u003e databricks.Grants \u003c/span\u003ecan be used to grant principals `ALL_PRIVILEGES`, `APPLY_TAG`, and `EXECUTE` privileges.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eto manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n\n","properties":{"aliases":{"type":"array","items":{"$ref":"#/types/databricks:index/RegisteredModelAlias:RegisteredModelAlias"}},"browseOnly":{"type":"boolean"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside. *Change of this parameter forces recreation of the resource.*\n"},"comment":{"type":"string","description":"The comment attached to the registered model.\n"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"fullName":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"The name of the registered model.  *Change of this parameter forces recreation of the resource.*\n"},"owner":{"type":"string","description":"Name of the registered model owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/RegisteredModelProviderConfig:RegisteredModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides. *Change of this parameter forces recreation of the resource.*\n"},"storageLocation":{"type":"string","description":"The storage location under which model version data files are stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). 
*Change of this parameter forces recreation of the resource.*\n"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"required":["createdAt","createdBy","fullName","metastoreId","name","owner","storageLocation","updatedAt","updatedBy"],"inputProperties":{"aliases":{"type":"array","items":{"$ref":"#/types/databricks:index/RegisteredModelAlias:RegisteredModelAlias"}},"browseOnly":{"type":"boolean"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside. *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"The comment attached to the registered model.\n"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"fullName":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"The name of the registered model.  *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Name of the registered model owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/RegisteredModelProviderConfig:RegisteredModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides. *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"The storage location under which model version data files are stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"stateInputs":{"description":"Input properties used for looking up and filtering RegisteredModel resources.\n","properties":{"aliases":{"type":"array","items":{"$ref":"#/types/databricks:index/RegisteredModelAlias:RegisteredModelAlias"}},"browseOnly":{"type":"boolean"},"catalogName":{"type":"string","description":"The name of the catalog where the schema and the registered model reside. *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"The comment attached to the registered model.\n"},"createdAt":{"type":"integer"},"createdBy":{"type":"string"},"fullName":{"type":"string"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"The name of the registered model.  *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Name of the registered model owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/RegisteredModelProviderConfig:RegisteredModelProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"The name of the schema where the registered model resides. *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"The storage location under which model version data files are stored.  
If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). *Change of this parameter forces recreation of the resource.*\n","willReplaceOnChanges":true},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"type":"object"}},"databricks:index/repo:Repo":{"description":"This resource allows you to manage [Databricks Git folders](https://docs.databricks.com/en/repos/index.html) (formerly known as Databricks Repos).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e To create a Git folder from a private repository you need to configure Git token as described in the [documentation](https://docs.databricks.com/en/repos/index.html#configure-your-git-integration-with-databricks).  To set this token you can use\u003cspan pulumi-lang-nodejs=\" databricks.GitCredential \" pulumi-lang-dotnet=\" databricks.GitCredential \" pulumi-lang-go=\" GitCredential \" pulumi-lang-python=\" GitCredential \" pulumi-lang-yaml=\" databricks.GitCredential \" pulumi-lang-java=\" databricks.GitCredential \"\u003e databricks.GitCredential \u003c/span\u003eresource.\n\n## Example Usage\n\nYou can declare Pulumi-managed Git folder by specifying \u003cspan pulumi-lang-nodejs=\"`url`\" pulumi-lang-dotnet=\"`Url`\" pulumi-lang-go=\"`url`\" pulumi-lang-python=\"`url`\" pulumi-lang-yaml=\"`url`\" pulumi-lang-java=\"`url`\"\u003e`url`\u003c/span\u003e attribute of Git repository. In addition to that you may need to specify \u003cspan pulumi-lang-nodejs=\"`gitProvider`\" pulumi-lang-dotnet=\"`GitProvider`\" pulumi-lang-go=\"`gitProvider`\" pulumi-lang-python=\"`git_provider`\" pulumi-lang-yaml=\"`gitProvider`\" pulumi-lang-java=\"`gitProvider`\"\u003e`git_provider`\u003c/span\u003e attribute if Git provider doesn't belong to cloud Git providers (Github, GitLab, ...).  
If \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e attribute isn't provided, then Git folder will be created in the default location:\n\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst nutterInHome = new databricks.Repo(\"nutter_in_home\", {url: \"https://github.com/user/demo.git\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nnutter_in_home = databricks.Repo(\"nutter_in_home\", url=\"https://github.com/user/demo.git\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var nutterInHome = new Databricks.Repo(\"nutter_in_home\", new()\n    {\n        Url = \"https://github.com/user/demo.git\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewRepo(ctx, \"nutter_in_home\", \u0026databricks.RepoArgs{\n\t\t\tUrl: pulumi.String(\"https://github.com/user/demo.git\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Repo;\nimport com.pulumi.databricks.RepoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var nutterInHome = new Repo(\"nutterInHome\", RepoArgs.builder()\n            .url(\"https://github.com/user/demo.git\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  nutterInHome:\n    type: databricks:Repo\n    name: nutter_in_home\n    properties:\n      url: https://github.com/user/demo.git\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can access repos.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GitCredential \" pulumi-lang-dotnet=\" databricks.GitCredential \" pulumi-lang-go=\" GitCredential \" pulumi-lang-python=\" GitCredential \" pulumi-lang-yaml=\" databricks.GitCredential \" pulumi-lang-java=\" databricks.GitCredential \"\u003e databricks.GitCredential \u003c/span\u003eto manage Git credentials.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e 
databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003eto manage access to [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretScope \" pulumi-lang-dotnet=\" databricks.SecretScope \" pulumi-lang-go=\" SecretScope \" pulumi-lang-python=\" SecretScope \" pulumi-lang-yaml=\" databricks.SecretScope \" pulumi-lang-java=\" databricks.SecretScope \"\u003e databricks.SecretScope \u003c/span\u003eto create [secret scopes](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceConf \" pulumi-lang-dotnet=\" databricks.WorkspaceConf \" pulumi-lang-go=\" WorkspaceConf \" pulumi-lang-python=\" WorkspaceConf \" pulumi-lang-yaml=\" databricks.WorkspaceConf \" pulumi-lang-java=\" databricks.WorkspaceConf \"\u003e databricks.WorkspaceConf \u003c/span\u003eto manage workspace configuration for expert usage.\n\n","properties":{"branch":{"type":"string","description":"name of the branch for initial checkout. If not specified, the default branch of the repository will be used.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e.  If \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e is removed, and \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e isn't specified, then the repository will stay at the previously checked out state.\n"},"commitHash":{"type":"string","description":"Hash of the HEAD commit at time of the last executed operation. It won't change if you manually perform a pull operation via the UI or API.\n"},"gitProvider":{"type":"string","description":"case-insensitive name of the Git provider.  
Following values are supported right now (could be subject to change; consult [Repos API documentation](https://docs.databricks.com/dev-tools/api/latest/repos.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`.\n"},"path":{"type":"string","description":"path to put the checked out Git folder. If not specified, the Git folder will be created in the default location.  If the value changes, the Git folder is re-created.\n"},"providerConfig":{"$ref":"#/types/databricks:index/RepoProviderConfig:RepoProviderConfig"},"sparseCheckout":{"$ref":"#/types/databricks:index/RepoSparseCheckout:RepoSparseCheckout"},"tag":{"type":"string","description":"name of the tag for initial checkout.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e.\n"},"url":{"type":"string","description":"The URL of the Git Repository to clone from. If the value changes, the Git folder is re-created.\n"},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in the form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"required":["branch","commitHash","gitProvider","path","url","workspacePath"],"inputProperties":{"branch":{"type":"string","description":"name of the branch for initial checkout. If not specified, the default branch of the repository will be used.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e.  If \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e is removed, and \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e isn't specified, then the repository will stay at the previously checked out state.\n"},"commitHash":{"type":"string","description":"Hash of the HEAD commit at time of the last executed operation. It won't change if you manually perform a pull operation via the UI or API.\n"},"gitProvider":{"type":"string","description":"case-insensitive name of the Git provider.  Following values are supported right now (could be subject to change; consult [Repos API documentation](https://docs.databricks.com/dev-tools/api/latest/repos.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`.\n","willReplaceOnChanges":true},"path":{"type":"string","description":"path to put the checked out Git folder. If not specified, the Git folder will be created in the default location.  
If the value changes, the Git folder is re-created.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/RepoProviderConfig:RepoProviderConfig"},"sparseCheckout":{"$ref":"#/types/databricks:index/RepoSparseCheckout:RepoSparseCheckout","willReplaceOnChanges":true},"tag":{"type":"string","description":"name of the tag for initial checkout.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e.\n"},"url":{"type":"string","description":"The URL of the Git Repository to clone from. If the value changes, the Git folder is re-created.\n","willReplaceOnChanges":true}},"requiredInputs":["url"],"stateInputs":{"description":"Input properties used for looking up and filtering Repo resources.\n","properties":{"branch":{"type":"string","description":"name of the branch for initial checkout. If not specified, the default branch of the repository will be used.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e.  If \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e is removed, and \u003cspan pulumi-lang-nodejs=\"`tag`\" pulumi-lang-dotnet=\"`Tag`\" pulumi-lang-go=\"`tag`\" pulumi-lang-python=\"`tag`\" pulumi-lang-yaml=\"`tag`\" pulumi-lang-java=\"`tag`\"\u003e`tag`\u003c/span\u003e isn't specified, then the repository will stay at the previously checked out state.\n"},"commitHash":{"type":"string","description":"Hash of the HEAD commit at time of the last executed operation. It won't change if you manually perform a pull operation via the UI or API.\n"},"gitProvider":{"type":"string","description":"case-insensitive name of the Git provider.  Following values are supported right now (could be subject to change; consult [Repos API documentation](https://docs.databricks.com/dev-tools/api/latest/repos.html)): `gitHub`, `gitHubEnterprise`, `bitbucketCloud`, `bitbucketServer`, `azureDevOpsServices`, `gitLab`, `gitLabEnterpriseEdition`, `awsCodeCommit`.\n","willReplaceOnChanges":true},"path":{"type":"string","description":"path to put the checked out Git folder. If not specified, the Git folder will be created in the default location.  If the value changes, the Git folder is re-created.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/RepoProviderConfig:RepoProviderConfig"},"sparseCheckout":{"$ref":"#/types/databricks:index/RepoSparseCheckout:RepoSparseCheckout","willReplaceOnChanges":true},"tag":{"type":"string","description":"name of the tag for initial checkout.  Conflicts with \u003cspan pulumi-lang-nodejs=\"`branch`\" pulumi-lang-dotnet=\"`Branch`\" pulumi-lang-go=\"`branch`\" pulumi-lang-python=\"`branch`\" pulumi-lang-yaml=\"`branch`\" pulumi-lang-java=\"`branch`\"\u003e`branch`\u003c/span\u003e.\n"},"url":{"type":"string","description":"The URL of the Git Repository to clone from. 
If the value changes, the Git folder is re-created.\n","willReplaceOnChanges":true},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in the form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"type":"object"}},"databricks:index/restrictWorkspaceAdminsSetting:RestrictWorkspaceAdminsSetting":{"description":"The \u003cspan pulumi-lang-nodejs=\"`databricks.RestrictWorkspaceAdminsSetting`\" pulumi-lang-dotnet=\"`databricks.RestrictWorkspaceAdminsSetting`\" pulumi-lang-go=\"`RestrictWorkspaceAdminsSetting`\" pulumi-lang-python=\"`RestrictWorkspaceAdminsSetting`\" pulumi-lang-yaml=\"`databricks.RestrictWorkspaceAdminsSetting`\" pulumi-lang-java=\"`databricks.RestrictWorkspaceAdminsSetting`\"\u003e`databricks.RestrictWorkspaceAdminsSetting`\u003c/span\u003e resource lets you control the capabilities of workspace admins.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nWith the status set to `ALLOW_ALL`, workspace admins can:\n\n1. Create service principal personal access tokens on behalf of any service principal in their workspace.\n2. Change a job owner to any user in the workspace.\n3. Change the job\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003esetting to any user in their workspace or a service principal on which they have the Service Principal User role.\n\nWith the status set to `RESTRICT_TOKENS_AND_JOB_RUN_AS`, workspace admins can:\n\n1. Only create personal access tokens on behalf of service principals on which they have the Service Principal User role.\n2. Only change a job owner to themselves.\n3. Only change the job\u003cspan pulumi-lang-nodejs=\" runAs \" pulumi-lang-dotnet=\" RunAs \" pulumi-lang-go=\" runAs \" pulumi-lang-python=\" run_as \" pulumi-lang-yaml=\" runAs \" pulumi-lang-java=\" runAs \"\u003e run_as \u003c/span\u003esetting to themselves or a service principal on which they have the Service Principal User role.\n\n\u003e Only account admins can update the setting. 
And the account admin must be part of the workspace to change the setting status.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.RestrictWorkspaceAdminsSetting(\"this\", {restrictWorkspaceAdmins: {\n    status: \"RESTRICT_TOKENS_AND_JOB_RUN_AS\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.RestrictWorkspaceAdminsSetting(\"this\", restrict_workspace_admins={\n    \"status\": \"RESTRICT_TOKENS_AND_JOB_RUN_AS\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.RestrictWorkspaceAdminsSetting(\"this\", new()\n    {\n        RestrictWorkspaceAdmins = new Databricks.Inputs.RestrictWorkspaceAdminsSettingRestrictWorkspaceAdminsArgs\n        {\n            Status = \"RESTRICT_TOKENS_AND_JOB_RUN_AS\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewRestrictWorkspaceAdminsSetting(ctx, \"this\", \u0026databricks.RestrictWorkspaceAdminsSettingArgs{\n\t\t\tRestrictWorkspaceAdmins: \u0026databricks.RestrictWorkspaceAdminsSettingRestrictWorkspaceAdminsArgs{\n\t\t\t\tStatus: pulumi.String(\"RESTRICT_TOKENS_AND_JOB_RUN_AS\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.RestrictWorkspaceAdminsSetting;\nimport com.pulumi.databricks.RestrictWorkspaceAdminsSettingArgs;\nimport com.pulumi.databricks.inputs.RestrictWorkspaceAdminsSettingRestrictWorkspaceAdminsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new RestrictWorkspaceAdminsSetting(\"this\", RestrictWorkspaceAdminsSettingArgs.builder()\n            .restrictWorkspaceAdmins(RestrictWorkspaceAdminsSettingRestrictWorkspaceAdminsArgs.builder()\n                .status(\"RESTRICT_TOKENS_AND_JOB_RUN_AS\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:RestrictWorkspaceAdminsSetting\n    properties:\n      restrictWorkspaceAdmins:\n        status: RESTRICT_TOKENS_AND_JOB_RUN_AS\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingProviderConfig:RestrictWorkspaceAdminsSettingProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins:RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins","description":"The configuration details.\n"},"settingName":{"type":"string"}},"required":["etag","restrictWorkspaceAdmins","settingName"],"inputProperties":{"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingProviderConfig:RestrictWorkspaceAdminsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins:RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins","description":"The configuration details.\n"},"settingName":{"type":"string"}},"requiredInputs":["restrictWorkspaceAdmins"],"stateInputs":{"description":"Input properties used for looking up and filtering RestrictWorkspaceAdminsSetting resources.\n","properties":{"etag":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingProviderConfig:RestrictWorkspaceAdminsSettingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins:RestrictWorkspaceAdminsSettingRestrictWorkspaceAdmins","description":"The configuration details.\n"},"settingName":{"type":"string"}},"type":"object"}},"databricks:index/rfaAccessRequestDestinations:RfaAccessRequestDestinations":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nRequest for Access (RFA) access request destinations allow you to configure where notifications are sent when users request access to securable objects in Unity Catalog. This resource enables you to manage access request destinations for specific securable objects, such as tables, catalogs, or schemas.\n\nWhen a user requests access to a securable object, notifications can be sent to various destinations including email addresses, Slack channels, or Microsoft Teams channels. 
This resource allows you to configure these destinations to ensure that the appropriate stakeholders are notified of access requests.\n\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customerDataTable = new databricks.RfaAccessRequestDestinations(\"customer_data_table\", {\n    destinations: [\n        {\n            destinationId: \"john.doe@databricks.com\",\n            destinationType: \"EMAIL\",\n        },\n        {\n            destinationId: \"https://www.databricks.com/\",\n            destinationType: \"URL\",\n        },\n        {\n            destinationId: \"456e7890-e89b-12d3-a456-426614174001\",\n            destinationType: \"SLACK\",\n        },\n        {\n            destinationId: \"789e0123-e89b-12d3-a456-426614174002\",\n            destinationType: \"MICROSOFT_TEAMS\",\n        },\n        {\n            destinationId: \"012e3456-e89b-12d3-a456-426614174003\",\n            destinationType: \"GENERIC_WEBHOOK\",\n        },\n    ],\n    securable: {\n        type: \"SCHEMA\",\n        fullName: \"main.customer_data\",\n    },\n    areAnyDestinationsHidden: false,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomer_data_table = databricks.RfaAccessRequestDestinations(\"customer_data_table\",\n    destinations=[\n        {\n            \"destination_id\": \"john.doe@databricks.com\",\n            \"destination_type\": \"EMAIL\",\n        },\n        {\n            \"destination_id\": \"https://www.databricks.com/\",\n            \"destination_type\": \"URL\",\n        },\n        {\n            \"destination_id\": \"456e7890-e89b-12d3-a456-426614174001\",\n            \"destination_type\": \"SLACK\",\n        },\n        {\n            \"destination_id\": \"789e0123-e89b-12d3-a456-426614174002\",\n            \"destination_type\": \"MICROSOFT_TEAMS\",\n        },\n        {\n            \"destination_id\": \"012e3456-e89b-12d3-a456-426614174003\",\n            \"destination_type\": \"GENERIC_WEBHOOK\",\n        },\n    ],\n    securable={\n        \"type\": \"SCHEMA\",\n        \"full_name\": \"main.customer_data\",\n    },\n    are_any_destinations_hidden=False)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customerDataTable = new Databricks.RfaAccessRequestDestinations(\"customer_data_table\", new()\n    {\n        Destinations = new[]\n        {\n            new Databricks.Inputs.RfaAccessRequestDestinationsDestinationArgs\n            {\n                DestinationId = \"john.doe@databricks.com\",\n                DestinationType = \"EMAIL\",\n            },\n            new Databricks.Inputs.RfaAccessRequestDestinationsDestinationArgs\n            {\n                DestinationId = \"https://www.databricks.com/\",\n                DestinationType = \"URL\",\n            },\n            new Databricks.Inputs.RfaAccessRequestDestinationsDestinationArgs\n            {\n                DestinationId = \"456e7890-e89b-12d3-a456-426614174001\",\n                DestinationType = \"SLACK\",\n            },\n            new Databricks.Inputs.RfaAccessRequestDestinationsDestinationArgs\n            {\n                DestinationId = \"789e0123-e89b-12d3-a456-426614174002\",\n                DestinationType = \"MICROSOFT_TEAMS\",\n            },\n            new 
Databricks.Inputs.RfaAccessRequestDestinationsDestinationArgs\n            {\n                DestinationId = \"012e3456-e89b-12d3-a456-426614174003\",\n                DestinationType = \"GENERIC_WEBHOOK\",\n            },\n        },\n        Securable = new Databricks.Inputs.RfaAccessRequestDestinationsSecurableArgs\n        {\n            Type = \"SCHEMA\",\n            FullName = \"main.customer_data\",\n        },\n        AreAnyDestinationsHidden = false,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewRfaAccessRequestDestinations(ctx, \"customer_data_table\", \u0026databricks.RfaAccessRequestDestinationsArgs{\n\t\t\tDestinations: databricks.RfaAccessRequestDestinationsDestinationArray{\n\t\t\t\t\u0026databricks.RfaAccessRequestDestinationsDestinationArgs{\n\t\t\t\t\tDestinationId:   pulumi.String(\"john.doe@databricks.com\"),\n\t\t\t\t\tDestinationType: pulumi.String(\"EMAIL\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.RfaAccessRequestDestinationsDestinationArgs{\n\t\t\t\t\tDestinationId:   pulumi.String(\"https://www.databricks.com/\"),\n\t\t\t\t\tDestinationType: pulumi.String(\"URL\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.RfaAccessRequestDestinationsDestinationArgs{\n\t\t\t\t\tDestinationId:   pulumi.String(\"456e7890-e89b-12d3-a456-426614174001\"),\n\t\t\t\t\tDestinationType: pulumi.String(\"SLACK\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.RfaAccessRequestDestinationsDestinationArgs{\n\t\t\t\t\tDestinationId:   pulumi.String(\"789e0123-e89b-12d3-a456-426614174002\"),\n\t\t\t\t\tDestinationType: pulumi.String(\"MICROSOFT_TEAMS\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.RfaAccessRequestDestinationsDestinationArgs{\n\t\t\t\t\tDestinationId:   pulumi.String(\"012e3456-e89b-12d3-a456-426614174003\"),\n\t\t\t\t\tDestinationType: pulumi.String(\"GENERIC_WEBHOOK\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tSecurable: \u0026databricks.RfaAccessRequestDestinationsSecurableArgs{\n\t\t\t\tType:     pulumi.String(\"SCHEMA\"),\n\t\t\t\tFullName: pulumi.String(\"main.customer_data\"),\n\t\t\t},\n\t\t\tAreAnyDestinationsHidden: false,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.RfaAccessRequestDestinations;\nimport com.pulumi.databricks.RfaAccessRequestDestinationsArgs;\nimport com.pulumi.databricks.inputs.RfaAccessRequestDestinationsDestinationArgs;\nimport com.pulumi.databricks.inputs.RfaAccessRequestDestinationsSecurableArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var customerDataTable = new RfaAccessRequestDestinations(\"customerDataTable\", RfaAccessRequestDestinationsArgs.builder()\n            .destinations(            \n                RfaAccessRequestDestinationsDestinationArgs.builder()\n                    .destinationId(\"john.doe@databricks.com\")\n                    .destinationType(\"EMAIL\")\n                    .build(),\n                RfaAccessRequestDestinationsDestinationArgs.builder()\n                    
.destinationId(\"https://www.databricks.com/\")\n                    .destinationType(\"URL\")\n                    .build(),\n                RfaAccessRequestDestinationsDestinationArgs.builder()\n                    .destinationId(\"456e7890-e89b-12d3-a456-426614174001\")\n                    .destinationType(\"SLACK\")\n                    .build(),\n                RfaAccessRequestDestinationsDestinationArgs.builder()\n                    .destinationId(\"789e0123-e89b-12d3-a456-426614174002\")\n                    .destinationType(\"MICROSOFT_TEAMS\")\n                    .build(),\n                RfaAccessRequestDestinationsDestinationArgs.builder()\n                    .destinationId(\"012e3456-e89b-12d3-a456-426614174003\")\n                    .destinationType(\"GENERIC_WEBHOOK\")\n                    .build())\n            .securable(RfaAccessRequestDestinationsSecurableArgs.builder()\n                .type(\"SCHEMA\")\n                .fullName(\"main.customer_data\")\n                .build())\n            .areAnyDestinationsHidden(false)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  customerDataTable:\n    type: databricks:RfaAccessRequestDestinations\n    name: customer_data_table\n    properties:\n      destinations:\n        - destinationId: john.doe@databricks.com\n          destinationType: EMAIL\n        - destinationId: https://www.databricks.com/\n          destinationType: URL\n        - destinationId: 456e7890-e89b-12d3-a456-426614174001\n          destinationType: SLACK\n        - destinationId: 789e0123-e89b-12d3-a456-426614174002\n          destinationType: MICROSOFT_TEAMS\n        - destinationId: 012e3456-e89b-12d3-a456-426614174003\n          destinationType: GENERIC_WEBHOOK\n      securable:\n        type: SCHEMA\n        fullName: main.customer_data\n      areAnyDestinationsHidden: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"areAnyDestinationsHidden":{"type":"boolean","description":"(boolean) - Indicates whether any destinations are hidden from the caller due to a lack of permissions.\nThis value is true if the caller does not have permission to see all destinations\n"},"destinationSourceSecurable":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsDestinationSourceSecurable:RfaAccessRequestDestinationsDestinationSourceSecurable","description":"(Securable) - The source securable from which the destinations are inherited. Either the same value as securable (if destination\nis set directly on the securable) or the nearest parent securable with destinations set\n"},"destinations":{"type":"array","items":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsDestination:RfaAccessRequestDestinationsDestination"},"description":"The access request destinations for the securable\n"},"fullName":{"type":"string","description":"(string) - The full name of the securable. Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerConfig":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsProviderConfig:RfaAccessRequestDestinationsProviderConfig","description":"Configure the provider for management through account provider.\n"},"securable":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsSecurable:RfaAccessRequestDestinationsSecurable","description":"The securable for which the access request destinations are being modified or read\n"},"securableType":{"type":"string","description":"(string) - The type of the securable. 
Redundant with the type in the securable object, but necessary for Pulumi integration\n"}},"required":["areAnyDestinationsHidden","destinationSourceSecurable","fullName","securable","securableType"],"inputProperties":{"destinations":{"type":"array","items":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsDestination:RfaAccessRequestDestinationsDestination"},"description":"The access request destinations for the securable\n"},"providerConfig":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsProviderConfig:RfaAccessRequestDestinationsProviderConfig","description":"Configure the provider for management through account provider.\n"},"securable":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsSecurable:RfaAccessRequestDestinationsSecurable","description":"The securable for which the access request destinations are being modified or read\n"}},"requiredInputs":["securable"],"stateInputs":{"description":"Input properties used for looking up and filtering RfaAccessRequestDestinations resources.\n","properties":{"areAnyDestinationsHidden":{"type":"boolean","description":"(boolean) - Indicates whether any destinations are hidden from the caller due to a lack of permissions.\nThis value is true if the caller does not have permission to see all destinations\n"},"destinationSourceSecurable":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsDestinationSourceSecurable:RfaAccessRequestDestinationsDestinationSourceSecurable","description":"(Securable) - The source securable from which the destinations are inherited. Either the same value as securable (if destination\nis set directly on the securable) or the nearest parent securable with destinations set\n"},"destinations":{"type":"array","items":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsDestination:RfaAccessRequestDestinationsDestination"},"description":"The access request destinations for the securable\n"},"fullName":{"type":"string","description":"(string) - The full name of the securable. Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerConfig":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsProviderConfig:RfaAccessRequestDestinationsProviderConfig","description":"Configure the provider for management through account provider.\n"},"securable":{"$ref":"#/types/databricks:index/RfaAccessRequestDestinationsSecurable:RfaAccessRequestDestinationsSecurable","description":"The securable for which the access request destinations are being modified or read\n"},"securableType":{"type":"string","description":"(string) - The type of the securable. 
Redundant with the type in the securable object, but necessary for Pulumi integration\n"}},"type":"object"}},"databricks:index/schema:Schema":{"description":"Within a metastore, Unity Catalog provides a 3-level namespace for organizing data: Catalogs, Databases (also called Schemas), and Tables / Views.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.Schema`\" pulumi-lang-dotnet=\"`databricks.Schema`\" pulumi-lang-go=\"`Schema`\" pulumi-lang-python=\"`Schema`\" pulumi-lang-yaml=\"`databricks.Schema`\" pulumi-lang-java=\"`databricks.Schema`\"\u003e`databricks.Schema`\u003c/span\u003e is contained within\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eand can contain tables \u0026 views.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.id,\n    name=\"things\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this database is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     
pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003edata to list tables within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getSchemas \" pulumi-lang-dotnet=\" databricks.getSchemas \" pulumi-lang-go=\" getSchemas \" pulumi-lang-python=\" get_schemas \" pulumi-lang-yaml=\" databricks.getSchemas \" pulumi-lang-java=\" databricks.getSchemas \"\u003e databricks.getSchemas \u003c/span\u003edata to list schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCatalogs \" pulumi-lang-dotnet=\" databricks.getCatalogs \" pulumi-lang-go=\" getCatalogs \" pulumi-lang-python=\" get_catalogs \" pulumi-lang-yaml=\" databricks.getCatalogs \" pulumi-lang-java=\" databricks.getCatalogs \"\u003e databricks.getCatalogs \u003c/span\u003edata to list catalogs within Unity Catalog.\n\n","properties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces creation of a new resource.\n"},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. 
Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete schema regardless of its contents.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Schema relative to parent catalog. Change forces creation of a new resource.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the schema owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Schema properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SchemaProviderConfig:SchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaId":{"type":"string","description":"The unique identifier of the schema.\n"},"storageRoot":{"type":"string","description":"Managed location of the schema. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the catalog root location. Change forces creation of a new resource.\n"}},"required":["catalogName","enablePredictiveOptimization","metastoreId","name","owner","schemaId"],"inputProperties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete schema regardless of its contents.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Schema relative to parent catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the schema owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Schema properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SchemaProviderConfig:SchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"storageRoot":{"type":"string","description":"Managed location of the schema. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the catalog root location. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"requiredInputs":["catalogName"],"stateInputs":{"description":"Input properties used for looking up and filtering Schema resources.\n","properties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"User-supplied free-form text.\n"},"enablePredictiveOptimization":{"type":"string","description":"Whether predictive optimization should be enabled for this object and objects under it. Can be `ENABLE`, `DISABLE` or `INHERIT`\n"},"forceDestroy":{"type":"boolean","description":"Delete schema regardless of its contents.\n"},"metastoreId":{"type":"string"},"name":{"type":"string","description":"Name of Schema relative to parent catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the schema owner.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"Extensible Schema properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SchemaProviderConfig:SchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaId":{"type":"string","description":"The unique identifier of the schema.\n"},"storageRoot":{"type":"string","description":"Managed location of the schema. Location in cloud storage where data for managed tables will be stored.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). If not specified, the location will default to the catalog root location. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/secret:Secret":{"description":"With this resource you can insert a secret under the provided scope with the given name. If a secret already exists with the same name, this command overwrites the existing secret's value. The server encrypts the secret using the secret scope's encryption settings before storing it. You must have WRITE or MANAGE permission on the secret scope. The secret key must consist of alphanumeric characters, dashes, underscores, and periods, and cannot exceed 128 characters. The maximum allowed secret value size is 128 KB. The maximum number of secrets in a given scope is 1000. You can read a secret value only from within a command on a cluster (for example, through a notebook); there is no API to read a secret value outside of a cluster. The permission applied is based on who is invoking the command and you must have at least READ permission. 
Please consult [Secrets User Guide](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) for more details.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst app = new databricks.SecretScope(\"app\", {name: \"application-secret-scope\"});\nconst publishingApi = new databricks.Secret(\"publishing_api\", {\n    key: \"publishing_api\",\n    stringValue: example.value,\n    scope: app.id,\n});\nconst _this = new databricks.Cluster(\"this\", {sparkConf: {\n    \"fs.azure.account.oauth2.client.secret\": publishingApi.configReference,\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\napp = databricks.SecretScope(\"app\", name=\"application-secret-scope\")\npublishing_api = databricks.Secret(\"publishing_api\",\n    key=\"publishing_api\",\n    string_value=example[\"value\"],\n    scope=app.id)\nthis = databricks.Cluster(\"this\", spark_conf={\n    \"fs.azure.account.oauth2.client.secret\": publishing_api.config_reference,\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var app = new Databricks.SecretScope(\"app\", new()\n    {\n        Name = \"application-secret-scope\",\n    });\n\n    var publishingApi = new Databricks.Secret(\"publishing_api\", new()\n    {\n        Key = \"publishing_api\",\n        StringValue = example.Value,\n        Scope = app.Id,\n    });\n\n    var @this = new Databricks.Cluster(\"this\", new()\n    {\n        SparkConf = \n        {\n            { \"fs.azure.account.oauth2.client.secret\", publishingApi.ConfigReference },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tapp, err := databricks.NewSecretScope(ctx, \"app\", \u0026databricks.SecretScopeArgs{\n\t\t\tName: pulumi.String(\"application-secret-scope\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tpublishingApi, err := databricks.NewSecret(ctx, \"publishing_api\", \u0026databricks.SecretArgs{\n\t\t\tKey:         pulumi.String(\"publishing_api\"),\n\t\t\tStringValue: pulumi.Any(example.Value),\n\t\t\tScope:       app.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"this\", \u0026databricks.ClusterArgs{\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"fs.azure.account.oauth2.client.secret\": publishingApi.ConfigReference,\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SecretScope;\nimport com.pulumi.databricks.SecretScopeArgs;\nimport com.pulumi.databricks.Secret;\nimport com.pulumi.databricks.SecretArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    
public static void stack(Context ctx) {\n        var app = new SecretScope(\"app\", SecretScopeArgs.builder()\n            .name(\"application-secret-scope\")\n            .build());\n\n        var publishingApi = new Secret(\"publishingApi\", SecretArgs.builder()\n            .key(\"publishing_api\")\n            .stringValue(example.value())\n            .scope(app.id())\n            .build());\n\n        var this_ = new Cluster(\"this\", ClusterArgs.builder()\n            .sparkConf(Map.of(\"fs.azure.account.oauth2.client.secret\", publishingApi.configReference()))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  app:\n    type: databricks:SecretScope\n    properties:\n      name: application-secret-scope\n  publishingApi:\n    type: databricks:Secret\n    name: publishing_api\n    properties:\n      key: publishing_api\n      stringValue: ${example.value}\n      scope: ${app.id}\n  this:\n    type: databricks:Cluster\n    properties:\n      sparkConf:\n        fs.azure.account.oauth2.client.secret: ${publishingApi.configReference}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003eto manage access to [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretScope \" pulumi-lang-dotnet=\" databricks.SecretScope \" pulumi-lang-go=\" SecretScope \" pulumi-lang-python=\" SecretScope \" pulumi-lang-yaml=\" databricks.SecretScope \" pulumi-lang-java=\" databricks.SecretScope \"\u003e databricks.SecretScope \u003c/span\u003eto create [secret scopes](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n\n","properties":{"configReference":{"type":"string","description":"(String) value to use as a secret reference in [Spark configuration and environment variables](https://docs.databricks.com/security/secrets/secrets.html#use-a-secret-in-a-spark-configuration-property-or-environment-variable): 
`{{secrets/scope/key}}`.\n"},"key":{"type":"string","description":"(String) key within secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n"},"lastUpdatedTimestamp":{"type":"integer","description":"(Integer) time secret was updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/SecretProviderConfig:SecretProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"scope":{"type":"string","description":"(String) name of databricks secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n"},"stringValue":{"type":"string","description":"(String) super secret sensitive value.\n","secret":true}},"required":["configReference","key","lastUpdatedTimestamp","scope","stringValue"],"inputProperties":{"key":{"type":"string","description":"(String) key within secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SecretProviderConfig:SecretProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"scope":{"type":"string","description":"(String) name of databricks secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"stringValue":{"type":"string","description":"(String) super secret sensitive value.\n","secret":true,"willReplaceOnChanges":true}},"requiredInputs":["key","scope","stringValue"],"stateInputs":{"description":"Input properties used for looking up and filtering Secret resources.\n","properties":{"configReference":{"type":"string","description":"(String) value to use as a secret reference in [Spark configuration and environment variables](https://docs.databricks.com/security/secrets/secrets.html#use-a-secret-in-a-spark-configuration-property-or-environment-variable): `{{secrets/scope/key}}`.\n"},"key":{"type":"string","description":"(String) key within secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"lastUpdatedTimestamp":{"type":"integer","description":"(Integer) time secret was updated\n"},"providerConfig":{"$ref":"#/types/databricks:index/SecretProviderConfig:SecretProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"scope":{"type":"string","description":"(String) name of databricks secret scope. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"stringValue":{"type":"string","description":"(String) super secret sensitive value.\n","secret":true,"willReplaceOnChanges":true}},"type":"object"}},"databricks:index/secretAcl:SecretAcl":{"description":"Create or overwrite the ACL associated with the given principal (user or group) on the specified databricks_secret_scope. 
Please consult [Secrets User Guide](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) for more details.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nThis way, data scientists can read the Publishing API key that is synchronized from, for example, Azure Key Vault.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst ds = new databricks.Group(\"ds\", {displayName: \"data-scientists\"});\nconst app = new databricks.SecretScope(\"app\", {name: \"app-secret-scope\"});\nconst mySecretAcl = new databricks.SecretAcl(\"my_secret_acl\", {\n    principal: ds.displayName,\n    permission: \"READ\",\n    scope: app.name,\n});\nconst publishingApi = new databricks.Secret(\"publishing_api\", {\n    key: \"publishing_api\",\n    stringValue: example.value,\n    scope: app.name,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nds = databricks.Group(\"ds\", display_name=\"data-scientists\")\napp = databricks.SecretScope(\"app\", name=\"app-secret-scope\")\nmy_secret_acl = databricks.SecretAcl(\"my_secret_acl\",\n    principal=ds.display_name,\n    permission=\"READ\",\n    scope=app.name)\npublishing_api = databricks.Secret(\"publishing_api\",\n    key=\"publishing_api\",\n    string_value=example[\"value\"],\n    scope=app.name)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var ds = new Databricks.Group(\"ds\", new()\n    {\n        DisplayName = \"data-scientists\",\n    });\n\n    var app = new Databricks.SecretScope(\"app\", new()\n    {\n        Name = \"app-secret-scope\",\n    });\n\n    var mySecretAcl = new Databricks.SecretAcl(\"my_secret_acl\", new()\n    {\n        Principal = ds.DisplayName,\n        Permission = \"READ\",\n        Scope = app.Name,\n    });\n\n    var publishingApi = new Databricks.Secret(\"publishing_api\", new()\n    {\n        Key = \"publishing_api\",\n        StringValue = example.Value,\n        Scope = app.Name,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tds, err := databricks.NewGroup(ctx, \"ds\", \u0026databricks.GroupArgs{\n\t\t\tDisplayName: pulumi.String(\"data-scientists\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tapp, err := databricks.NewSecretScope(ctx, \"app\", \u0026databricks.SecretScopeArgs{\n\t\t\tName: pulumi.String(\"app-secret-scope\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSecretAcl(ctx, \"my_secret_acl\", \u0026databricks.SecretAclArgs{\n\t\t\tPrincipal:  ds.DisplayName,\n\t\t\tPermission: pulumi.String(\"READ\"),\n\t\t\tScope:      app.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSecret(ctx, \"publishing_api\", \u0026databricks.SecretArgs{\n\t\t\tKey:         pulumi.String(\"publishing_api\"),\n\t\t\tStringValue: pulumi.Any(example.Value),\n\t\t\tScope:       app.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.Group;\nimport com.pulumi.databricks.GroupArgs;\nimport com.pulumi.databricks.SecretScope;\nimport com.pulumi.databricks.SecretScopeArgs;\nimport com.pulumi.databricks.SecretAcl;\nimport com.pulumi.databricks.SecretAclArgs;\nimport com.pulumi.databricks.Secret;\nimport com.pulumi.databricks.SecretArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var ds = new Group(\"ds\", GroupArgs.builder()\n            .displayName(\"data-scientists\")\n            .build());\n\n        var app = new SecretScope(\"app\", SecretScopeArgs.builder()\n            .name(\"app-secret-scope\")\n            .build());\n\n        var mySecretAcl = new SecretAcl(\"mySecretAcl\", SecretAclArgs.builder()\n            .principal(ds.displayName())\n            .permission(\"READ\")\n            .scope(app.name())\n            .build());\n\n        var publishingApi = new Secret(\"publishingApi\", SecretArgs.builder()\n            .key(\"publishing_api\")\n            .stringValue(example.value())\n            .scope(app.name())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  ds:\n    type: databricks:Group\n    properties:\n      displayName: data-scientists\n  app:\n    type: databricks:SecretScope\n    properties:\n      name: app-secret-scope\n  mySecretAcl:\n    type: databricks:SecretAcl\n    name: my_secret_acl\n    properties:\n      principal: ${ds.displayName}\n      permission: READ\n      scope: ${app.name}\n  publishingApi:\n    type: databricks:Secret\n    name: publishing_api\n    properties:\n      key: publishing_api\n      stringValue: ${example.value}\n      scope: ${app.name}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage 
[secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretScope \" pulumi-lang-dotnet=\" databricks.SecretScope \" pulumi-lang-go=\" SecretScope \" pulumi-lang-python=\" SecretScope \" pulumi-lang-yaml=\" databricks.SecretScope \" pulumi-lang-java=\" databricks.SecretScope \"\u003e databricks.SecretScope \u003c/span\u003eto create [secret scopes](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n\n","properties":{"permission":{"type":"string","description":"`READ`, `WRITE` or `MANAGE`.\n"},"principal":{"type":"string","description":"principal's identifier. It can be:\n"},"providerConfig":{"$ref":"#/types/databricks:index/SecretAclProviderConfig:SecretAclProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"scope":{"type":"string","description":"name of the scope\n"}},"required":["permission","principal","scope"],"inputProperties":{"permission":{"type":"string","description":"`READ`, `WRITE` or `MANAGE`.\n","willReplaceOnChanges":true},"principal":{"type":"string","description":"principal's identifier. It can be:\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SecretAclProviderConfig:SecretAclProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"scope":{"type":"string","description":"name of the scope\n","willReplaceOnChanges":true}},"requiredInputs":["permission","principal","scope"],"stateInputs":{"description":"Input properties used for looking up and filtering SecretAcl resources.\n","properties":{"permission":{"type":"string","description":"`READ`, `WRITE` or `MANAGE`.\n","willReplaceOnChanges":true},"principal":{"type":"string","description":"principal's identifier. It can be:\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SecretAclProviderConfig:SecretAclProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"scope":{"type":"string","description":"name of the scope\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/secretScope:SecretScope":{"description":"Sometimes accessing data requires that you authenticate to external data sources through JDBC. Instead of directly entering your credentials into a notebook, use Databricks secrets to store your credentials and reference them in notebooks and jobs. 
Please consult [Secrets User Guide](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) for more details.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.SecretScope(\"this\", {name: \"terraform-demo-scope\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.SecretScope(\"this\", name=\"terraform-demo-scope\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.SecretScope(\"this\", new()\n    {\n        Name = \"terraform-demo-scope\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSecretScope(ctx, \"this\", \u0026databricks.SecretScopeArgs{\n\t\t\tName: pulumi.String(\"terraform-demo-scope\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SecretScope;\nimport com.pulumi.databricks.SecretScopeArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new SecretScope(\"this\", SecretScopeArgs.builder()\n            .name(\"terraform-demo-scope\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SecretScope\n    properties:\n      name: terraform-demo-scope\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" 
databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003eto manage access to [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n\n","properties":{"backendType":{"type":"string","description":"Either `DATABRICKS` or `AZURE_KEYVAULT`\n"},"initialManagePrincipal":{"type":"string","description":"The principal with the only possible value \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e that is initially granted `MANAGE` permission to the created scope.  If it's omitted, then the\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003ewith `MANAGE` permission applied to the scope is assigned to the API request issuer's user identity (see [documentation](https://docs.databricks.com/dev-tools/api/latest/secrets.html#create-secret-scope)). This part of the state cannot be imported.\n"},"keyvaultMetadata":{"$ref":"#/types/databricks:index/SecretScopeKeyvaultMetadata:SecretScopeKeyvaultMetadata"},"name":{"type":"string","description":"Scope name requested by the user. Must be unique within a workspace. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SecretScopeProviderConfig:SecretScopeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"required":["backendType","name"],"inputProperties":{"backendType":{"type":"string","description":"Either `DATABRICKS` or `AZURE_KEYVAULT`\n"},"initialManagePrincipal":{"type":"string","description":"The principal with the only possible value \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e that is initially granted `MANAGE` permission to the created scope.  If it's omitted, then the\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003ewith `MANAGE` permission applied to the scope is assigned to the API request issuer's user identity (see [documentation](https://docs.databricks.com/dev-tools/api/latest/secrets.html#create-secret-scope)). This part of the state cannot be imported.\n","willReplaceOnChanges":true},"keyvaultMetadata":{"$ref":"#/types/databricks:index/SecretScopeKeyvaultMetadata:SecretScopeKeyvaultMetadata","willReplaceOnChanges":true},"name":{"type":"string","description":"Scope name requested by the user. Must be unique within a workspace. 
Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SecretScopeProviderConfig:SecretScopeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering SecretScope resources.\n","properties":{"backendType":{"type":"string","description":"Either `DATABRICKS` or `AZURE_KEYVAULT`\n"},"initialManagePrincipal":{"type":"string","description":"The principal with the only possible value \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e that is initially granted `MANAGE` permission to the created scope.  If it's omitted, then the\u003cspan pulumi-lang-nodejs=\" databricks.SecretAcl \" pulumi-lang-dotnet=\" databricks.SecretAcl \" pulumi-lang-go=\" SecretAcl \" pulumi-lang-python=\" SecretAcl \" pulumi-lang-yaml=\" databricks.SecretAcl \" pulumi-lang-java=\" databricks.SecretAcl \"\u003e databricks.SecretAcl \u003c/span\u003ewith `MANAGE` permission applied to the scope is assigned to the API request issuer's user identity (see [documentation](https://docs.databricks.com/dev-tools/api/latest/secrets.html#create-secret-scope)). This part of the state cannot be imported.\n","willReplaceOnChanges":true},"keyvaultMetadata":{"$ref":"#/types/databricks:index/SecretScopeKeyvaultMetadata:SecretScopeKeyvaultMetadata","willReplaceOnChanges":true},"name":{"type":"string","description":"Scope name requested by the user. Must be unique within a workspace. Must consist of alphanumeric characters, dashes, underscores, and periods, and may not exceed 128 characters.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SecretScopeProviderConfig:SecretScopeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/servicePrincipal:ServicePrincipal":{"description":"Directly manage [Service Principals](https://docs.databricks.com/administration-guide/users-groups/service-principals.html) that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ein Databricks account or workspace.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\nThere are different types of service principals:\n\n* Databricks-managed - exists only inside the Databricks platform (all clouds) and couldn't be used for accessing non-Databricks services.\n* Azure-managed - existing Azure service principal (enterprise application) is registered inside Databricks.  
It could be used to work with other Azure services.\n\n\u003e To assign account level service principals to workspace use databricks_mws_permission_assignment.\n\n\u003e Entitlements, like, \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`allowInstancePoolCreate`\" pulumi-lang-dotnet=\"`AllowInstancePoolCreate`\" pulumi-lang-go=\"`allowInstancePoolCreate`\" pulumi-lang-python=\"`allow_instance_pool_create`\" pulumi-lang-yaml=\"`allowInstancePoolCreate`\" pulumi-lang-java=\"`allowInstancePoolCreate`\"\u003e`allow_instance_pool_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e, `workspace-consume` applicable only for workspace-level service principals. Use\u003cspan pulumi-lang-nodejs=\" databricks.Entitlements \" pulumi-lang-dotnet=\" databricks.Entitlements \" pulumi-lang-go=\" Entitlements \" pulumi-lang-python=\" Entitlements \" pulumi-lang-yaml=\" databricks.Entitlements \" pulumi-lang-java=\" databricks.Entitlements \"\u003e databricks.Entitlements \u003c/span\u003eresource to assign entitlements inside a workspace to account-level service principals.\n\nThe default behavior when deleting a \u003cspan pulumi-lang-nodejs=\"`databricks.ServicePrincipal`\" pulumi-lang-dotnet=\"`databricks.ServicePrincipal`\" pulumi-lang-go=\"`ServicePrincipal`\" pulumi-lang-python=\"`ServicePrincipal`\" pulumi-lang-yaml=\"`databricks.ServicePrincipal`\" pulumi-lang-java=\"`databricks.ServicePrincipal`\"\u003e`databricks.ServicePrincipal`\u003c/span\u003e resource depends on whether the provider is configured at the workspace-level or account-level. When the provider is configured at the workspace-level, the service principal will be deleted from the workspace. When the provider is configured at the account-level, the service principal will be deactivated but not deleted. When the provider is configured at the account level, to delete the service principal from the account when the resource is deleted, set \u003cspan pulumi-lang-nodejs=\"`disableAsUserDeletion \" pulumi-lang-dotnet=\"`DisableAsUserDeletion \" pulumi-lang-go=\"`disableAsUserDeletion \" pulumi-lang-python=\"`disable_as_user_deletion \" pulumi-lang-yaml=\"`disableAsUserDeletion \" pulumi-lang-java=\"`disableAsUserDeletion \"\u003e`disable_as_user_deletion \u003c/span\u003e= false`. 
Conversely, when the provider is configured at the account-level, to deactivate the service principal when the resource is deleted, set \u003cspan pulumi-lang-nodejs=\"`disableAsUserDeletion \" pulumi-lang-dotnet=\"`DisableAsUserDeletion \" pulumi-lang-go=\"`disableAsUserDeletion \" pulumi-lang-python=\"`disable_as_user_deletion \" pulumi-lang-yaml=\"`disableAsUserDeletion \" pulumi-lang-java=\"`disableAsUserDeletion \"\u003e`disable_as_user_deletion \u003c/span\u003e= true`.\n\n## Example Usage\n\nCreating regular Databricks-managed service principal:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sp = new databricks.ServicePrincipal(\"sp\", {displayName: \"Admin SP\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsp = databricks.ServicePrincipal(\"sp\", display_name=\"Admin SP\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        DisplayName = \"Admin SP\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Admin SP\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .displayName(\"Admin SP\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Admin SP\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating service principal with administrative permissions - referencing special \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ein\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from 
\"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst sp = new databricks.ServicePrincipal(\"sp\", {displayName: \"Admin SP\"});\nconst i_am_admin = new databricks.GroupMember(\"i-am-admin\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: sp.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadmins = databricks.get_group(display_name=\"admins\")\nsp = databricks.ServicePrincipal(\"sp\", display_name=\"Admin SP\")\ni_am_admin = databricks.GroupMember(\"i-am-admin\",\n    group_id=admins.id,\n    member_id=sp.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        DisplayName = \"Admin SP\",\n    });\n\n    var i_am_admin = new Databricks.GroupMember(\"i-am-admin\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = sp.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsp, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Admin SP\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"i-am-admin\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: sp.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .displayName(\"Admin SP\")\n            .build());\n\n        var i_am_admin = new GroupMember(\"i-am-admin\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(sp.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Admin SP\n  i-am-admin:\n    type: databricks:GroupMember\n    properties:\n      groupId: ${admins.id}\n      memberId: 
${sp.id}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating Azure-managed service principal with cluster create permissions:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sp = new databricks.ServicePrincipal(\"sp\", {\n    applicationId: \"00000000-0000-0000-0000-000000000000\",\n    displayName: \"Example service principal\",\n    allowClusterCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsp = databricks.ServicePrincipal(\"sp\",\n    application_id=\"00000000-0000-0000-0000-000000000000\",\n    display_name=\"Example service principal\",\n    allow_cluster_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        ApplicationId = \"00000000-0000-0000-0000-000000000000\",\n        DisplayName = \"Example service principal\",\n        AllowClusterCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tApplicationId:      pulumi.String(\"00000000-0000-0000-0000-000000000000\"),\n\t\t\tDisplayName:        pulumi.String(\"Example service principal\"),\n\t\t\tAllowClusterCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .applicationId(\"00000000-0000-0000-0000-000000000000\")\n            .displayName(\"Example service principal\")\n            .allowClusterCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      applicationId: 00000000-0000-0000-0000-000000000000\n      displayName: Example service principal\n      allowClusterCreate: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating Databricks-managed service principal in AWS Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sp = new databricks.ServicePrincipal(\"sp\", {displayName: \"Automation-only SP\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsp = databricks.ServicePrincipal(\"sp\", display_name=\"Automation-only SP\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing 
Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        DisplayName = \"Automation-only SP\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Automation-only SP\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .displayName(\"Automation-only SP\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: Automation-only SP\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating Azure-managed service principal in Azure Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sp = new databricks.ServicePrincipal(\"sp\", {applicationId: \"00000000-0000-0000-0000-000000000000\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsp = databricks.ServicePrincipal(\"sp\", application_id=\"00000000-0000-0000-0000-000000000000\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sp = new Databricks.ServicePrincipal(\"sp\", new()\n    {\n        ApplicationId = \"00000000-0000-0000-0000-000000000000\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipal(ctx, \"sp\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tApplicationId: pulumi.String(\"00000000-0000-0000-0000-000000000000\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sp = new ServicePrincipal(\"sp\", ServicePrincipalArgs.builder()\n            .applicationId(\"00000000-0000-0000-0000-000000000000\")\n    
        .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sp:\n    type: databricks:ServicePrincipal\n    properties:\n      applicationId: 00000000-0000-0000-0000-000000000000\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipalSecret \" pulumi-lang-dotnet=\" databricks.ServicePrincipalSecret \" pulumi-lang-go=\" ServicePrincipalSecret \" pulumi-lang-python=\" ServicePrincipalSecret \" pulumi-lang-yaml=\" databricks.ServicePrincipalSecret \" pulumi-lang-java=\" databricks.ServicePrincipalSecret \"\u003e databricks.ServicePrincipalSecret \u003c/span\u003eto manage secrets for a service principal.\n\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n"},"active":{"type":"boolean","description":"Either service principal is active or not. True by default, but can be set to false in case of service principal deactivation with preserving service principal assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the service principal to have cluster create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the service principal to have instance pool create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"applicationId":{"type":"string","description":"This is the Azure Application ID of the given Azure service principal and will be their form of access and identity. For Databricks-managed service principals this value is auto-generated.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the service principal when deleting the resource, rather than deleting the service principal entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. 
This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the service principal and can be the full name of the service principal.\n"},"externalId":{"type":"string","description":"ID of the service principal in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create service principal: Service principal with application ID X already exists` errors and implicitly import the specified service principal into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the service principal's home directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the service principal's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the service principal, e.g. `/Users/00000000-0000-0000-0000-000000000000`.\n"},"repos":{"type":"string","description":"Personal Repos location of the service principal, e.g. `/Repos/00000000-0000-0000-0000-000000000000`.\n"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace.\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace as consumer, with limited access to workspace UI.  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"required":["aclPrincipalId","applicationId","displayName","home","repos"],"inputProperties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n"},"active":{"type":"boolean","description":"Either service principal is active or not. 
True by default, but can be set to false in case of service principal deactivation with preserving service principal assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the service principal to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the service principal to have instance pool create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"applicationId":{"type":"string","description":"This is the Azure Application ID of the given Azure service principal and will be their form of access and identity. For Databricks-managed service principals this value is auto-generated.\n","willReplaceOnChanges":true},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the service principal when deleting the resource, rather than deleting the service principal entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. 
This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the service principal and can be the full name of the service principal.\n"},"externalId":{"type":"string","description":"ID of the service principal in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create service principal: Service principal with application ID X already exists` errors and implicitly import the specified service principal into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the service principal's home directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the service principal's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the service principal, e.g. `/Users/00000000-0000-0000-0000-000000000000`.\n"},"repos":{"type":"string","description":"Personal Repos location of the service principal, e.g. `/Repos/00000000-0000-0000-0000-000000000000`.\n"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace.\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace as consumer, with limited access to workspace UI.  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering ServicePrincipal resources.\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n"},"active":{"type":"boolean","description":"Either service principal is active or not. 
True by default, but can be set to false in case of service principal deactivation with preserving service principal assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the service principal to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within the boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the service principal to have instance pool create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"applicationId":{"type":"string","description":"This is the Azure Application ID of the given Azure service principal and will be their form of access and identity. For Databricks-managed service principals this value is auto-generated.\n","willReplaceOnChanges":true},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the service principal when deleting the resource, rather than deleting the service principal entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. 
This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the service principal and can be the full name of the service principal.\n"},"externalId":{"type":"string","description":"ID of the service principal in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create service principal: Service principal with application ID X already exists` errors and implicitly import the specified service principal into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the service principal's home directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the service principal's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the service principal, e.g. `/Users/00000000-0000-0000-0000-000000000000`.\n"},"repos":{"type":"string","description":"Personal Repos location of the service principal, e.g. `/Repos/00000000-0000-0000-0000-000000000000`.\n"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace.\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the service principal to have access to a Databricks Workspace as consumer, with limited access to workspace UI.  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/servicePrincipalFederationPolicy:ServicePrincipalFederationPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nService principal federation policies allow automated workloads running outside of Databricks to access Databricks APIs without the need for Databricks secrets. 
Your application (workload) authenticates to Databricks as a Databricks service principal using tokens issued by the workload runtime, for example Github Actions.\n\nA service principal federation policy is associated with a service principal in your Databricks account, and specifies:\n* The identity provider (or issuer) from which the service principal can authenticate.\n* The workload identity (or subject) that is permitted to authenticate as the Databricks service principal.\n\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.ServicePrincipalFederationPolicy(\"this\", {\n    servicePrincipalId: 1234,\n    policyId: \"my-policy\",\n    oidcPolicy: {\n        issuer: \"https://myidp.example.com\",\n        subjectClaim: \"sub\",\n        subject: \"subject-in-token-from-myidp\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.ServicePrincipalFederationPolicy(\"this\",\n    service_principal_id=1234,\n    policy_id=\"my-policy\",\n    oidc_policy={\n        \"issuer\": \"https://myidp.example.com\",\n        \"subject_claim\": \"sub\",\n        \"subject\": \"subject-in-token-from-myidp\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.ServicePrincipalFederationPolicy(\"this\", new()\n    {\n        ServicePrincipalId = 1234,\n        PolicyId = \"my-policy\",\n        OidcPolicy = new Databricks.Inputs.ServicePrincipalFederationPolicyOidcPolicyArgs\n        {\n            Issuer = \"https://myidp.example.com\",\n            SubjectClaim = \"sub\",\n            Subject = \"subject-in-token-from-myidp\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipalFederationPolicy(ctx, \"this\", \u0026databricks.ServicePrincipalFederationPolicyArgs{\n\t\t\tServicePrincipalId: pulumi.Int(1234),\n\t\t\tPolicyId:           pulumi.String(\"my-policy\"),\n\t\t\tOidcPolicy: \u0026databricks.ServicePrincipalFederationPolicyOidcPolicyArgs{\n\t\t\t\tIssuer:       pulumi.String(\"https://myidp.example.com\"),\n\t\t\t\tSubjectClaim: pulumi.String(\"sub\"),\n\t\t\t\tSubject:      pulumi.String(\"subject-in-token-from-myidp\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipalFederationPolicy;\nimport com.pulumi.databricks.ServicePrincipalFederationPolicyArgs;\nimport com.pulumi.databricks.inputs.ServicePrincipalFederationPolicyOidcPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new ServicePrincipalFederationPolicy(\"this\", ServicePrincipalFederationPolicyArgs.builder()\n            
.servicePrincipalId(1234)\n            .policyId(\"my-policy\")\n            .oidcPolicy(ServicePrincipalFederationPolicyOidcPolicyArgs.builder()\n                .issuer(\"https://myidp.example.com\")\n                .subjectClaim(\"sub\")\n                .subject(\"subject-in-token-from-myidp\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:ServicePrincipalFederationPolicy\n    properties:\n      servicePrincipalId: 1234\n      policyId: my-policy\n      oidcPolicy:\n        issuer: https://myidp.example.com\n        subjectClaim: sub\n        subject: subject-in-token-from-myidp\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/ServicePrincipalFederationPolicyOidcPolicy:ServicePrincipalFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"required":["createTime","name","policyId","servicePrincipalId","uid","updateTime"],"inputProperties":{"description":{"type":"string","description":"Description of the federation policy\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/ServicePrincipalFederationPolicyOidcPolicy:ServicePrincipalFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering ServicePrincipalFederationPolicy resources.\n","properties":{"createTime":{"type":"string","description":"(string) - Creation time of the federation policy\n"},"description":{"type":"string","description":"Description of the federation policy\n"},"name":{"type":"string","description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. 
Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n"},"oidcPolicy":{"$ref":"#/types/databricks:index/ServicePrincipalFederationPolicyOidcPolicy:ServicePrincipalFederationPolicyOidcPolicy"},"policyId":{"type":"string","description":"(string) - The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"},"uid":{"type":"string","description":"(string) - Unique, immutable id of the federation policy\n"},"updateTime":{"type":"string","description":"(string) - Last update time of the federation policy\n"}},"type":"object"}},"databricks:index/servicePrincipalRole:ServicePrincipalRole":{"description":"This resource allows you to attach a role or\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to a databricks_service_principal.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n## Example Usage\n\nGranting a service principal access to an instance profile\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instanceProfile = new databricks.InstanceProfile(\"instance_profile\", {instanceProfileArn: \"my_instance_profile_arn\"});\nconst _this = new databricks.ServicePrincipal(\"this\", {displayName: \"My Service Principal\"});\nconst myServicePrincipalInstanceProfile = new databricks.ServicePrincipalRole(\"my_service_principal_instance_profile\", {\n    servicePrincipalId: _this.id,\n    role: instanceProfile.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance_profile = databricks.InstanceProfile(\"instance_profile\", instance_profile_arn=\"my_instance_profile_arn\")\nthis = databricks.ServicePrincipal(\"this\", display_name=\"My Service Principal\")\nmy_service_principal_instance_profile = databricks.ServicePrincipalRole(\"my_service_principal_instance_profile\",\n    service_principal_id=this.id,\n    role=instance_profile.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instanceProfile = new Databricks.InstanceProfile(\"instance_profile\", new()\n    {\n        InstanceProfileArn = \"my_instance_profile_arn\",\n    });\n\n    var @this = new Databricks.ServicePrincipal(\"this\", new()\n    {\n        DisplayName = \"My Service Principal\",\n    });\n\n    var myServicePrincipalInstanceProfile = new Databricks.ServicePrincipalRole(\"my_service_principal_instance_profile\", new()\n    {\n        ServicePrincipalId = @this.Id,\n        Role = instanceProfile.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstanceProfile, err := databricks.NewInstanceProfile(ctx, \"instance_profile\", 
\u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.String(\"my_instance_profile_arn\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewServicePrincipal(ctx, \"this\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"My Service Principal\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewServicePrincipalRole(ctx, \"my_service_principal_instance_profile\", \u0026databricks.ServicePrincipalRoleArgs{\n\t\t\tServicePrincipalId: this.ID(),\n\t\t\tRole:               instanceProfile.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.ServicePrincipalRole;\nimport com.pulumi.databricks.ServicePrincipalRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instanceProfile = new InstanceProfile(\"instanceProfile\", InstanceProfileArgs.builder()\n            .instanceProfileArn(\"my_instance_profile_arn\")\n            .build());\n\n        var this_ = new ServicePrincipal(\"this\", ServicePrincipalArgs.builder()\n            .displayName(\"My Service Principal\")\n            .build());\n\n        var myServicePrincipalInstanceProfile = new ServicePrincipalRole(\"myServicePrincipalInstanceProfile\", ServicePrincipalRoleArgs.builder()\n            .servicePrincipalId(this_.id())\n            .role(instanceProfile.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instanceProfile:\n    type: databricks:InstanceProfile\n    name: instance_profile\n    properties:\n      instanceProfileArn: my_instance_profile_arn\n  this:\n    type: databricks:ServicePrincipal\n    properties:\n      displayName: My Service Principal\n  myServicePrincipalInstanceProfile:\n    type: databricks:ServicePrincipalRole\n    name: my_service_principal_instance_profile\n    properties:\n      servicePrincipalId: ${this.id}\n      role: ${instanceProfile.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nGranting a service principal the Account Admin role.\n\n\u003e This can only be used with an account-level provider.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst tfAdmin = new databricks.ServicePrincipal(\"tf_admin\", {displayName: \"Pulumi Admin\"});\nconst tfAdminAccount = new databricks.ServicePrincipalRole(\"tf_admin_account\", {\n    servicePrincipalId: tfAdmin.id,\n    role: \"account_admin\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntf_admin = databricks.ServicePrincipal(\"tf_admin\", display_name=\"Pulumi Admin\")\ntf_admin_account = databricks.ServicePrincipalRole(\"tf_admin_account\",\n    service_principal_id=tf_admin.id,\n    role=\"account_admin\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = 
Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var tfAdmin = new Databricks.ServicePrincipal(\"tf_admin\", new()\n    {\n        DisplayName = \"Pulumi Admin\",\n    });\n\n    var tfAdminAccount = new Databricks.ServicePrincipalRole(\"tf_admin_account\", new()\n    {\n        ServicePrincipalId = tfAdmin.Id,\n        Role = \"account_admin\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttfAdmin, err := databricks.NewServicePrincipal(ctx, \"tf_admin\", \u0026databricks.ServicePrincipalArgs{\n\t\t\tDisplayName: pulumi.String(\"Pulumi Admin\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewServicePrincipalRole(ctx, \"tf_admin_account\", \u0026databricks.ServicePrincipalRoleArgs{\n\t\t\tServicePrincipalId: tfAdmin.ID(),\n\t\t\tRole:               pulumi.String(\"account_admin\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipal;\nimport com.pulumi.databricks.ServicePrincipalArgs;\nimport com.pulumi.databricks.ServicePrincipalRole;\nimport com.pulumi.databricks.ServicePrincipalRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var tfAdmin = new ServicePrincipal(\"tfAdmin\", ServicePrincipalArgs.builder()\n            .displayName(\"Pulumi Admin\")\n            .build());\n\n        var tfAdminAccount = new ServicePrincipalRole(\"tfAdminAccount\", ServicePrincipalRoleArgs.builder()\n            .servicePrincipalId(tfAdmin.id())\n            .role(\"account_admin\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  tfAdmin:\n    type: databricks:ServicePrincipal\n    name: tf_admin\n    properties:\n      displayName: Pulumi Admin\n  tfAdminAccount:\n    type: databricks:ServicePrincipalRole\n    name: tf_admin_account\n    properties:\n      servicePrincipalId: ${tfAdmin.id}\n      role: account_admin\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.UserRole \" pulumi-lang-dotnet=\" databricks.UserRole \" pulumi-lang-go=\" UserRole \" pulumi-lang-python=\" UserRole \" pulumi-lang-yaml=\" databricks.UserRole \" pulumi-lang-java=\" databricks.UserRole \"\u003e databricks.UserRole \u003c/span\u003eto attach role or\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile 
\" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.AccessControlRuleSet \" pulumi-lang-dotnet=\" databricks.AccessControlRuleSet \" pulumi-lang-go=\" AccessControlRuleSet \" pulumi-lang-python=\" AccessControlRuleSet \" pulumi-lang-yaml=\" databricks.AccessControlRuleSet \" pulumi-lang-java=\" databricks.AccessControlRuleSet \"\u003e databricks.AccessControlRuleSet \u003c/span\u003eto attach other roles to account level resources.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"role":{"type":"string","description":"This is the role name, role id, or instance profile resource.\n"},"servicePrincipalId":{"type":"string","description":"This is the id of the service principal resource.\n"}},"required":["role","servicePrincipalId"],"inputProperties":{"role":{"type":"string","description":"This is the role name, role id, or instance profile resource.\n","willReplaceOnChanges":true},"servicePrincipalId":{"type":"string","description":"This is the id of the service principal resource.\n","willReplaceOnChanges":true}},"requiredInputs":["role","servicePrincipalId"],"stateInputs":{"description":"Input properties used for looking up and filtering ServicePrincipalRole resources.\n","properties":{"role":{"type":"string","description":"This is the role name, role id, or instance profile resource.\n","willReplaceOnChanges":true},"servicePrincipalId":{"type":"string","description":"This is the id of the service principal resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/servicePrincipalSecret:ServicePrincipalSecret":{"description":"With this resource you can create a secret for a given [Service Principals](https://docs.databricks.com/administration-guide/users-groups/service-principals.html).\n\n\u003e This resource can only be used with an account-level or workspace-level provider!\n\nThis secret can be used to configure the 
Databricks Pulumi Provider to authenticate with the service principal. See Authenticating with service principal.\n\nAdditionally, the secret can be used to request OAuth tokens for the service principal, which can be used to authenticate to Databricks REST APIs. See [Authentication using OAuth tokens for service principals](https://docs.databricks.com/dev-tools/authentication-oauth.html).\n\n## Example Usage\n\nCreate service principal secret\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst terraformSp = new databricks.ServicePrincipalSecret(\"terraform_sp\", {servicePrincipalId: _this.id});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nterraform_sp = databricks.ServicePrincipalSecret(\"terraform_sp\", service_principal_id=this[\"id\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var terraformSp = new Databricks.ServicePrincipalSecret(\"terraform_sp\", new()\n    {\n        ServicePrincipalId = @this.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewServicePrincipalSecret(ctx, \"terraform_sp\", \u0026databricks.ServicePrincipalSecretArgs{\n\t\t\tServicePrincipalId: pulumi.Any(this.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.ServicePrincipalSecret;\nimport com.pulumi.databricks.ServicePrincipalSecretArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var terraformSp = new ServicePrincipalSecret(\"terraformSp\", ServicePrincipalSecretArgs.builder()\n            .servicePrincipalId(this_.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  terraformSp:\n    type: databricks:ServicePrincipalSecret\n    name: terraform_sp\n    properties:\n      servicePrincipalId: ${this.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nA secret can be automatically rotated by taking a dependency on the \u003cspan pulumi-lang-nodejs=\"`timeRotating`\" pulumi-lang-dotnet=\"`TimeRotating`\" pulumi-lang-go=\"`timeRotating`\" pulumi-lang-python=\"`time_rotating`\" pulumi-lang-yaml=\"`timeRotating`\" pulumi-lang-java=\"`timeRotating`\"\u003e`time_rotating`\u003c/span\u003e resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as time from \"@pulumiverse/time\";\n\nconst _this = new time.Rotating(\"this\", {rotationDays: 30});\nconst terraformSp = new databricks.ServicePrincipalSecret(\"terraform_sp\", {\n    servicePrincipalId: thisDatabricksServicePrincipal.id,\n    timeRotating: pulumi.interpolate`Pulumi (created: ${_this.rfc3339})`,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport 
pulumiverse_time as time\n\nthis = time.Rotating(\"this\", rotation_days=30)\nterraform_sp = databricks.ServicePrincipalSecret(\"terraform_sp\",\n    service_principal_id=this_databricks_service_principal[\"id\"],\n    time_rotating=this.rfc3339.apply(lambda rfc3339: f\"Pulumi (created: {rfc3339})\"))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Time = Pulumiverse.Time;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Time.Rotating(\"this\", new()\n    {\n        RotationDays = 30,\n    });\n\n    var terraformSp = new Databricks.ServicePrincipalSecret(\"terraform_sp\", new()\n    {\n        ServicePrincipalId = thisDatabricksServicePrincipal.Id,\n        TimeRotating = @this.Rfc3339.Apply(rfc3339 =\u003e $\"Pulumi (created: {rfc3339})\"),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumiverse/pulumi-time/sdk/go/time\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := time.NewRotating(ctx, \"this\", \u0026time.RotatingArgs{\n\t\t\tRotationDays: pulumi.Int(30),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewServicePrincipalSecret(ctx, \"terraform_sp\", \u0026databricks.ServicePrincipalSecretArgs{\n\t\t\tServicePrincipalId: pulumi.Any(thisDatabricksServicePrincipal.Id),\n\t\t\tTimeRotating: this.Rfc3339.ApplyT(func(rfc3339 string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"Pulumi (created: %v)\", rfc3339), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumiverse.time.Rotating;\nimport com.pulumiverse.time.RotatingArgs;\nimport com.pulumi.databricks.ServicePrincipalSecret;\nimport com.pulumi.databricks.ServicePrincipalSecretArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Rotating(\"this\", RotatingArgs.builder()\n            .rotationDays(30)\n            .build());\n\n        var terraformSp = new ServicePrincipalSecret(\"terraformSp\", ServicePrincipalSecretArgs.builder()\n            .servicePrincipalId(thisDatabricksServicePrincipal.id())\n            .timeRotating(this_.rfc3339().applyValue(_rfc3339 -\u003e String.format(\"Pulumi (created: %s)\", _rfc3339)))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: time:Rotating\n    properties:\n      rotationDays: 30\n  terraformSp:\n    type: databricks:ServicePrincipalSecret\n    name: terraform_sp\n    properties:\n      servicePrincipalId: ${thisDatabricksServicePrincipal.id}\n      timeRotating: 'Pulumi (created: ${this.rfc3339})'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" 
databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eto manage [Service Principals](https://docs.databricks.com/administration-guide/users-groups/service-principals.html) in Databricks\n","properties":{"createTime":{"type":"string","description":"UTC time when the secret was created.\n"},"expireTime":{"type":"string","description":"UTC time when the secret will expire. If the field is not present, the secret does not expire.\n"},"lifetime":{"type":"string","description":"The lifetime of the secret in seconds formatted as `NNNNs`. If this parameter is not provided, the secret will have a default lifetime of 730 days (\u003cspan pulumi-lang-nodejs=\"`63072000s`\" pulumi-lang-dotnet=\"`63072000s`\" pulumi-lang-go=\"`63072000s`\" pulumi-lang-python=\"`63072000s`\" pulumi-lang-yaml=\"`63072000s`\" pulumi-lang-java=\"`63072000s`\"\u003e`63072000s`\u003c/span\u003e).  Expiration of secret will lead to generation of new secret.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ServicePrincipalSecretProviderConfig:ServicePrincipalSecretProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"secret":{"type":"string","description":"**Sensitive** Generated secret for the service principal.\n","secret":true},"secretHash":{"type":"string","description":"Secret Hash.\n"},"servicePrincipalId":{"type":"string","description":"SCIM ID of the\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003e(not application ID).\n"},"status":{"type":"string","description":"Status of the secret (i.e., `ACTIVE` - see [REST API docs for full list](https://docs.databricks.com/api/account/serviceprincipalsecrets/list#secrets-status)).\n"},"timeRotating":{"type":"string","description":"Changing this argument forces recreation of the secret.\n"},"updateTime":{"type":"string","description":"UTC time when the secret was updated.\n"}},"required":["createTime","expireTime","lifetime","secret","secretHash","servicePrincipalId","status","updateTime"],"inputProperties":{"createTime":{"type":"string","description":"UTC time when the secret was created.\n"},"expireTime":{"type":"string","description":"UTC time when the secret will expire. If the field is not present, the secret does not expire.\n"},"lifetime":{"type":"string","description":"The lifetime of the secret in seconds formatted as `NNNNs`. If this parameter is not provided, the secret will have a default lifetime of 730 days (\u003cspan pulumi-lang-nodejs=\"`63072000s`\" pulumi-lang-dotnet=\"`63072000s`\" pulumi-lang-go=\"`63072000s`\" pulumi-lang-python=\"`63072000s`\" pulumi-lang-yaml=\"`63072000s`\" pulumi-lang-java=\"`63072000s`\"\u003e`63072000s`\u003c/span\u003e).  Expiration of secret will lead to generation of new secret.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ServicePrincipalSecretProviderConfig:ServicePrincipalSecretProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"secret":{"type":"string","description":"**Sensitive** Generated secret for the service principal.\n","secret":true},"secretHash":{"type":"string","description":"Secret Hash.\n"},"servicePrincipalId":{"type":"string","description":"SCIM ID of the\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003e(not application ID).\n","willReplaceOnChanges":true},"status":{"type":"string","description":"Status of the secret (i.e., `ACTIVE` - see [REST API docs for full list](https://docs.databricks.com/api/account/serviceprincipalsecrets/list#secrets-status)).\n"},"timeRotating":{"type":"string","description":"Changing this argument forces recreation of the secret.\n","willReplaceOnChanges":true},"updateTime":{"type":"string","description":"UTC time when the secret was updated.\n"}},"requiredInputs":["servicePrincipalId"],"stateInputs":{"description":"Input properties used for looking up and filtering ServicePrincipalSecret resources.\n","properties":{"createTime":{"type":"string","description":"UTC time when the secret was created.\n"},"expireTime":{"type":"string","description":"UTC time when the secret will expire. If the field is not present, the secret does not expire.\n"},"lifetime":{"type":"string","description":"The lifetime of the secret in seconds formatted as `NNNNs`. If this parameter is not provided, the secret will have a default lifetime of 730 days (\u003cspan pulumi-lang-nodejs=\"`63072000s`\" pulumi-lang-dotnet=\"`63072000s`\" pulumi-lang-go=\"`63072000s`\" pulumi-lang-python=\"`63072000s`\" pulumi-lang-yaml=\"`63072000s`\" pulumi-lang-java=\"`63072000s`\"\u003e`63072000s`\u003c/span\u003e).  Expiration of secret will lead to generation of new secret.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/ServicePrincipalSecretProviderConfig:ServicePrincipalSecretProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"secret":{"type":"string","description":"**Sensitive** Generated secret for the service principal.\n","secret":true},"secretHash":{"type":"string","description":"Secret Hash.\n"},"servicePrincipalId":{"type":"string","description":"SCIM ID of the\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003e(not application ID).\n","willReplaceOnChanges":true},"status":{"type":"string","description":"Status of the secret (i.e., `ACTIVE` - see [REST API docs for full list](https://docs.databricks.com/api/account/serviceprincipalsecrets/list#secrets-status)).\n"},"timeRotating":{"type":"string","description":"Changing this argument forces recreation of the secret.\n","willReplaceOnChanges":true},"updateTime":{"type":"string","description":"UTC time when the secret was updated.\n"}},"type":"object"}},"databricks:index/share:Share":{"description":"In Delta Sharing, a share is a read-only collection of tables and table partitions that a provider wants to share with one or more recipients. If your recipient uses a Unity Catalog-enabled Databricks workspace, you can also include notebook files, views (including dynamic views that restrict access at the row and column level), Unity Catalog volumes, and Unity Catalog models in a share.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nIn a Unity Catalog-enabled Databricks workspace, a share is a securable object registered in Unity Catalog. A \u003cspan pulumi-lang-nodejs=\"`databricks.Share`\" pulumi-lang-dotnet=\"`databricks.Share`\" pulumi-lang-go=\"`Share`\" pulumi-lang-python=\"`Share`\" pulumi-lang-yaml=\"`databricks.Share`\" pulumi-lang-java=\"`databricks.Share`\"\u003e`databricks.Share`\u003c/span\u003e is contained within a databricks_metastore. If you remove a share from your Unity Catalog metastore, all recipients of that share lose the ability to access it.\n\n## Example Usage\n\n\u003e In Pulumi configuration, it is recommended to define objects in alphabetical order of their \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e arguments, so that you get consistent and readable diff. Whenever objects are added or removed, or \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e is renamed, you'll observe a change in the majority of tasks. It's related to the fact that the current version of the provider treats \u003cspan pulumi-lang-nodejs=\"`object`\" pulumi-lang-dotnet=\"`Object`\" pulumi-lang-go=\"`object`\" pulumi-lang-python=\"`object`\" pulumi-lang-yaml=\"`object`\" pulumi-lang-java=\"`object`\"\u003e`object`\u003c/span\u003e blocks as an ordered list. 
Alternatively, \u003cspan pulumi-lang-nodejs=\"`object`\" pulumi-lang-dotnet=\"`Object`\" pulumi-lang-go=\"`object`\" pulumi-lang-python=\"`object`\" pulumi-lang-yaml=\"`object`\" pulumi-lang-java=\"`object`\"\u003e`object`\u003c/span\u003e block could have been an unordered set, though end-users would see the entire block replaced upon a change in single property of the task.\n\nCreating a Delta Sharing share and add some existing tables to it\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst things = databricks.getTables({\n    catalogName: \"sandbox\",\n    schemaName: \"things\",\n});\nconst some = new databricks.Share(\"some\", {\n    objects: .map(entry =\u003e ({\n        name: entry.value,\n        dataObjectType: \"TABLE\",\n    })),\n    name: \"my_share\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.get_tables(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nsome = databricks.Share(\"some\",\n    objects=[{\"key\": k, \"value\": v} for k, v in things.ids].apply(lambda entries: [{\n        \"name\": entry[\"value\"],\n        \"dataObjectType\": \"TABLE\",\n    } for entry in entries]),\n    name=\"my_share\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var things = Databricks.GetTables.Invoke(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var some = new Databricks.Share(\"some\", new()\n    {\n        Objects = ,\n        Name = \"my_share\",\n    });\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating a Delta Sharing share with mixed object types (tables and volumes)\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst mixed = new databricks.Share(\"mixed\", {\n    name: \"mixed_share\",\n    objects: [\n        {\n            name: \"my_catalog.my_schema.sales_table\",\n            dataObjectType: \"TABLE\",\n            sharedAs: \"my_schema.sales_table\",\n        },\n        {\n            name: \"my_catalog.my_schema.sales_mv\",\n            dataObjectType: \"MATERIALIZED_VIEW\",\n            sharedAs: \"my_schema.sales_mv\",\n        },\n        {\n            name: \"my_catalog.my_schema.training_data\",\n            dataObjectType: \"VOLUME\",\n            stringSharedAs: \"my_schema.training_data\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmixed = databricks.Share(\"mixed\",\n    name=\"mixed_share\",\n    objects=[\n        {\n            \"name\": \"my_catalog.my_schema.sales_table\",\n            \"data_object_type\": \"TABLE\",\n            \"shared_as\": \"my_schema.sales_table\",\n        },\n        {\n            \"name\": \"my_catalog.my_schema.sales_mv\",\n            \"data_object_type\": \"MATERIALIZED_VIEW\",\n            \"shared_as\": \"my_schema.sales_mv\",\n        },\n        {\n            \"name\": \"my_catalog.my_schema.training_data\",\n            \"data_object_type\": \"VOLUME\",\n            \"string_shared_as\": \"my_schema.training_data\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await 
Deployment.RunAsync(() =\u003e \n{\n    var mixed = new Databricks.Share(\"mixed\", new()\n    {\n        Name = \"mixed_share\",\n        Objects = new[]\n        {\n            new Databricks.Inputs.ShareObjectArgs\n            {\n                Name = \"my_catalog.my_schema.sales_table\",\n                DataObjectType = \"TABLE\",\n                SharedAs = \"my_schema.sales_table\",\n            },\n            new Databricks.Inputs.ShareObjectArgs\n            {\n                Name = \"my_catalog.my_schema.sales_mv\",\n                DataObjectType = \"MATERIALIZED_VIEW\",\n                SharedAs = \"my_schema.sales_mv\",\n            },\n            new Databricks.Inputs.ShareObjectArgs\n            {\n                Name = \"my_catalog.my_schema.training_data\",\n                DataObjectType = \"VOLUME\",\n                StringSharedAs = \"my_schema.training_data\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewShare(ctx, \"mixed\", \u0026databricks.ShareArgs{\n\t\t\tName: pulumi.String(\"mixed_share\"),\n\t\t\tObjects: databricks.ShareObjectArray{\n\t\t\t\t\u0026databricks.ShareObjectArgs{\n\t\t\t\t\tName:           pulumi.String(\"my_catalog.my_schema.sales_table\"),\n\t\t\t\t\tDataObjectType: pulumi.String(\"TABLE\"),\n\t\t\t\t\tSharedAs:       pulumi.String(\"my_schema.sales_table\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.ShareObjectArgs{\n\t\t\t\t\tName:           pulumi.String(\"my_catalog.my_schema.sales_mv\"),\n\t\t\t\t\tDataObjectType: pulumi.String(\"MATERIALIZED_VIEW\"),\n\t\t\t\t\tSharedAs:       pulumi.String(\"my_schema.sales_mv\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.ShareObjectArgs{\n\t\t\t\t\tName:           pulumi.String(\"my_catalog.my_schema.training_data\"),\n\t\t\t\t\tDataObjectType: pulumi.String(\"VOLUME\"),\n\t\t\t\t\tStringSharedAs: pulumi.String(\"my_schema.training_data\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Share;\nimport com.pulumi.databricks.ShareArgs;\nimport com.pulumi.databricks.inputs.ShareObjectArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var mixed = new Share(\"mixed\", ShareArgs.builder()\n            .name(\"mixed_share\")\n            .objects(            \n                ShareObjectArgs.builder()\n                    .name(\"my_catalog.my_schema.sales_table\")\n                    .dataObjectType(\"TABLE\")\n                    .sharedAs(\"my_schema.sales_table\")\n                    .build(),\n                ShareObjectArgs.builder()\n                    .name(\"my_catalog.my_schema.sales_mv\")\n                    .dataObjectType(\"MATERIALIZED_VIEW\")\n                    .sharedAs(\"my_schema.sales_mv\")\n                    .build(),\n                ShareObjectArgs.builder()\n                    .name(\"my_catalog.my_schema.training_data\")\n                    
.dataObjectType(\"VOLUME\")\n                    .stringSharedAs(\"my_schema.training_data\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  mixed:\n    type: databricks:Share\n    properties:\n      name: mixed_share\n      objects:\n        - name: my_catalog.my_schema.sales_table\n          dataObjectType: TABLE\n          sharedAs: my_schema.sales_table\n        - name: my_catalog.my_schema.sales_mv\n          dataObjectType: MATERIALIZED_VIEW\n          sharedAs: my_schema.sales_mv\n        - name: my_catalog.my_schema.training_data\n          dataObjectType: VOLUME\n          stringSharedAs: my_schema.training_data\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating a Delta Sharing share and add a schema to it(including all current and future tables).\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst schemaShare = new databricks.Share(\"schema_share\", {\n    name: \"schema_share\",\n    objects: [{\n        name: \"catalog_name.schema_name\",\n        dataObjectType: \"SCHEMA\",\n        historyDataSharingStatus: \"ENABLED\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nschema_share = databricks.Share(\"schema_share\",\n    name=\"schema_share\",\n    objects=[{\n        \"name\": \"catalog_name.schema_name\",\n        \"data_object_type\": \"SCHEMA\",\n        \"history_data_sharing_status\": \"ENABLED\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var schemaShare = new Databricks.Share(\"schema_share\", new()\n    {\n        Name = \"schema_share\",\n        Objects = new[]\n        {\n            new Databricks.Inputs.ShareObjectArgs\n            {\n                Name = \"catalog_name.schema_name\",\n                DataObjectType = \"SCHEMA\",\n                HistoryDataSharingStatus = \"ENABLED\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewShare(ctx, \"schema_share\", \u0026databricks.ShareArgs{\n\t\t\tName: pulumi.String(\"schema_share\"),\n\t\t\tObjects: databricks.ShareObjectArray{\n\t\t\t\t\u0026databricks.ShareObjectArgs{\n\t\t\t\t\tName:                     pulumi.String(\"catalog_name.schema_name\"),\n\t\t\t\t\tDataObjectType:           pulumi.String(\"SCHEMA\"),\n\t\t\t\t\tHistoryDataSharingStatus: pulumi.String(\"ENABLED\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Share;\nimport com.pulumi.databricks.ShareArgs;\nimport com.pulumi.databricks.inputs.ShareObjectArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var schemaShare = new Share(\"schemaShare\", ShareArgs.builder()\n           
 .name(\"schema_share\")\n            .objects(ShareObjectArgs.builder()\n                .name(\"catalog_name.schema_name\")\n                .dataObjectType(\"SCHEMA\")\n                .historyDataSharingStatus(\"ENABLED\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  schemaShare:\n    type: databricks:Share\n    name: schema_share\n    properties:\n      name: schema_share\n      objects:\n        - name: catalog_name.schema_name\n          dataObjectType: SCHEMA\n          historyDataSharingStatus: ENABLED\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating a Delta Sharing share and share a table with partitions spec and history\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst some = new databricks.Share(\"some\", {\n    name: \"my_share\",\n    objects: [{\n        name: \"my_catalog.my_schema.my_table\",\n        dataObjectType: \"TABLE\",\n        historyDataSharingStatus: \"ENABLED\",\n        partitions: [\n            {\n                values: [\n                    {\n                        name: \"year\",\n                        op: \"EQUAL\",\n                        value: \"2009\",\n                    },\n                    {\n                        name: \"month\",\n                        op: \"EQUAL\",\n                        value: \"12\",\n                    },\n                ],\n            },\n            {\n                values: [{\n                    name: \"year\",\n                    op: \"EQUAL\",\n                    value: \"2010\",\n                }],\n            },\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsome = databricks.Share(\"some\",\n    name=\"my_share\",\n    objects=[{\n        \"name\": \"my_catalog.my_schema.my_table\",\n        \"data_object_type\": \"TABLE\",\n        \"history_data_sharing_status\": \"ENABLED\",\n        \"partitions\": [\n            {\n                \"values\": [\n                    {\n                        \"name\": \"year\",\n                        \"op\": \"EQUAL\",\n                        \"value\": \"2009\",\n                    },\n                    {\n                        \"name\": \"month\",\n                        \"op\": \"EQUAL\",\n                        \"value\": \"12\",\n                    },\n                ],\n            },\n            {\n                \"values\": [{\n                    \"name\": \"year\",\n                    \"op\": \"EQUAL\",\n                    \"value\": \"2010\",\n                }],\n            },\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var some = new Databricks.Share(\"some\", new()\n    {\n        Name = \"my_share\",\n        Objects = new[]\n        {\n            new Databricks.Inputs.ShareObjectArgs\n            {\n                Name = \"my_catalog.my_schema.my_table\",\n                DataObjectType = \"TABLE\",\n                HistoryDataSharingStatus = \"ENABLED\",\n                Partitions = new[]\n                {\n                    new Databricks.Inputs.ShareObjectPartitionArgs\n                    {\n                        Values = new[]\n                        {\n                            new 
Databricks.Inputs.ShareObjectPartitionValueArgs\n                            {\n                                Name = \"year\",\n                                Op = \"EQUAL\",\n                                Value = \"2009\",\n                            },\n                            new Databricks.Inputs.ShareObjectPartitionValueArgs\n                            {\n                                Name = \"month\",\n                                Op = \"EQUAL\",\n                                Value = \"12\",\n                            },\n                        },\n                    },\n                    new Databricks.Inputs.ShareObjectPartitionArgs\n                    {\n                        Values = new[]\n                        {\n                            new Databricks.Inputs.ShareObjectPartitionValueArgs\n                            {\n                                Name = \"year\",\n                                Op = \"EQUAL\",\n                                Value = \"2010\",\n                            },\n                        },\n                    },\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewShare(ctx, \"some\", \u0026databricks.ShareArgs{\n\t\t\tName: pulumi.String(\"my_share\"),\n\t\t\tObjects: databricks.ShareObjectArray{\n\t\t\t\t\u0026databricks.ShareObjectArgs{\n\t\t\t\t\tName:                     pulumi.String(\"my_catalog.my_schema.my_table\"),\n\t\t\t\t\tDataObjectType:           pulumi.String(\"TABLE\"),\n\t\t\t\t\tHistoryDataSharingStatus: pulumi.String(\"ENABLED\"),\n\t\t\t\t\tPartitions: databricks.ShareObjectPartitionArray{\n\t\t\t\t\t\t\u0026databricks.ShareObjectPartitionArgs{\n\t\t\t\t\t\t\tValues: databricks.ShareObjectPartitionValueArray{\n\t\t\t\t\t\t\t\t\u0026databricks.ShareObjectPartitionValueArgs{\n\t\t\t\t\t\t\t\t\tName:  pulumi.String(\"year\"),\n\t\t\t\t\t\t\t\t\tOp:    pulumi.String(\"EQUAL\"),\n\t\t\t\t\t\t\t\t\tValue: pulumi.String(\"2009\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\u0026databricks.ShareObjectPartitionValueArgs{\n\t\t\t\t\t\t\t\t\tName:  pulumi.String(\"month\"),\n\t\t\t\t\t\t\t\t\tOp:    pulumi.String(\"EQUAL\"),\n\t\t\t\t\t\t\t\t\tValue: pulumi.String(\"12\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\u0026databricks.ShareObjectPartitionArgs{\n\t\t\t\t\t\t\tValues: databricks.ShareObjectPartitionValueArray{\n\t\t\t\t\t\t\t\t\u0026databricks.ShareObjectPartitionValueArgs{\n\t\t\t\t\t\t\t\t\tName:  pulumi.String(\"year\"),\n\t\t\t\t\t\t\t\t\tOp:    pulumi.String(\"EQUAL\"),\n\t\t\t\t\t\t\t\t\tValue: pulumi.String(\"2010\"),\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Share;\nimport com.pulumi.databricks.ShareArgs;\nimport com.pulumi.databricks.inputs.ShareObjectArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    
public static void stack(Context ctx) {\n        var some = new Share(\"some\", ShareArgs.builder()\n            .name(\"my_share\")\n            .objects(ShareObjectArgs.builder()\n                .name(\"my_catalog.my_schema.my_table\")\n                .dataObjectType(\"TABLE\")\n                .historyDataSharingStatus(\"ENABLED\")\n                .partitions(                \n                    ShareObjectPartitionArgs.builder()\n                        .values(                        \n                            ShareObjectPartitionValueArgs.builder()\n                                .name(\"year\")\n                                .op(\"EQUAL\")\n                                .value(\"2009\")\n                                .build(),\n                            ShareObjectPartitionValueArgs.builder()\n                                .name(\"month\")\n                                .op(\"EQUAL\")\n                                .value(\"12\")\n                                .build())\n                        .build(),\n                    ShareObjectPartitionArgs.builder()\n                        .values(ShareObjectPartitionValueArgs.builder()\n                            .name(\"year\")\n                            .op(\"EQUAL\")\n                            .value(\"2010\")\n                            .build())\n                        .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  some:\n    type: databricks:Share\n    properties:\n      name: my_share\n      objects:\n        - name: my_catalog.my_schema.my_table\n          dataObjectType: TABLE\n          historyDataSharingStatus: ENABLED\n          partitions:\n            - values:\n                - name: year\n                  op: EQUAL\n                  value: '2009'\n                - name: month\n                  op: EQUAL\n                  value: '12'\n            - values:\n                - name: year\n                  op: EQUAL\n                  value: '2010'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Recipient \" pulumi-lang-dotnet=\" databricks.Recipient \" pulumi-lang-go=\" Recipient \" pulumi-lang-python=\" Recipient \" pulumi-lang-yaml=\" databricks.Recipient \" pulumi-lang-java=\" databricks.Recipient \"\u003e databricks.Recipient \u003c/span\u003eto create Delta Sharing recipients.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage Delta Sharing permissions.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getShares \" pulumi-lang-dotnet=\" databricks.getShares \" pulumi-lang-go=\" getShares \" pulumi-lang-python=\" get_shares \" pulumi-lang-yaml=\" databricks.getShares \" pulumi-lang-java=\" databricks.getShares \"\u003e databricks.getShares \u003c/span\u003eto read existing Delta Sharing shares.\n\n","properties":{"comment":{"type":"string","description":"User-supplied free-form text.\n"},"createdAt":{"type":"integer","description":"Time when the share was created.\n"},"createdBy":{"type":"string","description":"The principal that created the share.\n"},"effectiveOwner":{"type":"string"},"name":{"type":"string","description":"Name of share. 
Change forces creation of a new resource.\n"},"objects":{"type":"array","items":{"$ref":"#/types/databricks:index/ShareObject:ShareObject"}},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the share owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ShareProviderConfig:ShareProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"storageLocation":{"type":"string"},"storageRoot":{"type":"string"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"required":["createdAt","createdBy","effectiveOwner","name","storageLocation","updatedAt","updatedBy"],"inputProperties":{"comment":{"type":"string","description":"User-supplied free-form text.\n"},"name":{"type":"string","description":"Name of share. Change forces creation of a new resource.\n"},"objects":{"type":"array","items":{"$ref":"#/types/databricks:index/ShareObject:ShareObject"}},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the share owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ShareProviderConfig:ShareProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"storageRoot":{"type":"string"}},"stateInputs":{"description":"Input properties used for looking up and filtering Share resources.\n","properties":{"comment":{"type":"string","description":"User-supplied free-form text.\n"},"createdAt":{"type":"integer","description":"Time when the share was created.\n"},"createdBy":{"type":"string","description":"The principal that created the share.\n"},"effectiveOwner":{"type":"string"},"name":{"type":"string","description":"Name of share. Change forces creation of a new resource.\n"},"objects":{"type":"array","items":{"$ref":"#/types/databricks:index/ShareObject:ShareObject"}},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the share owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/ShareProviderConfig:ShareProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"storageLocation":{"type":"string"},"storageRoot":{"type":"string"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"type":"object"}},"databricks:index/sqlAlert:SqlAlert":{"description":"!\u003e This resource is deprecated! 
Please switch to databricks_alert.\n\nThis resource allows you to manage [Databricks SQL Alerts](https://docs.databricks.com/sql/user/queries/index.html).\n\n\u003e To manage [SQLA resources](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sharedDir = new databricks.Directory(\"shared_dir\", {path: \"/Shared/Queries\"});\nconst _this = new databricks.SqlQuery(\"this\", {\n    dataSourceId: example.dataSourceId,\n    name: \"My Query Name\",\n    query: \"SELECT 1 AS p1, 2 as p2\",\n    parent: pulumi.interpolate`folders/${sharedDir.objectId}`,\n});\nconst alert = new databricks.SqlAlert(\"alert\", {\n    queryId: _this.id,\n    name: \"My Alert\",\n    parent: pulumi.interpolate`folders/${sharedDir.objectId}`,\n    rearm: 1,\n    options: {\n        column: \"p1\",\n        op: \"==\",\n        value: \"2\",\n        muted: false,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nshared_dir = databricks.Directory(\"shared_dir\", path=\"/Shared/Queries\")\nthis = databricks.SqlQuery(\"this\",\n    data_source_id=example[\"dataSourceId\"],\n    name=\"My Query Name\",\n    query=\"SELECT 1 AS p1, 2 as p2\",\n    parent=shared_dir.object_id.apply(lambda object_id: f\"folders/{object_id}\"))\nalert = databricks.SqlAlert(\"alert\",\n    query_id=this.id,\n    name=\"My Alert\",\n    parent=shared_dir.object_id.apply(lambda object_id: f\"folders/{object_id}\"),\n    rearm=1,\n    options={\n        \"column\": \"p1\",\n        \"op\": \"==\",\n        \"value\": \"2\",\n        \"muted\": False,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sharedDir = new Databricks.Directory(\"shared_dir\", new()\n    {\n        Path = \"/Shared/Queries\",\n    });\n\n    var @this = new Databricks.SqlQuery(\"this\", new()\n    {\n        DataSourceId = example.DataSourceId,\n        Name = \"My Query Name\",\n        Query = \"SELECT 1 AS p1, 2 as p2\",\n        Parent = sharedDir.ObjectId.Apply(objectId =\u003e $\"folders/{objectId}\"),\n    });\n\n    var alert = new Databricks.SqlAlert(\"alert\", new()\n    {\n        QueryId = @this.Id,\n        Name = \"My Alert\",\n        Parent = sharedDir.ObjectId.Apply(objectId =\u003e $\"folders/{objectId}\"),\n        Rearm = 1,\n        Options = new Databricks.Inputs.SqlAlertOptionsArgs\n        {\n            Column = \"p1\",\n            Op = \"==\",\n            Value = \"2\",\n            Muted = false,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsharedDir, err := databricks.NewDirectory(ctx, \"shared_dir\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Shared/Queries\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.NewSqlQuery(ctx, \"this\", \u0026databricks.SqlQueryArgs{\n\t\t\tDataSourceId: pulumi.Any(example.DataSourceId),\n\t\t\tName:         pulumi.String(\"My Query Name\"),\n\t\t\tQuery:        pulumi.String(\"SELECT 1 AS p1, 2 as p2\"),\n\t\t\tParent: sharedDir.ObjectId.ApplyT(func(objectId int) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"folders/%v\", objectId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlAlert(ctx, \"alert\", \u0026databricks.SqlAlertArgs{\n\t\t\tQueryId: this.ID(),\n\t\t\tName:    pulumi.String(\"My Alert\"),\n\t\t\tParent: sharedDir.ObjectId.ApplyT(func(objectId int) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"folders/%v\", objectId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tRearm: pulumi.Int(1),\n\t\t\tOptions: \u0026databricks.SqlAlertOptionsArgs{\n\t\t\t\tColumn: pulumi.String(\"p1\"),\n\t\t\t\tOp:     pulumi.String(\"==\"),\n\t\t\t\tValue:  pulumi.String(\"2\"),\n\t\t\t\tMuted:  pulumi.Bool(false),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.SqlQuery;\nimport com.pulumi.databricks.SqlQueryArgs;\nimport com.pulumi.databricks.SqlAlert;\nimport com.pulumi.databricks.SqlAlertArgs;\nimport com.pulumi.databricks.inputs.SqlAlertOptionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sharedDir = new Directory(\"sharedDir\", DirectoryArgs.builder()\n            .path(\"/Shared/Queries\")\n            .build());\n\n        var this_ = new SqlQuery(\"this\", SqlQueryArgs.builder()\n            .dataSourceId(example.dataSourceId())\n            .name(\"My Query Name\")\n            .query(\"SELECT 1 AS p1, 2 as p2\")\n            .parent(sharedDir.objectId().applyValue(_objectId -\u003e String.format(\"folders/%s\", _objectId)))\n            .build());\n\n        var alert = new SqlAlert(\"alert\", SqlAlertArgs.builder()\n            .queryId(this_.id())\n            .name(\"My Alert\")\n            .parent(sharedDir.objectId().applyValue(_objectId -\u003e String.format(\"folders/%s\", _objectId)))\n            .rearm(1)\n            .options(SqlAlertOptionsArgs.builder()\n                .column(\"p1\")\n                .op(\"==\")\n                .value(\"2\")\n                .muted(false)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedDir:\n    type: databricks:Directory\n    name: shared_dir\n    properties:\n      path: /Shared/Queries\n  this:\n    type: databricks:SqlQuery\n    properties:\n      dataSourceId: ${example.dataSourceId}\n      name: My Query 
Name\n      query: SELECT 1 AS p1, 2 as p2\n      parent: folders/${sharedDir.objectId}\n  alert:\n    type: databricks:SqlAlert\n    properties:\n      queryId: ${this.id}\n      name: My Alert\n      parent: folders/${sharedDir.objectId}\n      rearm: 1\n      options:\n        column: p1\n        op: ==\n        value: '2'\n        muted: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access Control\n\u003cspan pulumi-lang-nodejs=\"\ndatabricks.Permissions \" pulumi-lang-dotnet=\"\ndatabricks.Permissions \" pulumi-lang-go=\"\nPermissions \" pulumi-lang-python=\"\nPermissions \" pulumi-lang-yaml=\"\ndatabricks.Permissions \" pulumi-lang-java=\"\ndatabricks.Permissions \"\u003e\ndatabricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Edit*, *Run* or *View* individual alerts.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlQuery \" pulumi-lang-dotnet=\" databricks.SqlQuery \" pulumi-lang-go=\" SqlQuery \" pulumi-lang-python=\" SqlQuery \" pulumi-lang-yaml=\" databricks.SqlQuery \" pulumi-lang-java=\" databricks.SqlQuery \"\u003e databricks.SqlQuery \u003c/span\u003eto manage Databricks SQL [Queries](https://docs.databricks.com/sql/user/queries/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workspace](https://docs.databricks.com/workspace/workspace-objects.html).\n\n","properties":{"createdAt":{"type":"string"},"name":{"type":"string","description":"Name of the alert.\n"},"options":{"$ref":"#/types/databricks:index/SqlAlertOptions:SqlAlertOptions","description":"Alert configuration options.\n"},"parent":{"type":"string","description":"The identifier of the workspace folder containing the alert. The default is the user's home folder. The folder identifier is formatted as `folder/\u003cfolder_id\u003e`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlAlertProviderConfig:SqlAlertProviderConfig"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"rearm":{"type":"integer","description":"Number of seconds after being triggered before the alert rearms itself and can be triggered again. If not defined, the alert will never be triggered again.\n"},"updatedAt":{"type":"string"}},"required":["createdAt","name","options","queryId","updatedAt"],"inputProperties":{"createdAt":{"type":"string"},"name":{"type":"string","description":"Name of the alert.\n"},"options":{"$ref":"#/types/databricks:index/SqlAlertOptions:SqlAlertOptions","description":"Alert configuration options.\n"},"parent":{"type":"string","description":"The identifier of the workspace folder containing the alert. The default is the user's home folder. 
The folder identifier is formatted as `folder/\u003cfolder_id\u003e`.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlAlertProviderConfig:SqlAlertProviderConfig"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"rearm":{"type":"integer","description":"Number of seconds after being triggered before the alert rearms itself and can be triggered again. If not defined, the alert will never be triggered again.\n"},"updatedAt":{"type":"string"}},"requiredInputs":["options","queryId"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlAlert resources.\n","properties":{"createdAt":{"type":"string"},"name":{"type":"string","description":"Name of the alert.\n"},"options":{"$ref":"#/types/databricks:index/SqlAlertOptions:SqlAlertOptions","description":"Alert configuration options.\n"},"parent":{"type":"string","description":"The identifier of the workspace folder containing the alert. The default is the user's home folder. The folder identifier is formatted as `folder/\u003cfolder_id\u003e`.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlAlertProviderConfig:SqlAlertProviderConfig"},"queryId":{"type":"string","description":"ID of the query evaluated by the alert.\n"},"rearm":{"type":"integer","description":"Number of seconds after being triggered before the alert rearms itself and can be triggered again. If not defined, the alert will never be triggered again.\n"},"updatedAt":{"type":"string"}},"type":"object"}},"databricks:index/sqlDashboard:SqlDashboard":{"description":"!\u003e This resource is deprecated! Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.Dashboard \" pulumi-lang-dotnet=\" databricks.Dashboard \" pulumi-lang-go=\" Dashboard \" pulumi-lang-python=\" Dashboard \" pulumi-lang-yaml=\" databricks.Dashboard \" pulumi-lang-java=\" databricks.Dashboard \"\u003e databricks.Dashboard \u003c/span\u003eto author new AI/BI dashboards using the latest tooling.\n\nThis resource is used to manage [Legacy dashboards](https://docs.databricks.com/sql/user/dashboards/index.html). 
To manage [SQL resources](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n\u003e documentation for this resource is a work in progress.\n\nA dashboard may have one or more widgets.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sharedDir = new databricks.Directory(\"shared_dir\", {path: \"/Shared/Dashboards\"});\nconst d1 = new databricks.SqlDashboard(\"d1\", {\n    name: \"My Dashboard Name\",\n    parent: pulumi.interpolate`folders/${sharedDir.objectId}`,\n    tags: [\n        \"some-tag\",\n        \"another-tag\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nshared_dir = databricks.Directory(\"shared_dir\", path=\"/Shared/Dashboards\")\nd1 = databricks.SqlDashboard(\"d1\",\n    name=\"My Dashboard Name\",\n    parent=shared_dir.object_id.apply(lambda object_id: f\"folders/{object_id}\"),\n    tags=[\n        \"some-tag\",\n        \"another-tag\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sharedDir = new Databricks.Directory(\"shared_dir\", new()\n    {\n        Path = \"/Shared/Dashboards\",\n    });\n\n    var d1 = new Databricks.SqlDashboard(\"d1\", new()\n    {\n        Name = \"My Dashboard Name\",\n        Parent = sharedDir.ObjectId.Apply(objectId =\u003e $\"folders/{objectId}\"),\n        Tags = new[]\n        {\n            \"some-tag\",\n            \"another-tag\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsharedDir, err := databricks.NewDirectory(ctx, \"shared_dir\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Shared/Dashboards\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlDashboard(ctx, \"d1\", \u0026databricks.SqlDashboardArgs{\n\t\t\tName: pulumi.String(\"My Dashboard Name\"),\n\t\t\tParent: sharedDir.ObjectId.ApplyT(func(objectId int) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"folders/%v\", objectId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tTags: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"some-tag\"),\n\t\t\t\tpulumi.String(\"another-tag\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.SqlDashboard;\nimport 
com.pulumi.databricks.SqlDashboardArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sharedDir = new Directory(\"sharedDir\", DirectoryArgs.builder()\n            .path(\"/Shared/Dashboards\")\n            .build());\n\n        var d1 = new SqlDashboard(\"d1\", SqlDashboardArgs.builder()\n            .name(\"My Dashboard Name\")\n            .parent(sharedDir.objectId().applyValue(_objectId -\u003e String.format(\"folders/%s\", _objectId)))\n            .tags(            \n                \"some-tag\",\n                \"another-tag\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedDir:\n    type: databricks:Directory\n    name: shared_dir\n    properties:\n      path: /Shared/Dashboards\n  d1:\n    type: databricks:SqlDashboard\n    properties:\n      name: My Dashboard Name\n      parent: folders/${sharedDir.objectId}\n      tags:\n        - some-tag\n        - another-tag\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nExample permission to share dashboard with all users:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst d1 = new databricks.Permissions(\"d1\", {\n    sqlDashboardId: d1DatabricksSqlDashboard.id,\n    accessControls: [{\n        groupName: users.displayName,\n        permissionLevel: \"CAN_RUN\",\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nd1 = databricks.Permissions(\"d1\",\n    sql_dashboard_id=d1_databricks_sql_dashboard[\"id\"],\n    access_controls=[{\n        \"group_name\": users[\"displayName\"],\n        \"permission_level\": \"CAN_RUN\",\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var d1 = new Databricks.Permissions(\"d1\", new()\n    {\n        SqlDashboardId = d1DatabricksSqlDashboard.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = users.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissions(ctx, \"d1\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlDashboardId: pulumi.Any(d1DatabricksSqlDashboard.Id),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.Any(users.DisplayName),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var d1 = new Permissions(\"d1\", PermissionsArgs.builder()\n            .sqlDashboardId(d1DatabricksSqlDashboard.id())\n            .accessControls(PermissionsAccessControlArgs.builder()\n                .groupName(users.displayName())\n                .permissionLevel(\"CAN_RUN\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  d1:\n    type: databricks:Permissions\n    properties:\n      sqlDashboardId: ${d1DatabricksSqlDashboard.id}\n      accessControls:\n        - groupName: ${users.displayName}\n          permissionLevel: CAN_RUN\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n\n","properties":{"createdAt":{"type":"string"},"dashboardFiltersEnabled":{"type":"boolean"},"name":{"type":"string"},"parent":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SqlDashboardProviderConfig:SqlDashboardProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"runAsRole":{"type":"string"},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"required":["createdAt","name","updatedAt"],"inputProperties":{"createdAt":{"type":"string"},"dashboardFiltersEnabled":{"type":"boolean"},"name":{"type":"string"},"parent":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlDashboardProviderConfig:SqlDashboardProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"runAsRole":{"type":"string"},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"stateInputs":{"description":"Input properties used for looking up and filtering SqlDashboard resources.\n","properties":{"createdAt":{"type":"string"},"dashboardFiltersEnabled":{"type":"boolean"},"name":{"type":"string"},"parent":{"type":"string","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlDashboardProviderConfig:SqlDashboardProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"runAsRole":{"type":"string"},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"type":"object"}},"databricks:index/sqlEndpoint:SqlEndpoint":{"description":"This resource is used to manage [Databricks SQL warehouses](https://docs.databricks.com/sql/admin/sql-endpoints.html). To create [SQL warehouses](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = databricks.getCurrentUser({});\nconst _this = new databricks.SqlEndpoint(\"this\", {\n    name: me.then(me =\u003e `Endpoint of ${me.alphanumeric}`),\n    clusterSize: \"Small\",\n    maxNumClusters: 1,\n    tags: {\n        customTags: [{\n            key: \"City\",\n            value: \"Amsterdam\",\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.get_current_user()\nthis = databricks.SqlEndpoint(\"this\",\n    name=f\"Endpoint of {me.alphanumeric}\",\n    cluster_size=\"Small\",\n    max_num_clusters=1,\n    tags={\n        \"custom_tags\": [{\n            \"key\": \"City\",\n            \"value\": \"Amsterdam\",\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = Databricks.GetCurrentUser.Invoke();\n\n    var @this = new Databricks.SqlEndpoint(\"this\", new()\n    {\n        Name = $\"Endpoint of {me.Apply(getCurrentUserResult =\u003e getCurrentUserResult.Alphanumeric)}\",\n        ClusterSize = \"Small\",\n        MaxNumClusters = 1,\n        Tags = new Databricks.Inputs.SqlEndpointTagsArgs\n        {\n            CustomTags = new[]\n            {\n                new Databricks.Inputs.SqlEndpointTagsCustomTagArgs\n                {\n                    Key = \"City\",\n                    Value = \"Amsterdam\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tme, err := databricks.GetCurrentUser(ctx, \u0026databricks.GetCurrentUserArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlEndpoint(ctx, \"this\", \u0026databricks.SqlEndpointArgs{\n\t\t\tName:           pulumi.Sprintf(\"Endpoint of %v\", me.Alphanumeric),\n\t\t\tClusterSize:    pulumi.String(\"Small\"),\n\t\t\tMaxNumClusters: pulumi.Int(1),\n\t\t\tTags: \u0026databricks.SqlEndpointTagsArgs{\n\t\t\t\tCustomTags: databricks.SqlEndpointTagsCustomTagArray{\n\t\t\t\t\t\u0026databricks.SqlEndpointTagsCustomTagArgs{\n\t\t\t\t\t\tKey:   pulumi.String(\"City\"),\n\t\t\t\t\t\tValue: pulumi.String(\"Amsterdam\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentUserArgs;\nimport com.pulumi.databricks.SqlEndpoint;\nimport com.pulumi.databricks.SqlEndpointArgs;\nimport com.pulumi.databricks.inputs.SqlEndpointTagsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var me = DatabricksFunctions.getCurrentUser(GetCurrentUserArgs.builder()\n            .build());\n\n        var this_ = new SqlEndpoint(\"this\", SqlEndpointArgs.builder()\n            .name(String.format(\"Endpoint of %s\", me.alphanumeric()))\n            .clusterSize(\"Small\")\n            .maxNumClusters(1)\n            .tags(SqlEndpointTagsArgs.builder()\n                .customTags(SqlEndpointTagsCustomTagArgs.builder()\n                    .key(\"City\")\n                    .value(\"Amsterdam\")\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SqlEndpoint\n    properties:\n      name: Endpoint of ${me.alphanumeric}\n      clusterSize: Small\n      maxNumClusters: 1\n      tags:\n        customTags:\n          - key: City\n            value: Amsterdam\nvariables:\n  me:\n    fn::invoke:\n      function: databricks:getCurrentUser\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Access control\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Can Use* or *Can Manage* SQL warehouses.\n* \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" 
Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n## Related resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n\n","properties":{"autoStopMins":{"type":"integer","description":"Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120, set to 0 to disable the auto stop.\n"},"channel":{"$ref":"#/types/databricks:index/SqlEndpointChannel:SqlEndpointChannel","description":"block, consisting of following fields:\n"},"clusterSize":{"type":"string","description":"The size of the clusters allocated to the endpoint: \"2X-Small\", \"X-Small\", \"Small\", \"Medium\", \"Large\", \"X-Large\", \"2X-Large\", \"3X-Large\", \"4X-Large\", \"5X-Large\".\n"},"creatorName":{"type":"string","description":"The username of the user who created the endpoint.\n"},"dataSourceId":{"type":"string","description":"(Deprecated, will be removed) ID of the data source for this endpoint. 
This is used to bind an Databricks SQL query to an endpoint.\n"},"enablePhoton":{"type":"boolean","description":"Whether to enable [Photon](https://databricks.com/product/delta-engine). This field is optional and is enabled by default.\n"},"enableServerlessCompute":{"type":"boolean","description":"Whether this SQL warehouse is a serverless endpoint. See below for details about the default values. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field explicitly.\n\n* If omitted, the default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023 (between November 1, 2022 and May 19, 2023 for Azure), the default remains the previous behavior which is default to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements).\n"},"healths":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlEndpointHealth:SqlEndpointHealth"},"description":"Health status of the endpoint.\n"},"instanceProfileArn":{"type":"string"},"jdbcUrl":{"type":"string","description":"JDBC connection string.\n"},"maxNumClusters":{"type":"integer","description":"Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this is default to \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"minNumClusters":{"type":"integer","description":"Minimum number of clusters available when a SQL warehouse is running. The default is \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"name":{"type":"string","description":"Name of the SQL warehouse. Must be unique.\n"},"noWait":{"type":"boolean","description":"Whether to skip waiting for the SQL warehouse to start after creation. Default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. 
When set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, Pulumi will create the warehouse but won't wait for it to be in a running state before completing.\n"},"numActiveSessions":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"numClusters":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"odbcParams":{"$ref":"#/types/databricks:index/SqlEndpointOdbcParams:SqlEndpointOdbcParams","description":"ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlEndpointProviderConfig:SqlEndpointProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"spotInstancePolicy":{"type":"string","description":"The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. Default is `COST_OPTIMIZED`.\n"},"state":{"type":"string","description":"The current state of the endpoint.\n"},"tags":{"$ref":"#/types/databricks:index/SqlEndpointTags:SqlEndpointTags","description":"Databricks tags all endpoint resources with these tags.\n"},"warehouseType":{"type":"string","description":"SQL warehouse type. See for [AWS](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless) or [Azure](https://learn.microsoft.com/en-us/azure/databricks/sql/admin/create-sql-warehouse#--upgrade-a-pro-or-classic-sql-warehouse-to-a-serverless-sql-warehouse). Set to `PRO` or `CLASSIC`. If the field \u003cspan pulumi-lang-nodejs=\"`enableServerlessCompute`\" pulumi-lang-dotnet=\"`EnableServerlessCompute`\" pulumi-lang-go=\"`enableServerlessCompute`\" pulumi-lang-python=\"`enable_serverless_compute`\" pulumi-lang-yaml=\"`enableServerlessCompute`\" pulumi-lang-java=\"`enableServerlessCompute`\"\u003e`enable_serverless_compute`\u003c/span\u003e has the value \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e either explicitly or through the default logic (see that field above for details), the default is `PRO`, which is required for serverless SQL warehouses. Otherwise, the default is `CLASSIC`.\n"}},"required":["clusterSize","creatorName","dataSourceId","enableServerlessCompute","healths","jdbcUrl","name","numActiveSessions","numClusters","odbcParams","state"],"inputProperties":{"autoStopMins":{"type":"integer","description":"Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120, set to 0 to disable the auto stop.\n"},"channel":{"$ref":"#/types/databricks:index/SqlEndpointChannel:SqlEndpointChannel","description":"block, consisting of following fields:\n"},"clusterSize":{"type":"string","description":"The size of the clusters allocated to the endpoint: \"2X-Small\", \"X-Small\", \"Small\", \"Medium\", \"Large\", \"X-Large\", \"2X-Large\", \"3X-Large\", \"4X-Large\", \"5X-Large\".\n"},"dataSourceId":{"type":"string","description":"(Deprecated, will be removed) ID of the data source for this endpoint. 
This is used to bind an Databricks SQL query to an endpoint.\n"},"enablePhoton":{"type":"boolean","description":"Whether to enable [Photon](https://databricks.com/product/delta-engine). This field is optional and is enabled by default.\n"},"enableServerlessCompute":{"type":"boolean","description":"Whether this SQL warehouse is a serverless endpoint. See below for details about the default values. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field explicitly.\n\n* If omitted, the default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023 (between November 1, 2022 and May 19, 2023 for Azure), the default remains the previous behavior which is default to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements).\n"},"instanceProfileArn":{"type":"string"},"maxNumClusters":{"type":"integer","description":"Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this is default to \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"minNumClusters":{"type":"integer","description":"Minimum number of clusters available when a SQL warehouse is running. The default is \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"name":{"type":"string","description":"Name of the SQL warehouse. Must be unique.\n"},"noWait":{"type":"boolean","description":"Whether to skip waiting for the SQL warehouse to start after creation. Default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. When set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, Pulumi will create the warehouse but won't wait for it to be in a running state before completing.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlEndpointProviderConfig:SqlEndpointProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"spotInstancePolicy":{"type":"string","description":"The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. Default is `COST_OPTIMIZED`.\n"},"tags":{"$ref":"#/types/databricks:index/SqlEndpointTags:SqlEndpointTags","description":"Databricks tags all endpoint resources with these tags.\n"},"warehouseType":{"type":"string","description":"SQL warehouse type. See for [AWS](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless) or [Azure](https://learn.microsoft.com/en-us/azure/databricks/sql/admin/create-sql-warehouse#--upgrade-a-pro-or-classic-sql-warehouse-to-a-serverless-sql-warehouse). Set to `PRO` or `CLASSIC`. If the field \u003cspan pulumi-lang-nodejs=\"`enableServerlessCompute`\" pulumi-lang-dotnet=\"`EnableServerlessCompute`\" pulumi-lang-go=\"`enableServerlessCompute`\" pulumi-lang-python=\"`enable_serverless_compute`\" pulumi-lang-yaml=\"`enableServerlessCompute`\" pulumi-lang-java=\"`enableServerlessCompute`\"\u003e`enable_serverless_compute`\u003c/span\u003e has the value \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e either explicitly or through the default logic (see that field above for details), the default is `PRO`, which is required for serverless SQL warehouses. Otherwise, the default is `CLASSIC`.\n"}},"requiredInputs":["clusterSize"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlEndpoint resources.\n","properties":{"autoStopMins":{"type":"integer","description":"Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120, set to 0 to disable the auto stop.\n"},"channel":{"$ref":"#/types/databricks:index/SqlEndpointChannel:SqlEndpointChannel","description":"block, consisting of following fields:\n"},"clusterSize":{"type":"string","description":"The size of the clusters allocated to the endpoint: \"2X-Small\", \"X-Small\", \"Small\", \"Medium\", \"Large\", \"X-Large\", \"2X-Large\", \"3X-Large\", \"4X-Large\", \"5X-Large\".\n"},"creatorName":{"type":"string","description":"The username of the user who created the endpoint.\n"},"dataSourceId":{"type":"string","description":"(Deprecated, will be removed) ID of the data source for this endpoint. This is used to bind an Databricks SQL query to an endpoint.\n"},"enablePhoton":{"type":"boolean","description":"Whether to enable [Photon](https://databricks.com/product/delta-engine). This field is optional and is enabled by default.\n"},"enableServerlessCompute":{"type":"boolean","description":"Whether this SQL warehouse is a serverless endpoint. See below for details about the default values. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field explicitly.\n\n* If omitted, the default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e for most workspaces. 
However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023 (between November 1, 2022 and May 19, 2023 for Azure), the default remains the previous behavior which is default to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements).\n"},"healths":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlEndpointHealth:SqlEndpointHealth"},"description":"Health status of the endpoint.\n"},"instanceProfileArn":{"type":"string"},"jdbcUrl":{"type":"string","description":"JDBC connection string.\n"},"maxNumClusters":{"type":"integer","description":"Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this is default to \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"minNumClusters":{"type":"integer","description":"Minimum number of clusters available when a SQL warehouse is running. The default is \u003cspan pulumi-lang-nodejs=\"`1`\" pulumi-lang-dotnet=\"`1`\" pulumi-lang-go=\"`1`\" pulumi-lang-python=\"`1`\" pulumi-lang-yaml=\"`1`\" pulumi-lang-java=\"`1`\"\u003e`1`\u003c/span\u003e.\n"},"name":{"type":"string","description":"Name of the SQL warehouse. Must be unique.\n"},"noWait":{"type":"boolean","description":"Whether to skip waiting for the SQL warehouse to start after creation. Default is \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. When set to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e, Pulumi will create the warehouse but won't wait for it to be in a running state before completing.\n"},"numActiveSessions":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"numClusters":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"odbcParams":{"$ref":"#/types/databricks:index/SqlEndpointOdbcParams:SqlEndpointOdbcParams","description":"ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlEndpointProviderConfig:SqlEndpointProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"spotInstancePolicy":{"type":"string","description":"The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. 
Default is `COST_OPTIMIZED`.\n"},"state":{"type":"string","description":"The current state of the endpoint.\n"},"tags":{"$ref":"#/types/databricks:index/SqlEndpointTags:SqlEndpointTags","description":"Databricks tags all endpoint resources with these tags.\n"},"warehouseType":{"type":"string","description":"SQL warehouse type. See for [AWS](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless) or [Azure](https://learn.microsoft.com/en-us/azure/databricks/sql/admin/create-sql-warehouse#--upgrade-a-pro-or-classic-sql-warehouse-to-a-serverless-sql-warehouse). Set to `PRO` or `CLASSIC`. If the field \u003cspan pulumi-lang-nodejs=\"`enableServerlessCompute`\" pulumi-lang-dotnet=\"`EnableServerlessCompute`\" pulumi-lang-go=\"`enableServerlessCompute`\" pulumi-lang-python=\"`enable_serverless_compute`\" pulumi-lang-yaml=\"`enableServerlessCompute`\" pulumi-lang-java=\"`enableServerlessCompute`\"\u003e`enable_serverless_compute`\u003c/span\u003e has the value \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e either explicitly or through the default logic (see that field above for details), the default is `PRO`, which is required for serverless SQL warehouses. Otherwise, the default is `CLASSIC`.\n"}},"type":"object"}},"databricks:index/sqlGlobalConfig:SqlGlobalConfig":{"description":"This resource configures the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eof workspace. 
*Please note that changing parameters of this resource will restart all running databricks_sql_endpoint.*  To use this resource you need to be an administrator.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n### AWS example\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.SqlGlobalConfig(\"this\", {\n    securityPolicy: \"DATA_ACCESS_CONTROL\",\n    instanceProfileArn: \"arn:....\",\n    dataAccessConfig: {\n        \"spark.sql.session.timeZone\": \"UTC\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.SqlGlobalConfig(\"this\",\n    security_policy=\"DATA_ACCESS_CONTROL\",\n    instance_profile_arn=\"arn:....\",\n    data_access_config={\n        \"spark.sql.session.timeZone\": \"UTC\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.SqlGlobalConfig(\"this\", new()\n    {\n        SecurityPolicy = \"DATA_ACCESS_CONTROL\",\n        InstanceProfileArn = \"arn:....\",\n        DataAccessConfig = \n        {\n            { \"spark.sql.session.timeZone\", \"UTC\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlGlobalConfig(ctx, \"this\", \u0026databricks.SqlGlobalConfigArgs{\n\t\t\tSecurityPolicy:     pulumi.String(\"DATA_ACCESS_CONTROL\"),\n\t\t\tInstanceProfileArn: pulumi.String(\"arn:....\"),\n\t\t\tDataAccessConfig: pulumi.StringMap{\n\t\t\t\t\"spark.sql.session.timeZone\": pulumi.String(\"UTC\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlGlobalConfig;\nimport com.pulumi.databricks.SqlGlobalConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new SqlGlobalConfig(\"this\", SqlGlobalConfigArgs.builder()\n            .securityPolicy(\"DATA_ACCESS_CONTROL\")\n            .instanceProfileArn(\"arn:....\")\n            .dataAccessConfig(Map.of(\"spark.sql.session.timeZone\", \"UTC\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SqlGlobalConfig\n    properties:\n      securityPolicy: DATA_ACCESS_CONTROL\n      instanceProfileArn: arn:....\n      dataAccessConfig:\n        spark.sql.session.timeZone: UTC\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Azure example\n\nFor Azure you should use the \u003cspan pulumi-lang-nodejs=\"`dataAccessConfig`\" pulumi-lang-dotnet=\"`DataAccessConfig`\" pulumi-lang-go=\"`dataAccessConfig`\" pulumi-lang-python=\"`data_access_config`\" pulumi-lang-yaml=\"`dataAccessConfig`\" pulumi-lang-java=\"`dataAccessConfig`\"\u003e`data_access_config`\u003c/span\u003e to provide the service 
principal configuration. You can use the Databricks SQL Admin Console UI to help you generate the right configuration values.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.SqlGlobalConfig(\"this\", {\n    securityPolicy: \"DATA_ACCESS_CONTROL\",\n    dataAccessConfig: {\n        \"spark.hadoop.fs.azure.account.auth.type\": \"OAuth\",\n        \"spark.hadoop.fs.azure.account.oauth.provider.type\": \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\",\n        \"spark.hadoop.fs.azure.account.oauth2.client.id\": applicationId,\n        \"spark.hadoop.fs.azure.account.oauth2.client.secret\": `{{secrets/${secretScope}/${secretKey}}}`,\n        \"spark.hadoop.fs.azure.account.oauth2.client.endpoint\": `https://login.microsoftonline.com/${tenantId}/oauth2/token`,\n    },\n    sqlConfigParams: {\n        ANSI_MODE: \"true\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.SqlGlobalConfig(\"this\",\n    security_policy=\"DATA_ACCESS_CONTROL\",\n    data_access_config={\n        \"spark.hadoop.fs.azure.account.auth.type\": \"OAuth\",\n        \"spark.hadoop.fs.azure.account.oauth.provider.type\": \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\",\n        \"spark.hadoop.fs.azure.account.oauth2.client.id\": application_id,\n        \"spark.hadoop.fs.azure.account.oauth2.client.secret\": f\"{{{{secrets/{secret_scope}/{secret_key}}}}}\",\n        \"spark.hadoop.fs.azure.account.oauth2.client.endpoint\": f\"https://login.microsoftonline.com/{tenant_id}/oauth2/token\",\n    },\n    sql_config_params={\n        \"ANSI_MODE\": \"true\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.SqlGlobalConfig(\"this\", new()\n    {\n        SecurityPolicy = \"DATA_ACCESS_CONTROL\",\n        DataAccessConfig = \n        {\n            { \"spark.hadoop.fs.azure.account.auth.type\", \"OAuth\" },\n            { \"spark.hadoop.fs.azure.account.oauth.provider.type\", \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\" },\n            { \"spark.hadoop.fs.azure.account.oauth2.client.id\", applicationId },\n            { \"spark.hadoop.fs.azure.account.oauth2.client.secret\", $\"{{{{secrets/{secretScope}/{secretKey}}}}}\" },\n            { \"spark.hadoop.fs.azure.account.oauth2.client.endpoint\", $\"https://login.microsoftonline.com/{tenantId}/oauth2/token\" },\n        },\n        SqlConfigParams = \n        {\n            { \"ANSI_MODE\", \"true\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlGlobalConfig(ctx, \"this\", \u0026databricks.SqlGlobalConfigArgs{\n\t\t\tSecurityPolicy: pulumi.String(\"DATA_ACCESS_CONTROL\"),\n\t\t\tDataAccessConfig: pulumi.StringMap{\n\t\t\t\t\"spark.hadoop.fs.azure.account.auth.type\":              pulumi.String(\"OAuth\"),\n\t\t\t\t\"spark.hadoop.fs.azure.account.oauth.provider.type\":    pulumi.String(\"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\"),\n\t\t\t\t\"spark.hadoop.fs.azure.account.oauth2.client.id\":       
pulumi.Any(applicationId),\n\t\t\t\t\"spark.hadoop.fs.azure.account.oauth2.client.secret\":   pulumi.Sprintf(\"{{secrets/%v/%v}}\", secretScope, secretKey),\n\t\t\t\t\"spark.hadoop.fs.azure.account.oauth2.client.endpoint\": pulumi.Sprintf(\"https://login.microsoftonline.com/%v/oauth2/token\", tenantId),\n\t\t\t},\n\t\t\tSqlConfigParams: pulumi.StringMap{\n\t\t\t\t\"ANSI_MODE\": pulumi.String(\"true\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlGlobalConfig;\nimport com.pulumi.databricks.SqlGlobalConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new SqlGlobalConfig(\"this\", SqlGlobalConfigArgs.builder()\n            .securityPolicy(\"DATA_ACCESS_CONTROL\")\n            .dataAccessConfig(Map.ofEntries(\n                Map.entry(\"spark.hadoop.fs.azure.account.auth.type\", \"OAuth\"),\n                Map.entry(\"spark.hadoop.fs.azure.account.oauth.provider.type\", \"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\"),\n                Map.entry(\"spark.hadoop.fs.azure.account.oauth2.client.id\", applicationId),\n                Map.entry(\"spark.hadoop.fs.azure.account.oauth2.client.secret\", String.format(\"{{{{secrets/%s/%s}}}}\", secretScope,secretKey)),\n                Map.entry(\"spark.hadoop.fs.azure.account.oauth2.client.endpoint\", String.format(\"https://login.microsoftonline.com/%s/oauth2/token\", tenantId))\n            ))\n            .sqlConfigParams(Map.of(\"ANSI_MODE\", \"true\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SqlGlobalConfig\n    properties:\n      securityPolicy: DATA_ACCESS_CONTROL\n      dataAccessConfig:\n        spark.hadoop.fs.azure.account.auth.type: OAuth\n        spark.hadoop.fs.azure.account.oauth.provider.type: org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider\n        spark.hadoop.fs.azure.account.oauth2.client.id: ${applicationId}\n        spark.hadoop.fs.azure.account.oauth2.client.secret: '{{secrets/${secretScope}/${secretKey}}}'\n        spark.hadoop.fs.azure.account.oauth2.client.endpoint: https://login.microsoftonline.com/${tenantId}/oauth2/token\n      sqlConfigParams:\n        ANSI_MODE: 'true'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like 
databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Warehouses](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n\n","properties":{"dataAccessConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Data access configuration for databricks_sql_endpoint, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc.  Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list.  Apply will fail if you specify a configuration that is not permitted.\n"},"enableServerlessCompute":{"type":"boolean","deprecationMessage":"This field is intended as an internal API and may be removed from the Databricks Terraform provider in the future"},"googleServiceAccount":{"type":"string","description":"used to access GCP services, such as Cloud Storage, from databricks_sql_endpoint. Please note that this parameter is only for GCP, and will generate an error if used on other clouds.\n"},"instanceProfileArn":{"type":"string","description":"databricks_instance_profile used to access storage from databricks_sql_endpoint. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlGlobalConfigProviderConfig:SqlGlobalConfigProviderConfig","description":"Configure the provider for management through the account provider. This block consists of the following fields:\n"},"securityPolicy":{"type":"string","description":"The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`, consult documentation for list of possible values\n"},"sqlConfigParams":{"type":"object","additionalProperties":{"type":"string"},"description":"SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.\n"}},"required":["enableServerlessCompute"],"inputProperties":{"dataAccessConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Data access configuration for databricks_sql_endpoint, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc.  Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list.  
Apply will fail if you specify a configuration that is not permitted.\n"},"enableServerlessCompute":{"type":"boolean","deprecationMessage":"This field is intended as an internal API and may be removed from the Databricks Terraform provider in the future"},"googleServiceAccount":{"type":"string","description":"used to access GCP services, such as Cloud Storage, from databricks_sql_endpoint. Please note that this parameter is only for GCP, and will generate an error if used on other clouds.\n"},"instanceProfileArn":{"type":"string","description":"databricks_instance_profile used to access storage from databricks_sql_endpoint. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlGlobalConfigProviderConfig:SqlGlobalConfigProviderConfig","description":"Configure the provider for management through the account provider. This block consists of the following fields:\n"},"securityPolicy":{"type":"string","description":"The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`, consult documentation for list of possible values\n"},"sqlConfigParams":{"type":"object","additionalProperties":{"type":"string"},"description":"SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering SqlGlobalConfig resources.\n","properties":{"dataAccessConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Data access configuration for databricks_sql_endpoint, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc.  Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list.  Apply will fail if you specify a configuration that is not permitted.\n"},"enableServerlessCompute":{"type":"boolean","deprecationMessage":"This field is intended as an internal API and may be removed from the Databricks Terraform provider in the future"},"googleServiceAccount":{"type":"string","description":"used to access GCP services, such as Cloud Storage, from databricks_sql_endpoint. Please note that this parameter is only for GCP, and will generate an error if used on other clouds.\n"},"instanceProfileArn":{"type":"string","description":"databricks_instance_profile used to access storage from databricks_sql_endpoint. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlGlobalConfigProviderConfig:SqlGlobalConfigProviderConfig","description":"Configure the provider for management through the account provider. This block consists of the following fields:\n"},"securityPolicy":{"type":"string","description":"The policy for controlling access to datasets. 
Default value: `DATA_ACCESS_CONTROL`, consult documentation for list of possible values\n"},"sqlConfigParams":{"type":"object","additionalProperties":{"type":"string"},"description":"SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.\n"}},"type":"object"}},"databricks:index/sqlPermissions:SqlPermissions":{"description":"\u003e Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003ewith Unity Catalog to manage data access, which provides a better and faster way for managing data security. \u003cspan pulumi-lang-nodejs=\"`databricks.Grants`\" pulumi-lang-dotnet=\"`databricks.Grants`\" pulumi-lang-go=\"`Grants`\" pulumi-lang-python=\"`Grants`\" pulumi-lang-yaml=\"`databricks.Grants`\" pulumi-lang-java=\"`databricks.Grants`\"\u003e`databricks.Grants`\u003c/span\u003e resource *doesn't require a technical cluster to perform operations*. On workspaces with Unity Catalog enabled, you may run into errors such as `Error: cannot create sql permissions: cannot read current grants: For unity catalog, please specify the catalog name explicitly. E.g. SHOW GRANT ``your.address@email.com`` ON CATALOG main`. This happens if your \u003cspan pulumi-lang-nodejs=\"`defaultCatalogName`\" pulumi-lang-dotnet=\"`DefaultCatalogName`\" pulumi-lang-go=\"`defaultCatalogName`\" pulumi-lang-python=\"`default_catalog_name`\" pulumi-lang-yaml=\"`defaultCatalogName`\" pulumi-lang-java=\"`defaultCatalogName`\"\u003e`default_catalog_name`\u003c/span\u003e was set to a UC catalog instead of \u003cspan pulumi-lang-nodejs=\"`hiveMetastore`\" pulumi-lang-dotnet=\"`HiveMetastore`\" pulumi-lang-go=\"`hiveMetastore`\" pulumi-lang-python=\"`hive_metastore`\" pulumi-lang-yaml=\"`hiveMetastore`\" pulumi-lang-java=\"`hiveMetastore`\"\u003e`hive_metastore`\u003c/span\u003e. The workaround is to re-assign the metastore again with the default catalog set to \u003cspan pulumi-lang-nodejs=\"`hiveMetastore`\" pulumi-lang-dotnet=\"`HiveMetastore`\" pulumi-lang-go=\"`hiveMetastore`\" pulumi-lang-python=\"`hive_metastore`\" pulumi-lang-yaml=\"`hiveMetastore`\" pulumi-lang-java=\"`hiveMetastore`\"\u003e`hive_metastore`\u003c/span\u003e. See databricks_metastore_assignment.\n\nThis resource manages data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](https://docs.databricks.com/security/access-control/table-acls/object-privileges.html). In order to enable Table Access control, you have to login to the workspace as administrator, go to `Admin Console`, pick the `Access Control` tab, click on the `Enable` button in the `Table Access Control` section, and click `Confirm`. The security guarantees of table access control **will only be effective if cluster access control is also turned on**. 
Please make sure that no users can create clusters in your workspace and all\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003ehave approximately the following configuration:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst clusterWithTableAccessControl = new databricks.Cluster(\"cluster_with_table_access_control\", {sparkConf: {\n    \"spark.databricks.acl.dfAclsEnabled\": \"true\",\n    \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncluster_with_table_access_control = databricks.Cluster(\"cluster_with_table_access_control\", spark_conf={\n    \"spark.databricks.acl.dfAclsEnabled\": \"true\",\n    \"spark.databricks.repl.allowedLanguages\": \"python,sql\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var clusterWithTableAccessControl = new Databricks.Cluster(\"cluster_with_table_access_control\", new()\n    {\n        SparkConf = \n        {\n            { \"spark.databricks.acl.dfAclsEnabled\", \"true\" },\n            { \"spark.databricks.repl.allowedLanguages\", \"python,sql\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewCluster(ctx, \"cluster_with_table_access_control\", \u0026databricks.ClusterArgs{\n\t\t\tSparkConf: pulumi.StringMap{\n\t\t\t\t\"spark.databricks.acl.dfAclsEnabled\":     pulumi.String(\"true\"),\n\t\t\t\t\"spark.databricks.repl.allowedLanguages\": pulumi.String(\"python,sql\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var clusterWithTableAccessControl = new Cluster(\"clusterWithTableAccessControl\", ClusterArgs.builder()\n            .sparkConf(Map.ofEntries(\n                Map.entry(\"spark.databricks.acl.dfAclsEnabled\", \"true\"),\n                Map.entry(\"spark.databricks.repl.allowedLanguages\", \"python,sql\")\n            ))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  clusterWithTableAccessControl:\n    type: databricks:Cluster\n    name: cluster_with_table_access_control\n    properties:\n      sparkConf:\n        spark.databricks.acl.dfAclsEnabled: 'true'\n        spark.databricks.repl.allowedLanguages: python,sql\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n\u003e This resource can only be used with a workspace-level provider!\n\nIt is required to define 
all permissions for a securable in a single resource, otherwise Pulumi cannot guarantee config drift prevention.\n\n## Example Usage\n\nThe following resource definition will enforce access control on a table by executing the following SQL queries on a special auto-terminating cluster it would create for this operation:\n\n* ```SHOW GRANT ON TABLE `default`.`foo` ```\n* ```REVOKE ALL PRIVILEGES ON TABLE `default`.`foo` FROM ... every group and user that has access to it ...```\n* ```GRANT MODIFY, SELECT ON TABLE `default`.`foo` TO `serge@example.com` ```\n* ```GRANT SELECT ON TABLE `default`.`foo` TO `special group` ```\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fooTable = new databricks.SqlPermissions(\"foo_table\", {\n    table: \"foo\",\n    privilegeAssignments: [\n        {\n            principal: \"serge@example.com\",\n            privileges: [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        },\n        {\n            principal: \"special group\",\n            privileges: [\"SELECT\"],\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfoo_table = databricks.SqlPermissions(\"foo_table\",\n    table=\"foo\",\n    privilege_assignments=[\n        {\n            \"principal\": \"serge@example.com\",\n            \"privileges\": [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        },\n        {\n            \"principal\": \"special group\",\n            \"privileges\": [\"SELECT\"],\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fooTable = new Databricks.SqlPermissions(\"foo_table\", new()\n    {\n        Table = \"foo\",\n        PrivilegeAssignments = new[]\n        {\n            new Databricks.Inputs.SqlPermissionsPrivilegeAssignmentArgs\n            {\n                Principal = \"serge@example.com\",\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                    \"MODIFY\",\n                },\n            },\n            new Databricks.Inputs.SqlPermissionsPrivilegeAssignmentArgs\n            {\n                Principal = \"special group\",\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlPermissions(ctx, \"foo_table\", \u0026databricks.SqlPermissionsArgs{\n\t\t\tTable: pulumi.String(\"foo\"),\n\t\t\tPrivilegeAssignments: databricks.SqlPermissionsPrivilegeAssignmentArray{\n\t\t\t\t\u0026databricks.SqlPermissionsPrivilegeAssignmentArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"serge@example.com\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlPermissionsPrivilegeAssignmentArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"special group\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != 
nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlPermissions;\nimport com.pulumi.databricks.SqlPermissionsArgs;\nimport com.pulumi.databricks.inputs.SqlPermissionsPrivilegeAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var fooTable = new SqlPermissions(\"fooTable\", SqlPermissionsArgs.builder()\n            .table(\"foo\")\n            .privilegeAssignments(            \n                SqlPermissionsPrivilegeAssignmentArgs.builder()\n                    .principal(\"serge@example.com\")\n                    .privileges(                    \n                        \"SELECT\",\n                        \"MODIFY\")\n                    .build(),\n                SqlPermissionsPrivilegeAssignmentArgs.builder()\n                    .principal(\"special group\")\n                    .privileges(\"SELECT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  fooTable:\n    type: databricks:SqlPermissions\n    name: foo_table\n    properties:\n      table: foo\n      privilegeAssignments:\n        - principal: serge@example.com\n          privileges:\n            - SELECT\n            - MODIFY\n        - principal: special group\n          privileges:\n            - SELECT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" 
databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n\n","properties":{"anonymousFunction":{"type":"boolean","description":"If this access control for using an anonymous function. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"anyFile":{"type":"boolean","description":"If this access control for reading/writing any file. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"catalog":{"type":"boolean","description":"If this access control for the entire catalog. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n"},"clusterId":{"type":"string","description":"Id of an existing databricks_cluster, where the appropriate `GRANT`/`REVOKE` commands are executed. This cluster must have the appropriate data security mode (`USER_ISOLATION` or `LEGACY_TABLE_ACL` specified). If no \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e is specified, a TACL-enabled cluster with the name `terraform-table-acl` is automatically created.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fooTable = new databricks.SqlPermissions(\"foo_table\", {clusterId: clusterName.id});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfoo_table = databricks.SqlPermissions(\"foo_table\", cluster_id=cluster_name[\"id\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fooTable = new Databricks.SqlPermissions(\"foo_table\", new()\n    {\n        ClusterId = clusterName.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlPermissions(ctx, \"foo_table\", \u0026databricks.SqlPermissionsArgs{\n\t\t\tClusterId: pulumi.Any(clusterName.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlPermissions;\nimport com.pulumi.databricks.SqlPermissionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n    
    var fooTable = new SqlPermissions(\"fooTable\", SqlPermissionsArgs.builder()\n            .clusterId(clusterName.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  fooTable:\n    type: databricks:SqlPermissions\n    name: foo_table\n    properties:\n      clusterId: ${clusterName.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThe following arguments are available to specify the data object you need to enforce access controls on. You must specify only one of those arguments (except for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`view`\" pulumi-lang-dotnet=\"`View`\" pulumi-lang-go=\"`view`\" pulumi-lang-python=\"`view`\" pulumi-lang-yaml=\"`view`\" pulumi-lang-java=\"`view`\"\u003e`view`\u003c/span\u003e), otherwise resource creation will fail.\n"},"database":{"type":"string","description":"Name of the database. Has a default value of \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n"},"privilegeAssignments":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlPermissionsPrivilegeAssignment:SqlPermissionsPrivilegeAssignment"}},"table":{"type":"string","description":"Name of the table. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n"},"view":{"type":"string","description":"Name of the view. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n"}},"required":["clusterId"],"inputProperties":{"anonymousFunction":{"type":"boolean","description":"If this access control for using an anonymous function. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"anyFile":{"type":"boolean","description":"If this access control for reading/writing any file. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"catalog":{"type":"boolean","description":"If this access control for the entire catalog. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"clusterId":{"type":"string","description":"Id of an existing databricks_cluster, where the appropriate `GRANT`/`REVOKE` commands are executed. This cluster must have the appropriate data security mode (`USER_ISOLATION` or `LEGACY_TABLE_ACL` specified). 
If no \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e is specified, a TACL-enabled cluster with the name `terraform-table-acl` is automatically created.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fooTable = new databricks.SqlPermissions(\"foo_table\", {clusterId: clusterName.id});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfoo_table = databricks.SqlPermissions(\"foo_table\", cluster_id=cluster_name[\"id\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fooTable = new Databricks.SqlPermissions(\"foo_table\", new()\n    {\n        ClusterId = clusterName.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlPermissions(ctx, \"foo_table\", \u0026databricks.SqlPermissionsArgs{\n\t\t\tClusterId: pulumi.Any(clusterName.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlPermissions;\nimport com.pulumi.databricks.SqlPermissionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var fooTable = new SqlPermissions(\"fooTable\", SqlPermissionsArgs.builder()\n            .clusterId(clusterName.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  fooTable:\n    type: databricks:SqlPermissions\n    name: foo_table\n    properties:\n      clusterId: ${clusterName.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThe following arguments are available to specify the data object you need to enforce access controls on. You must specify only one of those arguments (except for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`view`\" pulumi-lang-dotnet=\"`View`\" pulumi-lang-go=\"`view`\" pulumi-lang-python=\"`view`\" pulumi-lang-yaml=\"`view`\" pulumi-lang-java=\"`view`\"\u003e`view`\u003c/span\u003e), otherwise resource creation will fail.\n"},"database":{"type":"string","description":"Name of the database. 
Has a default value of \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n","willReplaceOnChanges":true},"privilegeAssignments":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlPermissionsPrivilegeAssignment:SqlPermissionsPrivilegeAssignment"}},"table":{"type":"string","description":"Name of the table. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n","willReplaceOnChanges":true},"view":{"type":"string","description":"Name of the view. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n","willReplaceOnChanges":true}},"stateInputs":{"description":"Input properties used for looking up and filtering SqlPermissions resources.\n","properties":{"anonymousFunction":{"type":"boolean","description":"If this access control for using an anonymous function. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"anyFile":{"type":"boolean","description":"If this access control for reading/writing any file. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"catalog":{"type":"boolean","description":"If this access control for the entire catalog. Defaults to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"clusterId":{"type":"string","description":"Id of an existing databricks_cluster, where the appropriate `GRANT`/`REVOKE` commands are executed. This cluster must have the appropriate data security mode (`USER_ISOLATION` or `LEGACY_TABLE_ACL` specified). 
If no \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e is specified, a TACL-enabled cluster with the name `terraform-table-acl` is automatically created.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fooTable = new databricks.SqlPermissions(\"foo_table\", {clusterId: clusterName.id});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfoo_table = databricks.SqlPermissions(\"foo_table\", cluster_id=cluster_name[\"id\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fooTable = new Databricks.SqlPermissions(\"foo_table\", new()\n    {\n        ClusterId = clusterName.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlPermissions(ctx, \"foo_table\", \u0026databricks.SqlPermissionsArgs{\n\t\t\tClusterId: pulumi.Any(clusterName.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlPermissions;\nimport com.pulumi.databricks.SqlPermissionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var fooTable = new SqlPermissions(\"fooTable\", SqlPermissionsArgs.builder()\n            .clusterId(clusterName.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  fooTable:\n    type: databricks:SqlPermissions\n    name: foo_table\n    properties:\n      clusterId: ${clusterName.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThe following arguments are available to specify the data object you need to enforce access controls on. You must specify only one of those arguments (except for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`view`\" pulumi-lang-dotnet=\"`View`\" pulumi-lang-go=\"`view`\" pulumi-lang-python=\"`view`\" pulumi-lang-yaml=\"`view`\" pulumi-lang-java=\"`view`\"\u003e`view`\u003c/span\u003e), otherwise resource creation will fail.\n"},"database":{"type":"string","description":"Name of the database. 
Has a default value of \u003cspan pulumi-lang-nodejs=\"`default`\" pulumi-lang-dotnet=\"`Default`\" pulumi-lang-go=\"`default`\" pulumi-lang-python=\"`default`\" pulumi-lang-yaml=\"`default`\" pulumi-lang-java=\"`default`\"\u003e`default`\u003c/span\u003e.\n","willReplaceOnChanges":true},"privilegeAssignments":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlPermissionsPrivilegeAssignment:SqlPermissionsPrivilegeAssignment"}},"table":{"type":"string","description":"Name of the table. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n","willReplaceOnChanges":true},"view":{"type":"string","description":"Name of the view. Can be combined with the \u003cspan pulumi-lang-nodejs=\"`database`\" pulumi-lang-dotnet=\"`Database`\" pulumi-lang-go=\"`database`\" pulumi-lang-python=\"`database`\" pulumi-lang-yaml=\"`database`\" pulumi-lang-java=\"`database`\"\u003e`database`\u003c/span\u003e.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/sqlQuery:SqlQuery":{"description":"!\u003e This resource is deprecated! Please switch to databricks_query.\n\nTo manage [SQLA resources](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n\u003e documentation for this resource is a work in progress.\n\nA query may have one or more visualizations.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sharedDir = new databricks.Directory(\"shared_dir\", {path: \"/Shared/Queries\"});\nconst q1 = new databricks.SqlQuery(\"q1\", {\n    dataSourceId: example.dataSourceId,\n    name: \"My Query Name\",\n    query: `                        SELECT {{ p1 }} AS p1\n                        WHERE 1=1\n                        AND p2 in ({{ p2 }})\n                        AND event_date \u003e date '{{ p3 }}'\n`,\n    parent: pulumi.interpolate`folders/${sharedDir.objectId}`,\n    runAsRole: \"viewer\",\n    parameters: [\n        {\n            name: \"p1\",\n            title: \"Title for p1\",\n            text: {\n                value: \"default\",\n            },\n        },\n        {\n            name: \"p2\",\n            title: \"Title for p2\",\n            \"enum\": {\n                options: [\n                    \"default\",\n                    \"foo\",\n                    \"bar\",\n                ],\n                value: \"default\",\n                multiple: {\n                    prefix: \"\\\"\",\n                    suffix: \"\\\"\",\n                    separator: \",\",\n                },\n            },\n        },\n        {\n            name: \"p3\",\n            title: \"Title for p3\",\n   
         date: {\n                value: \"2022-01-01\",\n            },\n        },\n    ],\n    tags: [\n        \"t1\",\n        \"t2\",\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nshared_dir = databricks.Directory(\"shared_dir\", path=\"/Shared/Queries\")\nq1 = databricks.SqlQuery(\"q1\",\n    data_source_id=example[\"dataSourceId\"],\n    name=\"My Query Name\",\n    query=\"\"\"                        SELECT {{ p1 }} AS p1\n                        WHERE 1=1\n                        AND p2 in ({{ p2 }})\n                        AND event_date \u003e date '{{ p3 }}'\n\"\"\",\n    parent=shared_dir.object_id.apply(lambda object_id: f\"folders/{object_id}\"),\n    run_as_role=\"viewer\",\n    parameters=[\n        {\n            \"name\": \"p1\",\n            \"title\": \"Title for p1\",\n            \"text\": {\n                \"value\": \"default\",\n            },\n        },\n        {\n            \"name\": \"p2\",\n            \"title\": \"Title for p2\",\n            \"enum\": {\n                \"options\": [\n                    \"default\",\n                    \"foo\",\n                    \"bar\",\n                ],\n                \"value\": \"default\",\n                \"multiple\": {\n                    \"prefix\": \"\\\"\",\n                    \"suffix\": \"\\\"\",\n                    \"separator\": \",\",\n                },\n            },\n        },\n        {\n            \"name\": \"p3\",\n            \"title\": \"Title for p3\",\n            \"date\": {\n                \"value\": \"2022-01-01\",\n            },\n        },\n    ],\n    tags=[\n        \"t1\",\n        \"t2\",\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sharedDir = new Databricks.Directory(\"shared_dir\", new()\n    {\n        Path = \"/Shared/Queries\",\n    });\n\n    var q1 = new Databricks.SqlQuery(\"q1\", new()\n    {\n        DataSourceId = example.DataSourceId,\n        Name = \"My Query Name\",\n        Query = @\"                        SELECT {{ p1 }} AS p1\n                        WHERE 1=1\n                        AND p2 in ({{ p2 }})\n                        AND event_date \u003e date '{{ p3 }}'\n\",\n        Parent = sharedDir.ObjectId.Apply(objectId =\u003e $\"folders/{objectId}\"),\n        RunAsRole = \"viewer\",\n        Parameters = new[]\n        {\n            new Databricks.Inputs.SqlQueryParameterArgs\n            {\n                Name = \"p1\",\n                Title = \"Title for p1\",\n                Text = new Databricks.Inputs.SqlQueryParameterTextArgs\n                {\n                    Value = \"default\",\n                },\n            },\n            new Databricks.Inputs.SqlQueryParameterArgs\n            {\n                Name = \"p2\",\n                Title = \"Title for p2\",\n                Enum = new Databricks.Inputs.SqlQueryParameterEnumArgs\n                {\n                    Options = new[]\n                    {\n                        \"default\",\n                        \"foo\",\n                        \"bar\",\n                    },\n                    Value = \"default\",\n                    Multiple = new Databricks.Inputs.SqlQueryParameterEnumMultipleArgs\n                    {\n                        Prefix = \"\\\"\",\n                        Suffix = \"\\\"\",\n                        Separator = \",\",\n                    },\n  
              },\n            },\n            new Databricks.Inputs.SqlQueryParameterArgs\n            {\n                Name = \"p3\",\n                Title = \"Title for p3\",\n                Date = new Databricks.Inputs.SqlQueryParameterDateArgs\n                {\n                    Value = \"2022-01-01\",\n                },\n            },\n        },\n        Tags = new[]\n        {\n            \"t1\",\n            \"t2\",\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsharedDir, err := databricks.NewDirectory(ctx, \"shared_dir\", \u0026databricks.DirectoryArgs{\n\t\t\tPath: pulumi.String(\"/Shared/Queries\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlQuery(ctx, \"q1\", \u0026databricks.SqlQueryArgs{\n\t\t\tDataSourceId: pulumi.Any(example.DataSourceId),\n\t\t\tName:         pulumi.String(\"My Query Name\"),\n\t\t\tQuery:        pulumi.String(\"                        SELECT {{ p1 }} AS p1\\n                        WHERE 1=1\\n                        AND p2 in ({{ p2 }})\\n                        AND event_date \u003e date '{{ p3 }}'\\n\"),\n\t\t\tParent: sharedDir.ObjectId.ApplyT(func(objectId int) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"folders/%v\", objectId), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tRunAsRole: pulumi.String(\"viewer\"),\n\t\t\tParameters: databricks.SqlQueryParameterArray{\n\t\t\t\t\u0026databricks.SqlQueryParameterArgs{\n\t\t\t\t\tName:  pulumi.String(\"p1\"),\n\t\t\t\t\tTitle: pulumi.String(\"Title for p1\"),\n\t\t\t\t\tText: \u0026databricks.SqlQueryParameterTextArgs{\n\t\t\t\t\t\tValue: pulumi.String(\"default\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlQueryParameterArgs{\n\t\t\t\t\tName:  pulumi.String(\"p2\"),\n\t\t\t\t\tTitle: pulumi.String(\"Title for p2\"),\n\t\t\t\t\tEnum: \u0026databricks.SqlQueryParameterEnumArgs{\n\t\t\t\t\t\tOptions: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"default\"),\n\t\t\t\t\t\t\tpulumi.String(\"foo\"),\n\t\t\t\t\t\t\tpulumi.String(\"bar\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t\tValue: pulumi.String(\"default\"),\n\t\t\t\t\t\tMultiple: \u0026databricks.SqlQueryParameterEnumMultipleArgs{\n\t\t\t\t\t\t\tPrefix:    pulumi.String(\"\\\"\"),\n\t\t\t\t\t\t\tSuffix:    pulumi.String(\"\\\"\"),\n\t\t\t\t\t\t\tSeparator: pulumi.String(\",\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlQueryParameterArgs{\n\t\t\t\t\tName:  pulumi.String(\"p3\"),\n\t\t\t\t\tTitle: pulumi.String(\"Title for p3\"),\n\t\t\t\t\tDate: \u0026databricks.SqlQueryParameterDateArgs{\n\t\t\t\t\t\tValue: pulumi.String(\"2022-01-01\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tTags: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"t1\"),\n\t\t\t\tpulumi.String(\"t2\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Directory;\nimport com.pulumi.databricks.DirectoryArgs;\nimport com.pulumi.databricks.SqlQuery;\nimport com.pulumi.databricks.SqlQueryArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterTextArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterEnumArgs;\nimport 
com.pulumi.databricks.inputs.SqlQueryParameterEnumMultipleArgs;\nimport com.pulumi.databricks.inputs.SqlQueryParameterDateArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sharedDir = new Directory(\"sharedDir\", DirectoryArgs.builder()\n            .path(\"/Shared/Queries\")\n            .build());\n\n        var q1 = new SqlQuery(\"q1\", SqlQueryArgs.builder()\n            .dataSourceId(example.dataSourceId())\n            .name(\"My Query Name\")\n            .query(\"\"\"\n                        SELECT {{ p1 }} AS p1\n                        WHERE 1=1\n                        AND p2 in ({{ p2 }})\n                        AND event_date \u003e date '{{ p3 }}'\n            \"\"\")\n            .parent(sharedDir.objectId().applyValue(_objectId -\u003e String.format(\"folders/%s\", _objectId)))\n            .runAsRole(\"viewer\")\n            .parameters(            \n                SqlQueryParameterArgs.builder()\n                    .name(\"p1\")\n                    .title(\"Title for p1\")\n                    .text(SqlQueryParameterTextArgs.builder()\n                        .value(\"default\")\n                        .build())\n                    .build(),\n                SqlQueryParameterArgs.builder()\n                    .name(\"p2\")\n                    .title(\"Title for p2\")\n                    .enum_(SqlQueryParameterEnumArgs.builder()\n                        .options(                        \n                            \"default\",\n                            \"foo\",\n                            \"bar\")\n                        .value(\"default\")\n                        .multiple(SqlQueryParameterEnumMultipleArgs.builder()\n                            .prefix(\"\\\"\")\n                            .suffix(\"\\\"\")\n                            .separator(\",\")\n                            .build())\n                        .build())\n                    .build(),\n                SqlQueryParameterArgs.builder()\n                    .name(\"p3\")\n                    .title(\"Title for p3\")\n                    .date(SqlQueryParameterDateArgs.builder()\n                        .value(\"2022-01-01\")\n                        .build())\n                    .build())\n            .tags(            \n                \"t1\",\n                \"t2\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sharedDir:\n    type: databricks:Directory\n    name: shared_dir\n    properties:\n      path: /Shared/Queries\n  q1:\n    type: databricks:SqlQuery\n    properties:\n      dataSourceId: ${example.dataSourceId}\n      name: My Query Name\n      query: |2\n                                SELECT {{ p1 }} AS p1\n                                WHERE 1=1\n                                AND p2 in ({{ p2 }})\n                                AND event_date \u003e date '{{ p3 }}'\n      parent: folders/${sharedDir.objectId}\n      runAsRole: viewer\n      parameters:\n        - name: p1\n          title: Title for p1\n          text:\n            value: default\n        - name: p2\n          title: Title for p2\n          enum:\n            options:\n              - default\n              - foo\n              - bar\n            value: default\n            multiple:\n              prefix: 
'\"'\n              suffix: '\"'\n              separator: ','\n        - name: p3\n          title: Title for p3\n          date:\n            value: 2022-01-01\n      tags:\n        - t1\n        - t2\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nExample permission to share query with all users:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst q1 = new databricks.Permissions(\"q1\", {\n    sqlQueryId: q1DatabricksSqlQuery.id,\n    accessControls: [\n        {\n            groupName: users.displayName,\n            permissionLevel: \"CAN_RUN\",\n        },\n        {\n            groupName: team.displayName,\n            permissionLevel: \"CAN_EDIT\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nq1 = databricks.Permissions(\"q1\",\n    sql_query_id=q1_databricks_sql_query[\"id\"],\n    access_controls=[\n        {\n            \"group_name\": users[\"displayName\"],\n            \"permission_level\": \"CAN_RUN\",\n        },\n        {\n            \"group_name\": team[\"displayName\"],\n            \"permission_level\": \"CAN_EDIT\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var q1 = new Databricks.Permissions(\"q1\", new()\n    {\n        SqlQueryId = q1DatabricksSqlQuery.Id,\n        AccessControls = new[]\n        {\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = users.DisplayName,\n                PermissionLevel = \"CAN_RUN\",\n            },\n            new Databricks.Inputs.PermissionsAccessControlArgs\n            {\n                GroupName = team.DisplayName,\n                PermissionLevel = \"CAN_EDIT\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewPermissions(ctx, \"q1\", \u0026databricks.PermissionsArgs{\n\t\t\tSqlQueryId: pulumi.Any(q1DatabricksSqlQuery.Id),\n\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.Any(users.DisplayName),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_RUN\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\tGroupName:       pulumi.Any(team.DisplayName),\n\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_EDIT\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var q1 = new Permissions(\"q1\", PermissionsArgs.builder()\n            
.sqlQueryId(q1DatabricksSqlQuery.id())\n            .accessControls(            \n                PermissionsAccessControlArgs.builder()\n                    .groupName(users.displayName())\n                    .permissionLevel(\"CAN_RUN\")\n                    .build(),\n                PermissionsAccessControlArgs.builder()\n                    .groupName(team.displayName())\n                    .permissionLevel(\"CAN_EDIT\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  q1:\n    type: databricks:Permissions\n    properties:\n      sqlQueryId: ${q1DatabricksSqlQuery.id}\n      accessControls:\n        - groupName: ${users.displayName}\n          permissionLevel: CAN_RUN\n        - groupName: ${team.displayName}\n          permissionLevel: CAN_EDIT\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Troubleshooting\n\nIn case you see `Error: cannot create sql query: Internal Server Error` during `pulumi up`; double check that you are using the correct \u003cspan pulumi-lang-nodejs=\"`dataSourceId`\" pulumi-lang-dotnet=\"`DataSourceId`\" pulumi-lang-go=\"`dataSourceId`\" pulumi-lang-python=\"`data_source_id`\" pulumi-lang-yaml=\"`dataSourceId`\" pulumi-lang-java=\"`dataSourceId`\"\u003e`data_source_id`\u003c/span\u003e\n\nOperations on \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e schedules are ⛔️ deprecated. You can create, update or delete a schedule for SQLA and other Databricks resources using the\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eresource.\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" 
databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto schedule Databricks SQL queries (as well as dashboards and alerts) using Databricks Jobs.\n\n","properties":{"createdAt":{"type":"string"},"dataSourceId":{"type":"string","description":"Data source ID of a SQL warehouse\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"name":{"type":"string","description":"The title of this query that appears in list views, widget headings, and on the query page.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlQueryParameter:SqlQueryParameter"}},"parent":{"type":"string","description":"The identifier of the workspace folder containing the object.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlQueryProviderConfig:SqlQueryProviderConfig"},"query":{"type":"string","description":"The text of the query to be run.\n"},"runAsRole":{"type":"string","description":"Run as role. Possible values are \u003cspan pulumi-lang-nodejs=\"`viewer`\" pulumi-lang-dotnet=\"`Viewer`\" pulumi-lang-go=\"`viewer`\" pulumi-lang-python=\"`viewer`\" pulumi-lang-yaml=\"`viewer`\" pulumi-lang-java=\"`viewer`\"\u003e`viewer`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`owner`\" pulumi-lang-dotnet=\"`Owner`\" pulumi-lang-go=\"`owner`\" pulumi-lang-python=\"`owner`\" pulumi-lang-yaml=\"`owner`\" pulumi-lang-java=\"`owner`\"\u003e`owner`\u003c/span\u003e.\n"},"schedule":{"$ref":"#/types/databricks:index/SqlQuerySchedule:SqlQuerySchedule","deprecationMessage":"Operations on \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e schedules are deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`databricks.Job`\" pulumi-lang-dotnet=\"`databricks.Job`\" pulumi-lang-go=\"`Job`\" pulumi-lang-python=\"`Job`\" pulumi-lang-yaml=\"`databricks.Job`\" pulumi-lang-java=\"`databricks.Job`\"\u003e`databricks.Job`\u003c/span\u003e resource to schedule a \u003cspan pulumi-lang-nodejs=\"`sqlTask`\" pulumi-lang-dotnet=\"`SqlTask`\" pulumi-lang-go=\"`sqlTask`\" pulumi-lang-python=\"`sql_task`\" pulumi-lang-yaml=\"`sqlTask`\" pulumi-lang-java=\"`sqlTask`\"\u003e`sql_task`\u003c/span\u003e."},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"required":["createdAt","dataSourceId","name","query","updatedAt"],"inputProperties":{"createdAt":{"type":"string"},"dataSourceId":{"type":"string","description":"Data source ID of a SQL warehouse\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"name":{"type":"string","description":"The title of this query that appears in list views, widget headings, and on the query page.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlQueryParameter:SqlQueryParameter"}},"parent":{"type":"string","description":"The identifier of the workspace folder containing the object.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlQueryProviderConfig:SqlQueryProviderConfig"},"query":{"type":"string","description":"The text of the query to be run.\n"},"runAsRole":{"type":"string","description":"Run as role. Possible values are \u003cspan pulumi-lang-nodejs=\"`viewer`\" pulumi-lang-dotnet=\"`Viewer`\" pulumi-lang-go=\"`viewer`\" pulumi-lang-python=\"`viewer`\" pulumi-lang-yaml=\"`viewer`\" pulumi-lang-java=\"`viewer`\"\u003e`viewer`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`owner`\" pulumi-lang-dotnet=\"`Owner`\" pulumi-lang-go=\"`owner`\" pulumi-lang-python=\"`owner`\" pulumi-lang-yaml=\"`owner`\" pulumi-lang-java=\"`owner`\"\u003e`owner`\u003c/span\u003e.\n"},"schedule":{"$ref":"#/types/databricks:index/SqlQuerySchedule:SqlQuerySchedule","deprecationMessage":"Operations on \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e schedules are deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`databricks.Job`\" pulumi-lang-dotnet=\"`databricks.Job`\" pulumi-lang-go=\"`Job`\" pulumi-lang-python=\"`Job`\" pulumi-lang-yaml=\"`databricks.Job`\" pulumi-lang-java=\"`databricks.Job`\"\u003e`databricks.Job`\u003c/span\u003e resource to schedule a \u003cspan pulumi-lang-nodejs=\"`sqlTask`\" pulumi-lang-dotnet=\"`SqlTask`\" pulumi-lang-go=\"`sqlTask`\" pulumi-lang-python=\"`sql_task`\" pulumi-lang-yaml=\"`sqlTask`\" pulumi-lang-java=\"`sqlTask`\"\u003e`sql_task`\u003c/span\u003e."},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"requiredInputs":["dataSourceId","query"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlQuery resources.\n","properties":{"createdAt":{"type":"string"},"dataSourceId":{"type":"string","description":"Data source ID of a SQL warehouse\n"},"description":{"type":"string","description":"General description that conveys additional information about this query such as usage notes.\n"},"name":{"type":"string","description":"The title of this query that appears in list views, widget headings, and on the query page.\n"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlQueryParameter:SqlQueryParameter"}},"parent":{"type":"string","description":"The identifier of the workspace folder containing the object.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/SqlQueryProviderConfig:SqlQueryProviderConfig"},"query":{"type":"string","description":"The text of the query to be run.\n"},"runAsRole":{"type":"string","description":"Run as role. Possible values are \u003cspan pulumi-lang-nodejs=\"`viewer`\" pulumi-lang-dotnet=\"`Viewer`\" pulumi-lang-go=\"`viewer`\" pulumi-lang-python=\"`viewer`\" pulumi-lang-yaml=\"`viewer`\" pulumi-lang-java=\"`viewer`\"\u003e`viewer`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`owner`\" pulumi-lang-dotnet=\"`Owner`\" pulumi-lang-go=\"`owner`\" pulumi-lang-python=\"`owner`\" pulumi-lang-yaml=\"`owner`\" pulumi-lang-java=\"`owner`\"\u003e`owner`\u003c/span\u003e.\n"},"schedule":{"$ref":"#/types/databricks:index/SqlQuerySchedule:SqlQuerySchedule","deprecationMessage":"Operations on \u003cspan pulumi-lang-nodejs=\"`databricks.SqlQuery`\" pulumi-lang-dotnet=\"`databricks.SqlQuery`\" pulumi-lang-go=\"`SqlQuery`\" pulumi-lang-python=\"`SqlQuery`\" pulumi-lang-yaml=\"`databricks.SqlQuery`\" pulumi-lang-java=\"`databricks.SqlQuery`\"\u003e`databricks.SqlQuery`\u003c/span\u003e schedules are deprecated. 
Please use \u003cspan pulumi-lang-nodejs=\"`databricks.Job`\" pulumi-lang-dotnet=\"`databricks.Job`\" pulumi-lang-go=\"`Job`\" pulumi-lang-python=\"`Job`\" pulumi-lang-yaml=\"`databricks.Job`\" pulumi-lang-java=\"`databricks.Job`\"\u003e`databricks.Job`\u003c/span\u003e resource to schedule a \u003cspan pulumi-lang-nodejs=\"`sqlTask`\" pulumi-lang-dotnet=\"`SqlTask`\" pulumi-lang-go=\"`sqlTask`\" pulumi-lang-python=\"`sql_task`\" pulumi-lang-yaml=\"`sqlTask`\" pulumi-lang-java=\"`sqlTask`\"\u003e`sql_task`\u003c/span\u003e."},"tags":{"type":"array","items":{"type":"string"}},"updatedAt":{"type":"string"}},"type":"object"}},"databricks:index/sqlTable:SqlTable":{"description":"Within a metastore, Unity Catalog provides a 3-level namespace for organizing data: Catalogs, databases (also called schemas), and tables/views.\n\nA \u003cspan pulumi-lang-nodejs=\"`databricks.SqlTable`\" pulumi-lang-dotnet=\"`databricks.SqlTable`\" pulumi-lang-go=\"`SqlTable`\" pulumi-lang-python=\"`SqlTable`\" pulumi-lang-yaml=\"`databricks.SqlTable`\" pulumi-lang-java=\"`databricks.SqlTable`\"\u003e`databricks.SqlTable`\u003c/span\u003e is contained within databricks_schema, and can represent either a managed table, an external table, or a view.\n\nThis resource creates and updates the Unity Catalog table/view by executing the necessary SQL queries on a special auto-terminating cluster it would create for this operation. You could also specify a SQL warehouse or cluster for the queries to be executed on.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e This resource doesn't handle complex cases of schema evolution due to the limitations of Pulumi itself.  If you need to implement schema evolution it's recommended to use specialized tools, such as, [Liquibase](https://medium.com/dbsql-sme-engineering/advanced-schema-management-on-databricks-with-liquibase-1900e9f7b9c0) and [Flyway](https://medium.com/dbsql-sme-engineering/databricks-schema-management-with-flyway-527c4a9f5d67).\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst thing = new databricks.SqlTable(\"thing\", {\n    name: \"quickstart_table\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    tableType: \"MANAGED\",\n    columns: [\n        {\n            name: \"id\",\n            type: \"int\",\n        },\n        {\n            name: \"name\",\n            type: \"string\",\n            comment: \"name of thing\",\n        },\n    ],\n    comment: \"this table is managed by terraform\",\n});\nconst thingView = new databricks.SqlTable(\"thing_view\", {\n    name: \"quickstart_table_view\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    tableType: \"VIEW\",\n    clusterId: \"0423-201305-xsrt82qn\",\n    viewDefinition: std.format({\n        input: \"SELECT name FROM %s WHERE id == 1\",\n        args: [thing.id],\n    }).then(invoke =\u003e invoke.result),\n    comment: \"this view is managed by 
terraform\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.id,\n    name=\"things\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nthing = databricks.SqlTable(\"thing\",\n    name=\"quickstart_table\",\n    catalog_name=sandbox.name,\n    schema_name=things.name,\n    table_type=\"MANAGED\",\n    columns=[\n        {\n            \"name\": \"id\",\n            \"type\": \"int\",\n        },\n        {\n            \"name\": \"name\",\n            \"type\": \"string\",\n            \"comment\": \"name of thing\",\n        },\n    ],\n    comment=\"this table is managed by terraform\")\nthing_view = databricks.SqlTable(\"thing_view\",\n    name=\"quickstart_table_view\",\n    catalog_name=sandbox.name,\n    schema_name=things.name,\n    table_type=\"VIEW\",\n    cluster_id=\"0423-201305-xsrt82qn\",\n    view_definition=std.format(input=\"SELECT name FROM %s WHERE id == 1\",\n        args=[thing.id]).result,\n    comment=\"this view is managed by terraform\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this database is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var thing = new Databricks.SqlTable(\"thing\", new()\n    {\n        Name = \"quickstart_table\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"MANAGED\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"id\",\n                Type = \"int\",\n            },\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"name\",\n                Type = \"string\",\n                Comment = \"name of thing\",\n            },\n        },\n        Comment = \"this table is managed by terraform\",\n    });\n\n    var thingView = new Databricks.SqlTable(\"thing_view\", new()\n    {\n        Name = \"quickstart_table_view\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"VIEW\",\n        ClusterId = \"0423-201305-xsrt82qn\",\n        ViewDefinition = Std.Format.Invoke(new()\n        {\n            Input = \"SELECT name FROM %s WHERE id == 1\",\n            Args = new[]\n            {\n                thing.Id,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        Comment = \"this view is managed by terraform\",\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthing, err := databricks.NewSqlTable(ctx, \"thing\", \u0026databricks.SqlTableArgs{\n\t\t\tName:        pulumi.String(\"quickstart_table\"),\n\t\t\tCatalogName: sandbox.Name,\n\t\t\tSchemaName:  things.Name,\n\t\t\tTableType:   pulumi.String(\"MANAGED\"),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName: pulumi.String(\"id\"),\n\t\t\t\t\tType: pulumi.String(\"int\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName:    pulumi.String(\"name\"),\n\t\t\t\t\tType:    pulumi.String(\"string\"),\n\t\t\t\t\tComment: pulumi.String(\"name of thing\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tComment: pulumi.String(\"this table is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"SELECT name FROM %s WHERE id == 1\",\n\t\t\tArgs: pulumi.StringArray{\n\t\t\t\tthing.ID(),\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlTable(ctx, \"thing_view\", \u0026databricks.SqlTableArgs{\n\t\t\tName:           pulumi.String(\"quickstart_table_view\"),\n\t\t\tCatalogName:    sandbox.Name,\n\t\t\tSchemaName:     things.Name,\n\t\t\tTableType:      pulumi.String(\"VIEW\"),\n\t\t\tClusterId:      pulumi.String(\"0423-201305-xsrt82qn\"),\n\t\t\tViewDefinition: pulumi.String(invokeFormat.Result),\n\t\t\tComment:        pulumi.String(\"this view is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n   
         .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var thing = new SqlTable(\"thing\", SqlTableArgs.builder()\n            .name(\"quickstart_table\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"MANAGED\")\n            .columns(            \n                SqlTableColumnArgs.builder()\n                    .name(\"id\")\n                    .type(\"int\")\n                    .build(),\n                SqlTableColumnArgs.builder()\n                    .name(\"name\")\n                    .type(\"string\")\n                    .comment(\"name of thing\")\n                    .build())\n            .comment(\"this table is managed by terraform\")\n            .build());\n\n        var thingView = new SqlTable(\"thingView\", SqlTableArgs.builder()\n            .name(\"quickstart_table_view\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"VIEW\")\n            .clusterId(\"0423-201305-xsrt82qn\")\n            .viewDefinition(StdFunctions.format(FormatArgs.builder()\n                .input(\"SELECT name FROM %s WHERE id == 1\")\n                .args(thing.id())\n                .build()).result())\n            .comment(\"this view is managed by terraform\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n  thing:\n    type: databricks:SqlTable\n    properties:\n      name: quickstart_table\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: MANAGED\n      columns:\n        - name: id\n          type: int\n        - name: name\n          type: string\n          comment: name of thing\n      comment: this table is managed by terraform\n  thingView:\n    type: databricks:SqlTable\n    name: thing_view\n    properties:\n      name: quickstart_table_view\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: VIEW\n      clusterId: 0423-201305-xsrt82qn\n      viewDefinition:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: SELECT name FROM %s WHERE id == 1\n            args:\n              - ${thing.id}\n          return: result\n      comment: this view is managed by terraform\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Use an existing warehouse to create a table\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nconst _this = new databricks.SqlEndpoint(\"this\", {\n    name: \"endpoint\",\n    clusterSize: \"2X-Small\",\n    maxNumClusters: 1,\n});\nconst thing = new databricks.SqlTable(\"thing\", {\n    name: \"quickstart_table\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    
tableType: \"MANAGED\",\n    warehouseId: _this.id,\n    columns: [\n        {\n            name: \"id\",\n            type: \"int\",\n        },\n        {\n            name: \"name\",\n            type: \"string\",\n            comment: \"name of thing\",\n        },\n    ],\n    comment: \"this table is managed by terraform\",\n});\nconst thingView = new databricks.SqlTable(\"thing_view\", {\n    name: \"quickstart_table_view\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    tableType: \"VIEW\",\n    warehouseId: _this.id,\n    viewDefinition: std.format({\n        input: \"SELECT name FROM %s WHERE id == 1\",\n        args: [thing.id],\n    }).then(invoke =\u003e invoke.result),\n    comment: \"this view is managed by terraform\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nthis = databricks.SqlEndpoint(\"this\",\n    name=\"endpoint\",\n    cluster_size=\"2X-Small\",\n    max_num_clusters=1)\nthing = databricks.SqlTable(\"thing\",\n    name=\"quickstart_table\",\n    catalog_name=sandbox[\"name\"],\n    schema_name=things[\"name\"],\n    table_type=\"MANAGED\",\n    warehouse_id=this.id,\n    columns=[\n        {\n            \"name\": \"id\",\n            \"type\": \"int\",\n        },\n        {\n            \"name\": \"name\",\n            \"type\": \"string\",\n            \"comment\": \"name of thing\",\n        },\n    ],\n    comment=\"this table is managed by terraform\")\nthing_view = databricks.SqlTable(\"thing_view\",\n    name=\"quickstart_table_view\",\n    catalog_name=sandbox[\"name\"],\n    schema_name=things[\"name\"],\n    table_type=\"VIEW\",\n    warehouse_id=this.id,\n    view_definition=std.format(input=\"SELECT name FROM %s WHERE id == 1\",\n        args=[thing.id]).result,\n    comment=\"this view is managed by terraform\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.SqlEndpoint(\"this\", new()\n    {\n        Name = \"endpoint\",\n        ClusterSize = \"2X-Small\",\n        MaxNumClusters = 1,\n    });\n\n    var thing = new Databricks.SqlTable(\"thing\", new()\n    {\n        Name = \"quickstart_table\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"MANAGED\",\n        WarehouseId = @this.Id,\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"id\",\n                Type = \"int\",\n            },\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"name\",\n                Type = \"string\",\n                Comment = \"name of thing\",\n            },\n        },\n        Comment = \"this table is managed by terraform\",\n    });\n\n    var thingView = new Databricks.SqlTable(\"thing_view\", new()\n    {\n        Name = \"quickstart_table_view\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"VIEW\",\n        WarehouseId = @this.Id,\n        ViewDefinition = Std.Format.Invoke(new()\n        {\n            Input = \"SELECT name FROM %s WHERE id == 1\",\n            Args = new[]\n            {\n                thing.Id,\n            },\n        }).Apply(invoke =\u003e invoke.Result),\n        Comment = \"this view is managed by terraform\",\n    });\n\n});\n```\n```go\npackage 
main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi-std/sdk/go/std\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.NewSqlEndpoint(ctx, \"this\", \u0026databricks.SqlEndpointArgs{\n\t\t\tName:           pulumi.String(\"endpoint\"),\n\t\t\tClusterSize:    pulumi.String(\"2X-Small\"),\n\t\t\tMaxNumClusters: pulumi.Int(1),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthing, err := databricks.NewSqlTable(ctx, \"thing\", \u0026databricks.SqlTableArgs{\n\t\t\tName:        pulumi.String(\"quickstart_table\"),\n\t\t\tCatalogName: pulumi.Any(sandbox.Name),\n\t\t\tSchemaName:  pulumi.Any(things.Name),\n\t\t\tTableType:   pulumi.String(\"MANAGED\"),\n\t\t\tWarehouseId: this.ID(),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName: pulumi.String(\"id\"),\n\t\t\t\t\tType: pulumi.String(\"int\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName:    pulumi.String(\"name\"),\n\t\t\t\t\tType:    pulumi.String(\"string\"),\n\t\t\t\t\tComment: pulumi.String(\"name of thing\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tComment: pulumi.String(\"this table is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tinvokeFormat, err := std.Format(ctx, \u0026std.FormatArgs{\n\t\t\tInput: \"SELECT name FROM %s WHERE id == 1\",\n\t\t\tArgs: pulumi.StringArray{\n\t\t\t\tthing.ID(),\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlTable(ctx, \"thing_view\", \u0026databricks.SqlTableArgs{\n\t\t\tName:           pulumi.String(\"quickstart_table_view\"),\n\t\t\tCatalogName:    pulumi.Any(sandbox.Name),\n\t\t\tSchemaName:     pulumi.Any(things.Name),\n\t\t\tTableType:      pulumi.String(\"VIEW\"),\n\t\t\tWarehouseId:    this.ID(),\n\t\t\tViewDefinition: pulumi.String(invokeFormat.Result),\n\t\t\tComment:        pulumi.String(\"this view is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlEndpoint;\nimport com.pulumi.databricks.SqlEndpointArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport com.pulumi.std.StdFunctions;\nimport com.pulumi.std.inputs.FormatArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new SqlEndpoint(\"this\", SqlEndpointArgs.builder()\n            .name(\"endpoint\")\n            .clusterSize(\"2X-Small\")\n            .maxNumClusters(1)\n            .build());\n\n        var thing = new SqlTable(\"thing\", SqlTableArgs.builder()\n            .name(\"quickstart_table\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"MANAGED\")\n            .warehouseId(this_.id())\n            .columns(            \n                SqlTableColumnArgs.builder()\n                    .name(\"id\")\n                    .type(\"int\")\n                   
 .build(),\n                SqlTableColumnArgs.builder()\n                    .name(\"name\")\n                    .type(\"string\")\n                    .comment(\"name of thing\")\n                    .build())\n            .comment(\"this table is managed by terraform\")\n            .build());\n\n        var thingView = new SqlTable(\"thingView\", SqlTableArgs.builder()\n            .name(\"quickstart_table_view\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"VIEW\")\n            .warehouseId(this_.id())\n            .viewDefinition(StdFunctions.format(FormatArgs.builder()\n                .input(\"SELECT name FROM %s WHERE id == 1\")\n                .args(thing.id())\n                .build()).result())\n            .comment(\"this view is managed by terraform\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SqlEndpoint\n    properties:\n      name: endpoint\n      clusterSize: 2X-Small\n      maxNumClusters: 1\n  thing:\n    type: databricks:SqlTable\n    properties:\n      name: quickstart_table\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: MANAGED\n      warehouseId: ${this.id}\n      columns:\n        - name: id\n          type: int\n        - name: name\n          type: string\n          comment: name of thing\n      comment: this table is managed by terraform\n  thingView:\n    type: databricks:SqlTable\n    name: thing_view\n    properties:\n      name: quickstart_table_view\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: VIEW\n      warehouseId: ${this.id}\n      viewDefinition:\n        fn::invoke:\n          function: std:format\n          arguments:\n            input: SELECT name FROM %s WHERE id == 1\n            args:\n              - ${thing.id}\n          return: result\n      comment: this view is managed by terraform\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Use an Identity Column\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.id,\n    name: \"things\",\n    comment: \"this database is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst thing = new databricks.SqlTable(\"thing\", {\n    name: \"identity_table\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    tableType: \"MANAGED\",\n    columns: [\n        {\n            name: \"id\",\n            type: \"bigint\",\n            identity: \"default\",\n        },\n        {\n            name: \"name\",\n            type: \"string\",\n            comment: \"name of thing\",\n        },\n    ],\n    comment: \"this table is managed by terraform\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.id,\n    name=\"things\",\n    comment=\"this database is managed by terraform\",\n    properties={\n        
\"kind\": \"various\",\n    })\nthing = databricks.SqlTable(\"thing\",\n    name=\"identity_table\",\n    catalog_name=sandbox.name,\n    schema_name=things.name,\n    table_type=\"MANAGED\",\n    columns=[\n        {\n            \"name\": \"id\",\n            \"type\": \"bigint\",\n            \"identity\": \"default\",\n        },\n        {\n            \"name\": \"name\",\n            \"type\": \"string\",\n            \"comment\": \"name of thing\",\n        },\n    ],\n    comment=\"this table is managed by terraform\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Id,\n        Name = \"things\",\n        Comment = \"this database is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var thing = new Databricks.SqlTable(\"thing\", new()\n    {\n        Name = \"identity_table\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"MANAGED\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"id\",\n                Type = \"bigint\",\n                Identity = \"default\",\n            },\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"name\",\n                Type = \"string\",\n                Comment = \"name of thing\",\n            },\n        },\n        Comment = \"this table is managed by terraform\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.ID(),\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this database is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlTable(ctx, \"thing\", \u0026databricks.SqlTableArgs{\n\t\t\tName:        pulumi.String(\"identity_table\"),\n\t\t\tCatalogName: sandbox.Name,\n\t\t\tSchemaName:  things.Name,\n\t\t\tTableType:   pulumi.String(\"MANAGED\"),\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName:     pulumi.String(\"id\"),\n\t\t\t\t\tType:     pulumi.String(\"bigint\"),\n\t\t\t\t\tIdentity: pulumi.String(\"default\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName:    
pulumi.String(\"name\"),\n\t\t\t\t\tType:    pulumi.String(\"string\"),\n\t\t\t\t\tComment: pulumi.String(\"name of thing\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tComment: pulumi.String(\"this table is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.id())\n            .name(\"things\")\n            .comment(\"this database is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var thing = new SqlTable(\"thing\", SqlTableArgs.builder()\n            .name(\"identity_table\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"MANAGED\")\n            .columns(            \n                SqlTableColumnArgs.builder()\n                    .name(\"id\")\n                    .type(\"bigint\")\n                    .identity(\"default\")\n                    .build(),\n                SqlTableColumnArgs.builder()\n                    .name(\"name\")\n                    .type(\"string\")\n                    .comment(\"name of thing\")\n                    .build())\n            .comment(\"this table is managed by terraform\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.id}\n      name: things\n      comment: this database is managed by terraform\n      properties:\n        kind: various\n  thing:\n    type: databricks:SqlTable\n    properties:\n      name: identity_table\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: MANAGED\n      columns:\n        - name: id\n          type: bigint\n          identity: default\n        - name: name\n          type: string\n          comment: name of thing\n      comment: this table is managed by terraform\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Enable automatic clustering\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst thing = new databricks.SqlTable(\"thing\", {\n    name: \"auto_cluster_table\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    tableType: 
\"MANAGED\",\n    clusterKeys: [\"AUTO\"],\n    columns: [{\n        name: \"name\",\n        type: \"string\",\n        comment: \"name of thing\",\n    }],\n    comment: \"this table is managed by terraform\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthing = databricks.SqlTable(\"thing\",\n    name=\"auto_cluster_table\",\n    catalog_name=sandbox[\"name\"],\n    schema_name=things[\"name\"],\n    table_type=\"MANAGED\",\n    cluster_keys=[\"AUTO\"],\n    columns=[{\n        \"name\": \"name\",\n        \"type\": \"string\",\n        \"comment\": \"name of thing\",\n    }],\n    comment=\"this table is managed by terraform\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var thing = new Databricks.SqlTable(\"thing\", new()\n    {\n        Name = \"auto_cluster_table\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        TableType = \"MANAGED\",\n        ClusterKeys = new[]\n        {\n            \"AUTO\",\n        },\n        Columns = new[]\n        {\n            new Databricks.Inputs.SqlTableColumnArgs\n            {\n                Name = \"name\",\n                Type = \"string\",\n                Comment = \"name of thing\",\n            },\n        },\n        Comment = \"this table is managed by terraform\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlTable(ctx, \"thing\", \u0026databricks.SqlTableArgs{\n\t\t\tName:        pulumi.String(\"auto_cluster_table\"),\n\t\t\tCatalogName: pulumi.Any(sandbox.Name),\n\t\t\tSchemaName:  pulumi.Any(things.Name),\n\t\t\tTableType:   pulumi.String(\"MANAGED\"),\n\t\t\tClusterKeys: pulumi.StringArray{\n\t\t\t\tpulumi.String(\"AUTO\"),\n\t\t\t},\n\t\t\tColumns: databricks.SqlTableColumnArray{\n\t\t\t\t\u0026databricks.SqlTableColumnArgs{\n\t\t\t\t\tName:    pulumi.String(\"name\"),\n\t\t\t\t\tType:    pulumi.String(\"string\"),\n\t\t\t\t\tComment: pulumi.String(\"name of thing\"),\n\t\t\t\t},\n\t\t\t},\n\t\t\tComment: pulumi.String(\"this table is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlTable;\nimport com.pulumi.databricks.SqlTableArgs;\nimport com.pulumi.databricks.inputs.SqlTableColumnArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var thing = new SqlTable(\"thing\", SqlTableArgs.builder()\n            .name(\"auto_cluster_table\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .tableType(\"MANAGED\")\n            .clusterKeys(\"AUTO\")\n            .columns(SqlTableColumnArgs.builder()\n                .name(\"name\")\n                .type(\"string\")\n                .comment(\"name of thing\")\n                .build())\n            .comment(\"this table is managed by 
terraform\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  thing:\n    type: databricks:SqlTable\n    properties:\n      name: auto_cluster_table\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      tableType: MANAGED\n      clusterKeys:\n        - AUTO\n      columns:\n        - name: name\n          type: string\n          comment: name of thing\n      comment: this table is managed by terraform\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Migration from \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e\n\nThe \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e resource has been deprecated in favor of \u003cspan pulumi-lang-nodejs=\"`databricks.SqlTable`\" pulumi-lang-dotnet=\"`databricks.SqlTable`\" pulumi-lang-go=\"`SqlTable`\" pulumi-lang-python=\"`SqlTable`\" pulumi-lang-yaml=\"`databricks.SqlTable`\" pulumi-lang-java=\"`databricks.SqlTable`\"\u003e`databricks.SqlTable`\u003c/span\u003e. To migrate from \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`databricks.SqlTable`\" pulumi-lang-dotnet=\"`databricks.SqlTable`\" pulumi-lang-go=\"`SqlTable`\" pulumi-lang-python=\"`SqlTable`\" pulumi-lang-yaml=\"`databricks.SqlTable`\" pulumi-lang-java=\"`databricks.SqlTable`\"\u003e`databricks.SqlTable`\u003c/span\u003e:\n\n1. Define a \u003cspan pulumi-lang-nodejs=\"`databricks.SqlTable`\" pulumi-lang-dotnet=\"`databricks.SqlTable`\" pulumi-lang-go=\"`SqlTable`\" pulumi-lang-python=\"`SqlTable`\" pulumi-lang-yaml=\"`databricks.SqlTable`\" pulumi-lang-java=\"`databricks.SqlTable`\"\u003e`databricks.SqlTable`\u003c/span\u003e resource with arguments corresponding to \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e.\n2. Add a \u003cspan pulumi-lang-nodejs=\"`removed`\" pulumi-lang-dotnet=\"`Removed`\" pulumi-lang-go=\"`removed`\" pulumi-lang-python=\"`removed`\" pulumi-lang-yaml=\"`removed`\" pulumi-lang-java=\"`removed`\"\u003e`removed`\u003c/span\u003e block to remove the \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e resource without deleting the existing table by using the \u003cspan pulumi-lang-nodejs=\"`lifecycle`\" pulumi-lang-dotnet=\"`Lifecycle`\" pulumi-lang-go=\"`lifecycle`\" pulumi-lang-python=\"`lifecycle`\" pulumi-lang-yaml=\"`lifecycle`\" pulumi-lang-java=\"`lifecycle`\"\u003e`lifecycle`\u003c/span\u003e block. 
If you're using Pulumi version below v1.7.0, you will need to use the `terraform state rm` command instead.\n3. Add an \u003cspan pulumi-lang-nodejs=\"`import`\" pulumi-lang-dotnet=\"`Import`\" pulumi-lang-go=\"`import`\" pulumi-lang-python=\"`import`\" pulumi-lang-yaml=\"`import`\" pulumi-lang-java=\"`import`\"\u003e`import`\u003c/span\u003e block to add the \u003cspan pulumi-lang-nodejs=\"`databricks.SqlTable`\" pulumi-lang-dotnet=\"`databricks.SqlTable`\" pulumi-lang-go=\"`SqlTable`\" pulumi-lang-python=\"`SqlTable`\" pulumi-lang-yaml=\"`databricks.SqlTable`\" pulumi-lang-java=\"`databricks.SqlTable`\"\u003e`databricks.SqlTable`\u003c/span\u003e resource, corresponding to the existing table. If you're using Pulumi version below v1.5.0, you will need to use `pulumi import` command instead.\n\nFor example, suppose we have the following \u003cspan pulumi-lang-nodejs=\"`databricks.Table`\" pulumi-lang-dotnet=\"`databricks.Table`\" pulumi-lang-go=\"`Table`\" pulumi-lang-python=\"`Table`\" pulumi-lang-yaml=\"`databricks.Table`\" pulumi-lang-java=\"`databricks.Table`\"\u003e`databricks.Table`\u003c/span\u003e resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.Table(\"this\", {\n    catalogName: \"catalog\",\n    schemaName: \"schema\",\n    name: \"table\",\n    tableType: \"MANAGED\",\n    dataSourceFormat: \"DELTA\",\n    columns: [{\n        name: \"col1\",\n        typeName: \"STRING\",\n        typeJson: \"{\\\"type\\\":\\\"STRING\\\"}\",\n        comment: \"comment\",\n        nullable: true,\n    }],\n    comment: \"comment\",\n    properties: {\n        key: \"value\",\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.Table(\"this\",\n    catalog_name=\"catalog\",\n    schema_name=\"schema\",\n    name=\"table\",\n    table_type=\"MANAGED\",\n    data_source_format=\"DELTA\",\n    columns=[{\n        \"name\": \"col1\",\n        \"type_name\": \"STRING\",\n        \"type_json\": \"{\\\"type\\\":\\\"STRING\\\"}\",\n        \"comment\": \"comment\",\n        \"nullable\": True,\n    }],\n    comment=\"comment\",\n    properties={\n        \"key\": \"value\",\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.Table(\"this\", new()\n    {\n        CatalogName = \"catalog\",\n        SchemaName = \"schema\",\n        Name = \"table\",\n        TableType = \"MANAGED\",\n        DataSourceFormat = \"DELTA\",\n        Columns = new[]\n        {\n            new Databricks.Inputs.TableColumnArgs\n            {\n                Name = \"col1\",\n                TypeName = \"STRING\",\n                TypeJson = \"{\\\"type\\\":\\\"STRING\\\"}\",\n                Comment = \"comment\",\n                Nullable = true,\n            },\n        },\n        Comment = \"comment\",\n        Properties = \n        {\n            { \"key\", \"value\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewTable(ctx, \"this\", \u0026databricks.TableArgs{\n\t\t\tCatalogName:      pulumi.String(\"catalog\"),\n\t\t\tSchemaName:  
     pulumi.String(\"schema\"),\n\t\t\tName:             pulumi.String(\"table\"),\n\t\t\tTableType:        pulumi.String(\"MANAGED\"),\n\t\t\tDataSourceFormat: pulumi.String(\"DELTA\"),\n\t\t\tColumns: databricks.TableColumnArray{\n\t\t\t\t\u0026databricks.TableColumnArgs{\n\t\t\t\t\tName:     pulumi.String(\"col1\"),\n\t\t\t\t\tTypeName: pulumi.String(\"STRING\"),\n\t\t\t\t\tTypeJson: pulumi.String(\"{\\\"type\\\":\\\"STRING\\\"}\"),\n\t\t\t\t\tComment:  pulumi.String(\"comment\"),\n\t\t\t\t\tNullable: pulumi.Bool(true),\n\t\t\t\t},\n\t\t\t},\n\t\t\tComment: pulumi.String(\"comment\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"key\": pulumi.String(\"value\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Table;\nimport com.pulumi.databricks.TableArgs;\nimport com.pulumi.databricks.inputs.TableColumnArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Table(\"this\", TableArgs.builder()\n            .catalogName(\"catalog\")\n            .schemaName(\"schema\")\n            .name(\"table\")\n            .tableType(\"MANAGED\")\n            .dataSourceFormat(\"DELTA\")\n            .columns(TableColumnArgs.builder()\n                .name(\"col1\")\n                .typeName(\"STRING\")\n                .typeJson(\"{\\\"type\\\":\\\"STRING\\\"}\")\n                .comment(\"comment\")\n                .nullable(true)\n                .build())\n            .comment(\"comment\")\n            .properties(Map.of(\"key\", \"value\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:Table\n    properties:\n      catalogName: catalog\n      schemaName: schema\n      name: table\n      tableType: MANAGED\n      dataSourceFormat: DELTA\n      columns:\n        - name: col1\n          typeName: STRING\n          typeJson: '{\"type\":\"STRING\"}'\n          comment: comment\n          nullable: true\n      comment: comment\n      properties:\n        key: value\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nThe migration would look like this:\n\n","properties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces the creation of a new resource.\n"},"clusterId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. If a\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eis specified, it will be used to execute SQL commands to manage this table. If empty, a cluster will be created automatically with the name `terraform-sql-table`. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e.\n"},"clusterKeys":{"type":"array","items":{"type":"string"},"description":"a subset of columns to liquid cluster the table by. For automatic clustering, set \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e to `[\"AUTO\"]`. To turn off clustering, set it to `[\"NONE\"]`. Conflicts with \u003cspan pulumi-lang-nodejs=\"`partitions`\" pulumi-lang-dotnet=\"`Partitions`\" pulumi-lang-go=\"`partitions`\" pulumi-lang-python=\"`partitions`\" pulumi-lang-yaml=\"`partitions`\" pulumi-lang-java=\"`partitions`\"\u003e`partitions`\u003c/span\u003e.\n"},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlTableColumn:SqlTableColumn"}},"comment":{"type":"string","description":"User-supplied free-form text. Changing the comment is not currently supported on the `VIEW` table type.\n"},"dataSourceFormat":{"type":"string","description":"External tables are supported in multiple data source formats. The string constants identifying these formats are `DELTA`, `CSV`, `JSON`, `AVRO`, `PARQUET`, `ORC`, and `TEXT`. Change forces the creation of a new resource. Not supported for `MANAGED` tables or `VIEW`.\n"},"effectiveProperties":{"type":"object","additionalProperties":{"type":"string"}},"name":{"type":"string","description":"Name of table relative to parent catalog and schema. Change forces the creation of a new resource.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"Map of user defined table options. Change forces creation of a new resource.\n"},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the table owner.\n"},"partitions":{"type":"array","items":{"type":"string"},"description":"a subset of columns to partition the table by. Change forces the creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e.\n"},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of table properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlTableProviderConfig:SqlTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces the creation of a new resource.\n"},"storageCredentialName":{"type":"string","description":"For EXTERNAL Tables only: the name of storage credential to use. 
Change forces the creation of a new resource.\n"},"storageLocation":{"type":"string","description":"URL of storage location for Table data (required for EXTERNAL Tables).  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).  Not supported for `VIEW` or `MANAGED` table_type.\n"},"tableId":{"type":"string","description":"The unique identifier of the table.\n"},"tableType":{"type":"string","description":"Distinguishes a view vs. managed/external Table. `MANAGED`, `EXTERNAL` or `VIEW`. Change forces the creation of a new resource.\n"},"viewDefinition":{"type":"string","description":"SQL text defining the view (for \u003cspan pulumi-lang-nodejs=\"`tableType \" pulumi-lang-dotnet=\"`TableType \" pulumi-lang-go=\"`tableType \" pulumi-lang-python=\"`table_type \" pulumi-lang-yaml=\"`tableType \" pulumi-lang-java=\"`tableType \"\u003e`table_type \u003c/span\u003e== \"VIEW\"`). Not supported for `MANAGED` or `EXTERNAL` table_type.\n"},"warehouseId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. If a \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e is specified, that SQL warehouse will be used to execute SQL commands to manage this table. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e.\n"}},"required":["catalogName","clusterId","columns","effectiveProperties","name","owner","partitions","schemaName","tableId","tableType"],"inputProperties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"clusterId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. If a\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eis specified, it will be used to execute SQL commands to manage this table. If empty, a cluster will be created automatically with the name `terraform-sql-table`. Conflicts with \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e.\n"},"clusterKeys":{"type":"array","items":{"type":"string"},"description":"a subset of columns to liquid cluster the table by. For automatic clustering, set \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e to `[\"AUTO\"]`. To turn off clustering, set it to `[\"NONE\"]`. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`partitions`\" pulumi-lang-dotnet=\"`Partitions`\" pulumi-lang-go=\"`partitions`\" pulumi-lang-python=\"`partitions`\" pulumi-lang-yaml=\"`partitions`\" pulumi-lang-java=\"`partitions`\"\u003e`partitions`\u003c/span\u003e.\n"},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlTableColumn:SqlTableColumn"}},"comment":{"type":"string","description":"User-supplied free-form text. Changing the comment is not currently supported on the `VIEW` table type.\n"},"dataSourceFormat":{"type":"string","description":"External tables are supported in multiple data source formats. The string constants identifying these formats are `DELTA`, `CSV`, `JSON`, `AVRO`, `PARQUET`, `ORC`, and `TEXT`. Change forces the creation of a new resource. Not supported for `MANAGED` tables or `VIEW`.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"Name of table relative to parent catalog and schema. Change forces the creation of a new resource.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"Map of user defined table options. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the table owner.\n"},"partitions":{"type":"array","items":{"type":"string"},"description":"a subset of columns to partition the table by. Change forces the creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e.\n","willReplaceOnChanges":true},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of table properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlTableProviderConfig:SqlTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"storageCredentialName":{"type":"string","description":"For EXTERNAL Tables only: the name of storage credential to use. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"URL of storage location for Table data (required for EXTERNAL Tables).  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).  Not supported for `VIEW` or `MANAGED` table_type.\n"},"tableType":{"type":"string","description":"Distinguishes a view vs. managed/external Table. `MANAGED`, `EXTERNAL` or `VIEW`. 
Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"viewDefinition":{"type":"string","description":"SQL text defining the view (for \u003cspan pulumi-lang-nodejs=\"`tableType \" pulumi-lang-dotnet=\"`TableType \" pulumi-lang-go=\"`tableType \" pulumi-lang-python=\"`table_type \" pulumi-lang-yaml=\"`tableType \" pulumi-lang-java=\"`tableType \"\u003e`table_type \u003c/span\u003e== \"VIEW\"`). Not supported for `MANAGED` or `EXTERNAL` table_type.\n"},"warehouseId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. If a \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e is specified, that SQL warehouse will be used to execute SQL commands to manage this table. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e.\n"}},"requiredInputs":["catalogName","schemaName","tableType"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlTable resources.\n","properties":{"catalogName":{"type":"string","description":"Name of parent catalog. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"clusterId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. If a\u003cspan pulumi-lang-nodejs=\" clusterId \" pulumi-lang-dotnet=\" ClusterId \" pulumi-lang-go=\" clusterId \" pulumi-lang-python=\" cluster_id \" pulumi-lang-yaml=\" clusterId \" pulumi-lang-java=\" clusterId \"\u003e cluster_id \u003c/span\u003eis specified, it will be used to execute SQL commands to manage this table. If empty, a cluster will be created automatically with the name `terraform-sql-table`. Conflicts with \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e.\n"},"clusterKeys":{"type":"array","items":{"type":"string"},"description":"a subset of columns to liquid cluster the table by. For automatic clustering, set \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e to `[\"AUTO\"]`. To turn off clustering, set it to `[\"NONE\"]`. Conflicts with \u003cspan pulumi-lang-nodejs=\"`partitions`\" pulumi-lang-dotnet=\"`Partitions`\" pulumi-lang-go=\"`partitions`\" pulumi-lang-python=\"`partitions`\" pulumi-lang-yaml=\"`partitions`\" pulumi-lang-java=\"`partitions`\"\u003e`partitions`\u003c/span\u003e.\n"},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlTableColumn:SqlTableColumn"}},"comment":{"type":"string","description":"User-supplied free-form text. Changing the comment is not currently supported on the `VIEW` table type.\n"},"dataSourceFormat":{"type":"string","description":"External tables are supported in multiple data source formats. 
The string constants identifying these formats are `DELTA`, `CSV`, `JSON`, `AVRO`, `PARQUET`, `ORC`, and `TEXT`. Change forces the creation of a new resource. Not supported for `MANAGED` tables or `VIEW`.\n","willReplaceOnChanges":true},"effectiveProperties":{"type":"object","additionalProperties":{"type":"string"}},"name":{"type":"string","description":"Name of table relative to parent catalog and schema. Change forces the creation of a new resource.\n"},"options":{"type":"object","additionalProperties":{"type":"string"},"description":"Map of user defined table options. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"User name/group name/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the table owner.\n"},"partitions":{"type":"array","items":{"type":"string"},"description":"a subset of columns to partition the table by. Change forces the creation of a new resource. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterKeys`\" pulumi-lang-dotnet=\"`ClusterKeys`\" pulumi-lang-go=\"`clusterKeys`\" pulumi-lang-python=\"`cluster_keys`\" pulumi-lang-yaml=\"`clusterKeys`\" pulumi-lang-java=\"`clusterKeys`\"\u003e`cluster_keys`\u003c/span\u003e.\n","willReplaceOnChanges":true},"properties":{"type":"object","additionalProperties":{"type":"string"},"description":"A map of table properties.\n"},"providerConfig":{"$ref":"#/types/databricks:index/SqlTableProviderConfig:SqlTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"storageCredentialName":{"type":"string","description":"For EXTERNAL Tables only: the name of storage credential to use. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"URL of storage location for Table data (required for EXTERNAL Tables).  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.).  Not supported for `VIEW` or `MANAGED` table_type.\n"},"tableId":{"type":"string","description":"The unique identifier of the table.\n"},"tableType":{"type":"string","description":"Distinguishes a view vs. managed/external Table. `MANAGED`, `EXTERNAL` or `VIEW`. Change forces the creation of a new resource.\n","willReplaceOnChanges":true},"viewDefinition":{"type":"string","description":"SQL text defining the view (for \u003cspan pulumi-lang-nodejs=\"`tableType \" pulumi-lang-dotnet=\"`TableType \" pulumi-lang-go=\"`tableType \" pulumi-lang-python=\"`table_type \" pulumi-lang-yaml=\"`tableType \" pulumi-lang-java=\"`tableType \"\u003e`table_type \u003c/span\u003e== \"VIEW\"`). Not supported for `MANAGED` or `EXTERNAL` table_type.\n"},"warehouseId":{"type":"string","description":"All table CRUD operations must be executed on a running cluster or SQL warehouse. 
If a \u003cspan pulumi-lang-nodejs=\"`warehouseId`\" pulumi-lang-dotnet=\"`WarehouseId`\" pulumi-lang-go=\"`warehouseId`\" pulumi-lang-python=\"`warehouse_id`\" pulumi-lang-yaml=\"`warehouseId`\" pulumi-lang-java=\"`warehouseId`\"\u003e`warehouse_id`\u003c/span\u003e is specified, that SQL warehouse will be used to execute SQL commands to manage this table. Conflicts with \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/sqlVisualization:SqlVisualization":{"description":"!\u003e This resource is deprecated and will be removed in future.\n\n\u003e Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.Dashboard \" pulumi-lang-dotnet=\" databricks.Dashboard \" pulumi-lang-go=\" Dashboard \" pulumi-lang-python=\" Dashboard \" pulumi-lang-yaml=\" databricks.Dashboard \" pulumi-lang-java=\" databricks.Dashboard \"\u003e databricks.Dashboard \u003c/span\u003eto author new AI/BI dashboards using the latest tooling\n\nTo manage [SQLA resources](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n\u003e documentation for this resource is a work in progress.\n\nA visualization is always tied to a query. 
Every query may have one or more visualizations.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst q1v1 = new databricks.SqlVisualization(\"q1v1\", {\n    queryId: q1.id,\n    type: \"table\",\n    name: \"My Table\",\n    description: \"Some Description\",\n    options: JSON.stringify({\n        itemsPerPage: 25,\n        columns: [\n            {\n                name: \"p1\",\n                type: \"string\",\n                title: \"Parameter 1\",\n                displayAs: \"string\",\n            },\n            {\n                name: \"p2\",\n                type: \"string\",\n                title: \"Parameter 2\",\n                displayAs: \"link\",\n                highlightLinks: true,\n            },\n        ],\n    }),\n});\n```\n```python\nimport pulumi\nimport json\nimport pulumi_databricks as databricks\n\nq1v1 = databricks.SqlVisualization(\"q1v1\",\n    query_id=q1[\"id\"],\n    type=\"table\",\n    name=\"My Table\",\n    description=\"Some Description\",\n    options=json.dumps({\n        \"itemsPerPage\": 25,\n        \"columns\": [\n            {\n                \"name\": \"p1\",\n                \"type\": \"string\",\n                \"title\": \"Parameter 1\",\n                \"displayAs\": \"string\",\n            },\n            {\n                \"name\": \"p2\",\n                \"type\": \"string\",\n                \"title\": \"Parameter 2\",\n                \"displayAs\": \"link\",\n                \"highlightLinks\": True,\n            },\n        ],\n    }))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text.Json;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var q1v1 = new Databricks.SqlVisualization(\"q1v1\", new()\n    {\n        QueryId = q1.Id,\n        Type = \"table\",\n        Name = \"My Table\",\n        Description = \"Some Description\",\n        Options = JsonSerializer.Serialize(new Dictionary\u003cstring, object?\u003e\n        {\n            [\"itemsPerPage\"] = 25,\n            [\"columns\"] = new[]\n            {\n                new Dictionary\u003cstring, object?\u003e\n                {\n                    [\"name\"] = \"p1\",\n                    [\"type\"] = \"string\",\n                    [\"title\"] = \"Parameter 1\",\n                    [\"displayAs\"] = \"string\",\n                },\n                new Dictionary\u003cstring, object?\u003e\n                {\n                    [\"name\"] = \"p2\",\n                    [\"type\"] = \"string\",\n                    [\"title\"] = \"Parameter 2\",\n                    [\"displayAs\"] = \"link\",\n                    [\"highlightLinks\"] = true,\n                },\n            },\n        }),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"encoding/json\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttmpJSON0, err := json.Marshal(map[string]interface{}{\n\t\t\t\"itemsPerPage\": 25,\n\t\t\t\"columns\": []map[string]interface{}{\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"name\":      \"p1\",\n\t\t\t\t\t\"type\":      \"string\",\n\t\t\t\t\t\"title\":     \"Parameter 1\",\n\t\t\t\t\t\"displayAs\": \"string\",\n\t\t\t\t},\n\t\t\t\tmap[string]interface{}{\n\t\t\t\t\t\"name\":          
 \"p2\",\n\t\t\t\t\t\"type\":           \"string\",\n\t\t\t\t\t\"title\":          \"Parameter 2\",\n\t\t\t\t\t\"displayAs\":      \"link\",\n\t\t\t\t\t\"highlightLinks\": true,\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tjson0 := string(tmpJSON0)\n\t\t_, err = databricks.NewSqlVisualization(ctx, \"q1v1\", \u0026databricks.SqlVisualizationArgs{\n\t\t\tQueryId:     pulumi.Any(q1.Id),\n\t\t\tType:        pulumi.String(\"table\"),\n\t\t\tName:        pulumi.String(\"My Table\"),\n\t\t\tDescription: pulumi.String(\"Some Description\"),\n\t\t\tOptions:     pulumi.String(json0),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlVisualization;\nimport com.pulumi.databricks.SqlVisualizationArgs;\nimport static com.pulumi.codegen.internal.Serialization.*;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var q1v1 = new SqlVisualization(\"q1v1\", SqlVisualizationArgs.builder()\n            .queryId(q1.id())\n            .type(\"table\")\n            .name(\"My Table\")\n            .description(\"Some Description\")\n            .options(serializeJson(\n                jsonObject(\n                    jsonProperty(\"itemsPerPage\", 25),\n                    jsonProperty(\"columns\", jsonArray(\n                        jsonObject(\n                            jsonProperty(\"name\", \"p1\"),\n                            jsonProperty(\"type\", \"string\"),\n                            jsonProperty(\"title\", \"Parameter 1\"),\n                            jsonProperty(\"displayAs\", \"string\")\n                        ), \n                        jsonObject(\n                            jsonProperty(\"name\", \"p2\"),\n                            jsonProperty(\"type\", \"string\"),\n                            jsonProperty(\"title\", \"Parameter 2\"),\n                            jsonProperty(\"displayAs\", \"link\"),\n                            jsonProperty(\"highlightLinks\", true)\n                        )\n                    ))\n                )))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  q1v1:\n    type: databricks:SqlVisualization\n    properties:\n      queryId: ${q1.id}\n      type: table\n      name: My Table\n      description: Some Description\n      options:\n        fn::toJSON:\n          itemsPerPage: 25\n          columns:\n            - name: p1\n              type: string\n              title: Parameter 1\n              displayAs: string\n            - name: p2\n              type: string\n              title: Parameter 2\n              displayAs: link\n              highlightLinks: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Separating `visualization definition` from IAC configuration\n\nSince \u003cspan pulumi-lang-nodejs=\"`options`\" pulumi-lang-dotnet=\"`Options`\" pulumi-lang-go=\"`options`\" pulumi-lang-python=\"`options`\" pulumi-lang-yaml=\"`options`\" pulumi-lang-java=\"`options`\"\u003e`options`\u003c/span\u003e field contains the full JSON encoded string definition of how to render a visualization for the backend API - `sql/api/visualizations`, 
they can get quite verbose.\n\nIf you have lots of visualizations to declare, it might be cleaner to separate the \u003cspan pulumi-lang-nodejs=\"`options`\" pulumi-lang-dotnet=\"`Options`\" pulumi-lang-go=\"`options`\" pulumi-lang-python=\"`options`\" pulumi-lang-yaml=\"`options`\" pulumi-lang-java=\"`options`\"\u003e`options`\u003c/span\u003e field and store them as separate `.json` files to be referenced.\n\n### Example\n\n- directory tree\n\n    ```bash\n    .\n    ├── q1vx.tf\n    └── visualizations\n        ├── q1v1.json\n        └── q1v2.json\n```\n\n- resource definitions\n\n    ","properties":{"description":{"type":"string"},"name":{"type":"string"},"options":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SqlVisualizationProviderConfig:SqlVisualizationProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryId":{"type":"string"},"queryPlan":{"type":"string"},"type":{"type":"string"},"visualizationId":{"type":"string"}},"required":["name","options","queryId","type","visualizationId"],"inputProperties":{"description":{"type":"string"},"name":{"type":"string"},"options":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SqlVisualizationProviderConfig:SqlVisualizationProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"queryId":{"type":"string","willReplaceOnChanges":true},"queryPlan":{"type":"string"},"type":{"type":"string"},"visualizationId":{"type":"string","willReplaceOnChanges":true}},"requiredInputs":["options","queryId","type"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlVisualization resources.\n","properties":{"description":{"type":"string"},"name":{"type":"string"},"options":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SqlVisualizationProviderConfig:SqlVisualizationProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"queryId":{"type":"string","willReplaceOnChanges":true},"queryPlan":{"type":"string"},"type":{"type":"string"},"visualizationId":{"type":"string","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/sqlWidget:SqlWidget":{"description":"!\u003e This resource is deprecated and will be removed in future.\n\n\u003e Please switch to\u003cspan pulumi-lang-nodejs=\" databricks.Dashboard \" pulumi-lang-dotnet=\" databricks.Dashboard \" pulumi-lang-go=\" Dashboard \" pulumi-lang-python=\" Dashboard \" pulumi-lang-yaml=\" databricks.Dashboard \" pulumi-lang-java=\" databricks.Dashboard \"\u003e databricks.Dashboard \u003c/span\u003eto author new AI/BI dashboards using the latest tooling\n\nTo manage [SQL resources](https://docs.databricks.com/sql/get-started/concepts.html) you must have \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e on your\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eor databricks_user.\n\n\u003e documentation for this resource is a work in progress.\n\nA widget is always tied to a Legacy dashboard. Every dashboard may have one or more widgets.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst d1w1 = new databricks.SqlWidget(\"d1w1\", {\n    dashboardId: d1.id,\n    text: \"Hello! I'm a **text widget**!\",\n    position: {\n        sizeX: 3,\n        sizeY: 4,\n        posX: 0,\n        posY: 0,\n    },\n});\nconst d1w2 = new databricks.SqlWidget(\"d1w2\", {\n    dashboardId: d1.id,\n    visualizationId: q1v1.id,\n    position: {\n        sizeX: 3,\n        sizeY: 4,\n        posX: 3,\n        posY: 0,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nd1w1 = databricks.SqlWidget(\"d1w1\",\n    dashboard_id=d1[\"id\"],\n    text=\"Hello! I'm a **text widget**!\",\n    position={\n        \"size_x\": 3,\n        \"size_y\": 4,\n        \"pos_x\": 0,\n        \"pos_y\": 0,\n    })\nd1w2 = databricks.SqlWidget(\"d1w2\",\n    dashboard_id=d1[\"id\"],\n    visualization_id=q1v1[\"id\"],\n    position={\n        \"size_x\": 3,\n        \"size_y\": 4,\n        \"pos_x\": 3,\n        \"pos_y\": 0,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var d1w1 = new Databricks.SqlWidget(\"d1w1\", new()\n    {\n        DashboardId = d1.Id,\n        Text = \"Hello! 
I'm a **text widget**!\",\n        Position = new Databricks.Inputs.SqlWidgetPositionArgs\n        {\n            SizeX = 3,\n            SizeY = 4,\n            PosX = 0,\n            PosY = 0,\n        },\n    });\n\n    var d1w2 = new Databricks.SqlWidget(\"d1w2\", new()\n    {\n        DashboardId = d1.Id,\n        VisualizationId = q1v1.Id,\n        Position = new Databricks.Inputs.SqlWidgetPositionArgs\n        {\n            SizeX = 3,\n            SizeY = 4,\n            PosX = 3,\n            PosY = 0,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSqlWidget(ctx, \"d1w1\", \u0026databricks.SqlWidgetArgs{\n\t\t\tDashboardId: pulumi.Any(d1.Id),\n\t\t\tText:        pulumi.String(\"Hello! I'm a **text widget**!\"),\n\t\t\tPosition: \u0026databricks.SqlWidgetPositionArgs{\n\t\t\t\tSizeX: pulumi.Int(3),\n\t\t\t\tSizeY: pulumi.Int(4),\n\t\t\t\tPosX:  pulumi.Int(0),\n\t\t\t\tPosY:  pulumi.Int(0),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewSqlWidget(ctx, \"d1w2\", \u0026databricks.SqlWidgetArgs{\n\t\t\tDashboardId:     pulumi.Any(d1.Id),\n\t\t\tVisualizationId: pulumi.Any(q1v1.Id),\n\t\t\tPosition: \u0026databricks.SqlWidgetPositionArgs{\n\t\t\t\tSizeX: pulumi.Int(3),\n\t\t\t\tSizeY: pulumi.Int(4),\n\t\t\t\tPosX:  pulumi.Int(3),\n\t\t\t\tPosY:  pulumi.Int(0),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SqlWidget;\nimport com.pulumi.databricks.SqlWidgetArgs;\nimport com.pulumi.databricks.inputs.SqlWidgetPositionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var d1w1 = new SqlWidget(\"d1w1\", SqlWidgetArgs.builder()\n            .dashboardId(d1.id())\n            .text(\"Hello! I'm a **text widget**!\")\n            .position(SqlWidgetPositionArgs.builder()\n                .sizeX(3)\n                .sizeY(4)\n                .posX(0)\n                .posY(0)\n                .build())\n            .build());\n\n        var d1w2 = new SqlWidget(\"d1w2\", SqlWidgetArgs.builder()\n            .dashboardId(d1.id())\n            .visualizationId(q1v1.id())\n            .position(SqlWidgetPositionArgs.builder()\n                .sizeX(3)\n                .sizeY(4)\n                .posX(3)\n                .posY(0)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  d1w1:\n    type: databricks:SqlWidget\n    properties:\n      dashboardId: ${d1.id}\n      text: Hello! 
I'm a **text widget**!\n      position:\n        sizeX: 3\n        sizeY: 4\n        posX: 0\n        posY: 0\n  d1w2:\n    type: databricks:SqlWidget\n    properties:\n      dashboardId: ${d1.id}\n      visualizationId: ${q1v1.id}\n      position:\n        sizeX: 3\n        sizeY: 4\n        posX: 3\n        posY: 0\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n\n","properties":{"dashboardId":{"type":"string"},"description":{"type":"string"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlWidgetParameter:SqlWidgetParameter"}},"position":{"$ref":"#/types/databricks:index/SqlWidgetPosition:SqlWidgetPosition"},"providerConfig":{"$ref":"#/types/databricks:index/SqlWidgetProviderConfig:SqlWidgetProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"text":{"type":"string"},"title":{"type":"string"},"visualizationId":{"type":"string"},"widgetId":{"type":"string"}},"required":["dashboardId","widgetId"],"inputProperties":{"dashboardId":{"type":"string","willReplaceOnChanges":true},"description":{"type":"string"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlWidgetParameter:SqlWidgetParameter"}},"position":{"$ref":"#/types/databricks:index/SqlWidgetPosition:SqlWidgetPosition"},"providerConfig":{"$ref":"#/types/databricks:index/SqlWidgetProviderConfig:SqlWidgetProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"text":{"type":"string"},"title":{"type":"string"},"visualizationId":{"type":"string","willReplaceOnChanges":true},"widgetId":{"type":"string","willReplaceOnChanges":true}},"requiredInputs":["dashboardId"],"stateInputs":{"description":"Input properties used for looking up and filtering SqlWidget resources.\n","properties":{"dashboardId":{"type":"string","willReplaceOnChanges":true},"description":{"type":"string"},"parameters":{"type":"array","items":{"$ref":"#/types/databricks:index/SqlWidgetParameter:SqlWidgetParameter"}},"position":{"$ref":"#/types/databricks:index/SqlWidgetPosition:SqlWidgetPosition"},"providerConfig":{"$ref":"#/types/databricks:index/SqlWidgetProviderConfig:SqlWidgetProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"text":{"type":"string"},"title":{"type":"string"},"visualizationId":{"type":"string","willReplaceOnChanges":true},"widgetId":{"type":"string","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/storageCredential:StorageCredential":{"description":"To work with external tables, Unity Catalog introduces two new objects to access and work with external cloud storage:\n\n- \u003cspan pulumi-lang-nodejs=\"`databricks.StorageCredential`\" pulumi-lang-dotnet=\"`databricks.StorageCredential`\" pulumi-lang-go=\"`StorageCredential`\" pulumi-lang-python=\"`StorageCredential`\" pulumi-lang-yaml=\"`databricks.StorageCredential`\" pulumi-lang-java=\"`databricks.StorageCredential`\"\u003e`databricks.StorageCredential`\u003c/span\u003e represents authentication methods to access cloud storage (e.g. an IAM role for Amazon S3 or a service principal/managed identity for Azure Storage). Storage credentials are access-controlled to determine which users can use the credential.\n-\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eare objects that combine a cloud storage path with a Storage Credential that can be used to access the location.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\nOn AWS, the IAM role for a storage credential requires a trust policy. See [documentation](https://docs.databricks.com/en/connect/unity-catalog/cloud-storage/storage-credentials.html#step-1-create-an-iam-role) for more details. 
The data source\u003cspan pulumi-lang-nodejs=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-dotnet=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-go=\" getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-python=\" get_aws_unity_catalog_assume_role_policy \" pulumi-lang-yaml=\" databricks.getAwsUnityCatalogAssumeRolePolicy \" pulumi-lang-java=\" databricks.getAwsUnityCatalogAssumeRolePolicy \"\u003e databricks.getAwsUnityCatalogAssumeRolePolicy \u003c/span\u003ecan be used to create the necessary AWS Unity Catalog assume role policy.\n\n## Example Usage\n\nFor AWS\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: externalDataAccess.name,\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n    comment: \"Managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    storageCredential: external.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"CREATE_EXTERNAL_TABLE\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.StorageCredential(\"external\",\n    name=external_data_access[\"name\"],\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    },\n    comment=\"Managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    storage_credential=external.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"CREATE_EXTERNAL_TABLE\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = externalDataAccess.Name,\n        AwsIamRole = new Databricks.Inputs.StorageCredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n        Comment = \"Managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        StorageCredential = external.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.Any(externalDataAccess.Name),\n\t\t\tAwsIamRole: \u0026databricks.StorageCredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tStorageCredential: external.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: 
pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(externalDataAccess.name())\n            .awsIamRole(StorageCredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .comment(\"Managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .storageCredential(external.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"CREATE_EXTERNAL_TABLE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: ${externalDataAccess.name}\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n      comment: Managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      storageCredential: ${external.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor Azure\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst externalMi = new databricks.StorageCredential(\"external_mi\", {\n    name: \"mi_credential\",\n    azureManagedIdentity: {\n        accessConnectorId: example.id,\n    },\n    comment: \"Managed identity credential managed by TF\",\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    storageCredential: externalMi.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"CREATE_EXTERNAL_TABLE\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal_mi = databricks.StorageCredential(\"external_mi\",\n    name=\"mi_credential\",\n    azure_managed_identity={\n        \"access_connector_id\": example[\"id\"],\n    },\n    comment=\"Managed identity credential managed by TF\")\nexternal_creds = databricks.Grants(\"external_creds\",\n    storage_credential=external_mi.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"CREATE_EXTERNAL_TABLE\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var externalMi = new 
Databricks.StorageCredential(\"external_mi\", new()\n    {\n        Name = \"mi_credential\",\n        AzureManagedIdentity = new Databricks.Inputs.StorageCredentialAzureManagedIdentityArgs\n        {\n            AccessConnectorId = example.Id,\n        },\n        Comment = \"Managed identity credential managed by TF\",\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        StorageCredential = externalMi.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternalMi, err := databricks.NewStorageCredential(ctx, \"external_mi\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.String(\"mi_credential\"),\n\t\t\tAzureManagedIdentity: \u0026databricks.StorageCredentialAzureManagedIdentityArgs{\n\t\t\t\tAccessConnectorId: pulumi.Any(example.Id),\n\t\t\t},\n\t\t\tComment: pulumi.String(\"Managed identity credential managed by TF\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tStorageCredential: externalMi.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAzureManagedIdentityArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var externalMi = new StorageCredential(\"externalMi\", StorageCredentialArgs.builder()\n            .name(\"mi_credential\")\n            .azureManagedIdentity(StorageCredentialAzureManagedIdentityArgs.builder()\n                .accessConnectorId(example.id())\n                .build())\n            .comment(\"Managed identity credential managed by TF\")\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .storageCredential(externalMi.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"CREATE_EXTERNAL_TABLE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  externalMi:\n    type: databricks:StorageCredential\n    name: external_mi\n    properties:\n    
  name: mi_credential\n      azureManagedIdentity:\n        accessConnectorId: ${example.id}\n      comment: Managed identity credential managed by TF\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      storageCredential: ${externalMi.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFor GCP\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst external = new databricks.StorageCredential(\"external\", {\n    name: \"the-creds\",\n    databricksGcpServiceAccount: {},\n});\nconst externalCreds = new databricks.Grants(\"external_creds\", {\n    storageCredential: external.id,\n    grants: [{\n        principal: \"Data Engineers\",\n        privileges: [\"CREATE_EXTERNAL_TABLE\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexternal = databricks.StorageCredential(\"external\",\n    name=\"the-creds\",\n    databricks_gcp_service_account={})\nexternal_creds = databricks.Grants(\"external_creds\",\n    storage_credential=external.id,\n    grants=[{\n        \"principal\": \"Data Engineers\",\n        \"privileges\": [\"CREATE_EXTERNAL_TABLE\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = \"the-creds\",\n        DatabricksGcpServiceAccount = null,\n    });\n\n    var externalCreds = new Databricks.Grants(\"external_creds\", new()\n    {\n        StorageCredential = external.Id,\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"Data Engineers\",\n                Privileges = new[]\n                {\n                    \"CREATE_EXTERNAL_TABLE\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName:                        pulumi.String(\"the-creds\"),\n\t\t\tDatabricksGcpServiceAccount: \u0026databricks.StorageCredentialDatabricksGcpServiceAccountArgs{},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"external_creds\", \u0026databricks.GrantsArgs{\n\t\t\tStorageCredential: external.ID(),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"Data Engineers\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"CREATE_EXTERNAL_TABLE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialDatabricksGcpServiceAccountArgs;\nimport 
com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(\"the-creds\")\n            .databricksGcpServiceAccount(StorageCredentialDatabricksGcpServiceAccountArgs.builder()\n                .build())\n            .build());\n\n        var externalCreds = new Grants(\"externalCreds\", GrantsArgs.builder()\n            .storageCredential(external.id())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"Data Engineers\")\n                .privileges(\"CREATE_EXTERNAL_TABLE\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: the-creds\n      databricksGcpServiceAccount: {}\n  externalCreds:\n    type: databricks:Grants\n    name: external_creds\n    properties:\n      storageCredential: ${external.id}\n      grants:\n        - principal: Data Engineers\n          privileges:\n            - CREATE_EXTERNAL_TABLE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/StorageCredentialAwsIamRole:StorageCredentialAwsIamRole","description":"exposes two additional attributes:\n"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/StorageCredentialAzureManagedIdentity:StorageCredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/StorageCredentialAzureServicePrincipal:StorageCredentialAzureServicePrincipal"},"cloudflareApiToken":{"$ref":"#/types/databricks:index/StorageCredentialCloudflareApiToken:StorageCredentialCloudflareApiToken"},"comment":{"type":"string"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/StorageCredentialDatabricksGcpServiceAccount:StorageCredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete storage credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update storage credential regardless of its dependents.\n"},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/StorageCredentialGcpServiceAccountKey:StorageCredentialGcpServiceAccountKey"},"isolationMode":{"type":"string","description":"Whether the storage credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore. If set for workspace-level, it must match the ID of the metastore assigned to the worspace. 
When changing the metastore assigned to a workspace, this field becomes required.\n"},"name":{"type":"string","description":"Name of Storage Credentials, which must be unique within the databricks_metastore. Change forces creation of a new resource.\n"},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the storage credential owner.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the storage credential is only usable for read operations.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the storage credential.\n"},"storageCredentialId":{"type":"string","description":"Unique ID of storage credential.\n"}},"required":["databricksGcpServiceAccount","isolationMode","metastoreId","name","owner","storageCredentialId"],"inputProperties":{"awsIamRole":{"$ref":"#/types/databricks:index/StorageCredentialAwsIamRole:StorageCredentialAwsIamRole","description":"exposes two additional attributes:\n"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/StorageCredentialAzureManagedIdentity:StorageCredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/StorageCredentialAzureServicePrincipal:StorageCredentialAzureServicePrincipal"},"cloudflareApiToken":{"$ref":"#/types/databricks:index/StorageCredentialCloudflareApiToken:StorageCredentialCloudflareApiToken"},"comment":{"type":"string"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/StorageCredentialDatabricksGcpServiceAccount:StorageCredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete storage credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update storage credential regardless of its dependents.\n"},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/StorageCredentialGcpServiceAccountKey:StorageCredentialGcpServiceAccountKey"},"isolationMode":{"type":"string","description":"Whether the storage credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore. If set for workspace-level, it must match the ID of the metastore assigned to the worspace. When changing the metastore assigned to a workspace, this field becomes required.\n"},"name":{"type":"string","description":"Name of Storage Credentials, which must be unique within the databricks_metastore. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the storage credential owner.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the storage credential is only usable for read operations.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the storage credential.\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering StorageCredential resources.\n","properties":{"awsIamRole":{"$ref":"#/types/databricks:index/StorageCredentialAwsIamRole:StorageCredentialAwsIamRole","description":"exposes two additional attributes:\n"},"azureManagedIdentity":{"$ref":"#/types/databricks:index/StorageCredentialAzureManagedIdentity:StorageCredentialAzureManagedIdentity"},"azureServicePrincipal":{"$ref":"#/types/databricks:index/StorageCredentialAzureServicePrincipal:StorageCredentialAzureServicePrincipal"},"cloudflareApiToken":{"$ref":"#/types/databricks:index/StorageCredentialCloudflareApiToken:StorageCredentialCloudflareApiToken"},"comment":{"type":"string"},"databricksGcpServiceAccount":{"$ref":"#/types/databricks:index/StorageCredentialDatabricksGcpServiceAccount:StorageCredentialDatabricksGcpServiceAccount"},"forceDestroy":{"type":"boolean","description":"Delete storage credential regardless of its dependencies.\n"},"forceUpdate":{"type":"boolean","description":"Update storage credential regardless of its dependents.\n"},"gcpServiceAccountKey":{"$ref":"#/types/databricks:index/StorageCredentialGcpServiceAccountKey:StorageCredentialGcpServiceAccountKey"},"isolationMode":{"type":"string","description":"Whether the storage credential is accessible from all workspaces or a specific set of workspaces. Can be `ISOLATION_MODE_ISOLATED` or `ISOLATION_MODE_OPEN`. Setting the credential to `ISOLATION_MODE_ISOLATED` will automatically allow access from the current workspace.\n\n\u003cspan pulumi-lang-nodejs=\"`awsIamRole`\" pulumi-lang-dotnet=\"`AwsIamRole`\" pulumi-lang-go=\"`awsIamRole`\" pulumi-lang-python=\"`aws_iam_role`\" pulumi-lang-yaml=\"`awsIamRole`\" pulumi-lang-java=\"`awsIamRole`\"\u003e`aws_iam_role`\u003c/span\u003e optional configuration block for credential details for AWS:\n"},"metastoreId":{"type":"string","description":"Unique identifier of the parent Metastore. If set for workspace-level, it must match the ID of the metastore assigned to the worspace. When changing the metastore assigned to a workspace, this field becomes required.\n"},"name":{"type":"string","description":"Name of Storage Credentials, which must be unique within the databricks_metastore. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true},"owner":{"type":"string","description":"Username/groupname/sp\u003cspan pulumi-lang-nodejs=\" applicationId \" pulumi-lang-dotnet=\" ApplicationId \" pulumi-lang-go=\" applicationId \" pulumi-lang-python=\" application_id \" pulumi-lang-yaml=\" applicationId \" pulumi-lang-java=\" applicationId \"\u003e application_id \u003c/span\u003eof the storage credential owner.\n"},"readOnly":{"type":"boolean","description":"Indicates whether the storage credential is only usable for read operations.\n"},"skipValidation":{"type":"boolean","description":"Suppress validation errors if any \u0026 force save the storage credential.\n"},"storageCredentialId":{"type":"string","description":"Unique ID of storage credential.\n"}},"type":"object"}},"databricks:index/systemSchema:SystemSchema":{"description":"Manages system tables enablement. System tables are a Databricks-hosted analytical store of your account's operational data. System tables can be used for historical observability across your account. System tables must be enabled by an account admin.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e Certain system schemas (such as \u003cspan pulumi-lang-nodejs=\"`billing`\" pulumi-lang-dotnet=\"`Billing`\" pulumi-lang-go=\"`billing`\" pulumi-lang-python=\"`billing`\" pulumi-lang-yaml=\"`billing`\" pulumi-lang-java=\"`billing`\"\u003e`billing`\u003c/span\u003e) may be auto-enabled once GA and should not be manually declared in Pulumi configurations.  Certain schemas can't also be disabled completely.\n\n## Example Usage\n\nEnable the system schema \u003cspan pulumi-lang-nodejs=\"`access`\" pulumi-lang-dotnet=\"`Access`\" pulumi-lang-go=\"`access`\" pulumi-lang-python=\"`access`\" pulumi-lang-yaml=\"`access`\" pulumi-lang-java=\"`access`\"\u003e`access`\u003c/span\u003e\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.SystemSchema(\"this\", {schema: \"access\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.SystemSchema(\"this\", schema=\"access\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.SystemSchema(\"this\", new()\n    {\n        Schema = \"access\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewSystemSchema(ctx, \"this\", \u0026databricks.SystemSchemaArgs{\n\t\t\tSchema: pulumi.String(\"access\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.SystemSchema;\nimport com.pulumi.databricks.SystemSchemaArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new 
SystemSchema(\"this\", SystemSchemaArgs.builder()\n            .schema(\"access\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:SystemSchema\n    properties:\n      schema: access\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"autoEnabled":{"type":"boolean"},"fullName":{"type":"string","description":"the full name of the system schema, in form of `system.\u003cschema\u003e`.\n"},"metastoreId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SystemSchemaProviderConfig:SystemSchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schema":{"type":"string","description":"name of the system schema.\n"},"state":{"type":"string","description":"The current state of enablement for the system schema.\n"}},"required":["autoEnabled","fullName","metastoreId","schema","state"],"inputProperties":{"providerConfig":{"$ref":"#/types/databricks:index/SystemSchemaProviderConfig:SystemSchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schema":{"type":"string","description":"name of the system schema.\n"}},"requiredInputs":["schema"],"stateInputs":{"description":"Input properties used for looking up and filtering SystemSchema resources.\n","properties":{"autoEnabled":{"type":"boolean"},"fullName":{"type":"string","description":"the full name of the system schema, in form of `system.\u003cschema\u003e`.\n"},"metastoreId":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/SystemSchemaProviderConfig:SystemSchemaProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"schema":{"type":"string","description":"name of the system schema.\n"},"state":{"type":"string","description":"The current state of enablement for the system schema.\n"}},"type":"object"}},"databricks:index/table:Table":{"properties":{"catalogName":{"type":"string"},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/TableColumn:TableColumn"}},"comment":{"type":"string"},"dataSourceFormat":{"type":"string"},"name":{"type":"string"},"owner":{"type":"string"},"properties":{"type":"object","additionalProperties":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/TableProviderConfig:TableProviderConfig"},"schemaName":{"type":"string"},"storageCredentialName":{"type":"string"},"storageLocation":{"type":"string"},"tableType":{"type":"string"},"viewDefinition":{"type":"string"}},"required":["catalogName","columns","dataSourceFormat","name","owner","schemaName","tableType"],"inputProperties":{"catalogName":{"type":"string","willReplaceOnChanges":true},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/TableColumn:TableColumn"}},"comment":{"type":"string"},"dataSourceFormat":{"type":"string"},"name":{"type":"string"},"owner":{"type":"string"},"properties":{"type":"object","additionalProperties":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/TableProviderConfig:TableProviderConfig"},"schemaName":{"type":"string","willReplaceOnChanges":true},"storageCredentialName":{"type":"string","willReplaceOnChanges":true},"storageLocation":{"type":"string"},"tableType":{"type":"string","willReplaceOnChanges":true},"viewDefinition":{"type":"string"}},"requiredInputs":["catalogName","columns","dataSourceFormat","schemaName","tableType"],"stateInputs":{"description":"Input properties used for looking up and filtering Table resources.\n","properties":{"catalogName":{"type":"string","willReplaceOnChanges":true},"columns":{"type":"array","items":{"$ref":"#/types/databricks:index/TableColumn:TableColumn"}},"comment":{"type":"string"},"dataSourceFormat":{"type":"string"},"name":{"type":"string"},"owner":{"type":"string"},"properties":{"type":"object","additionalProperties":{"type":"string"}},"providerConfig":{"$ref":"#/types/databricks:index/TableProviderConfig:TableProviderConfig"},"schemaName":{"type":"string","willReplaceOnChanges":true},"storageCredentialName":{"type":"string","willReplaceOnChanges":true},"storageLocation":{"type":"string"},"tableType":{"type":"string","willReplaceOnChanges":true},"viewDefinition":{"type":"string"}},"type":"object"}},"databricks:index/tagPolicy:TagPolicy":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nDefine tag policies to manage governed tags in your account.\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.EntityTagAssignment \" pulumi-lang-dotnet=\" databricks.EntityTagAssignment \" pulumi-lang-go=\" EntityTagAssignment \" pulumi-lang-python=\" EntityTagAssignment \" pulumi-lang-yaml=\" databricks.EntityTagAssignment \" pulumi-lang-java=\" databricks.EntityTagAssignment \"\u003e databricks.EntityTagAssignment \u003c/span\u003efor assigning tags to supported Unity Catalog entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-dotnet=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-go=\" WorkspaceEntityTagAssignment \" 
pulumi-lang-python=\" WorkspaceEntityTagAssignment \" pulumi-lang-yaml=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-java=\" databricks.WorkspaceEntityTagAssignment \"\u003e databricks.WorkspaceEntityTagAssignment \u003c/span\u003efor assigning tags to supported workspace entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.PolicyInfo \" pulumi-lang-dotnet=\" databricks.PolicyInfo \" pulumi-lang-go=\" PolicyInfo \" pulumi-lang-python=\" PolicyInfo \" pulumi-lang-yaml=\" databricks.PolicyInfo \" pulumi-lang-java=\" databricks.PolicyInfo \"\u003e databricks.PolicyInfo \u003c/span\u003efor defining ABAC policies using governed tags.\n*\u003cspan pulumi-lang-nodejs=\" databricks.AccessControlRuleSet \" pulumi-lang-dotnet=\" databricks.AccessControlRuleSet \" pulumi-lang-go=\" AccessControlRuleSet \" pulumi-lang-python=\" AccessControlRuleSet \" pulumi-lang-yaml=\" databricks.AccessControlRuleSet \" pulumi-lang-java=\" databricks.AccessControlRuleSet \"\u003e databricks.AccessControlRuleSet \u003c/span\u003efor managing account-level and individual tag policy permissions.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst exampleTagPolicy = new databricks.TagPolicy(\"example_tag_policy\", {\n    tagKey: \"example_tag_key\",\n    description: \"Example description.\",\n    values: [\n        {\n            name: \"example_value_1\",\n        },\n        {\n            name: \"example_value_2\",\n        },\n        {\n            name: \"example_value_3\",\n        },\n    ],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexample_tag_policy = databricks.TagPolicy(\"example_tag_policy\",\n    tag_key=\"example_tag_key\",\n    description=\"Example description.\",\n    values=[\n        {\n            \"name\": \"example_value_1\",\n        },\n        {\n            \"name\": \"example_value_2\",\n        },\n        {\n            \"name\": \"example_value_3\",\n        },\n    ])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var exampleTagPolicy = new Databricks.TagPolicy(\"example_tag_policy\", new()\n    {\n        TagKey = \"example_tag_key\",\n        Description = \"Example description.\",\n        Values = new[]\n        {\n            new Databricks.Inputs.TagPolicyValueArgs\n            {\n                Name = \"example_value_1\",\n            },\n            new Databricks.Inputs.TagPolicyValueArgs\n            {\n                Name = \"example_value_2\",\n            },\n            new Databricks.Inputs.TagPolicyValueArgs\n            {\n                Name = \"example_value_3\",\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewTagPolicy(ctx, \"example_tag_policy\", \u0026databricks.TagPolicyArgs{\n\t\t\tTagKey:      pulumi.String(\"example_tag_key\"),\n\t\t\tDescription: pulumi.String(\"Example description.\"),\n\t\t\tValues: databricks.TagPolicyValueArray{\n\t\t\t\t\u0026databricks.TagPolicyValueArgs{\n\t\t\t\t\tName: 
pulumi.String(\"example_value_1\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.TagPolicyValueArgs{\n\t\t\t\t\tName: pulumi.String(\"example_value_2\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.TagPolicyValueArgs{\n\t\t\t\t\tName: pulumi.String(\"example_value_3\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.TagPolicy;\nimport com.pulumi.databricks.TagPolicyArgs;\nimport com.pulumi.databricks.inputs.TagPolicyValueArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var exampleTagPolicy = new TagPolicy(\"exampleTagPolicy\", TagPolicyArgs.builder()\n            .tagKey(\"example_tag_key\")\n            .description(\"Example description.\")\n            .values(            \n                TagPolicyValueArgs.builder()\n                    .name(\"example_value_1\")\n                    .build(),\n                TagPolicyValueArgs.builder()\n                    .name(\"example_value_2\")\n                    .build(),\n                TagPolicyValueArgs.builder()\n                    .name(\"example_value_3\")\n                    .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  exampleTagPolicy:\n    type: databricks:TagPolicy\n    name: example_tag_policy\n    properties:\n      tagKey: example_tag_key\n      description: Example description.\n      values:\n        - name: example_value_1\n        - name: example_value_2\n        - name: example_value_3\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"createTime":{"type":"string","description":"(string) - Timestamp when the tag policy was created\n"},"description":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/TagPolicyProviderConfig:TagPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string"},"updateTime":{"type":"string","description":"(string) - Timestamp when the tag policy was last updated\n"},"values":{"type":"array","items":{"$ref":"#/types/databricks:index/TagPolicyValue:TagPolicyValue"}}},"required":["createTime","tagKey","updateTime"],"inputProperties":{"description":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/TagPolicyProviderConfig:TagPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string"},"values":{"type":"array","items":{"$ref":"#/types/databricks:index/TagPolicyValue:TagPolicyValue"}}},"requiredInputs":["tagKey"],"stateInputs":{"description":"Input properties used for looking up and filtering TagPolicy resources.\n","properties":{"createTime":{"type":"string","description":"(string) - Timestamp when the tag policy was created\n"},"description":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/TagPolicyProviderConfig:TagPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string"},"updateTime":{"type":"string","description":"(string) - Timestamp when the tag policy was last 
updated\n"},"values":{"type":"array","items":{"$ref":"#/types/databricks:index/TagPolicyValue:TagPolicyValue"}}},"type":"object"}},"databricks:index/token:Token":{"description":"This resource creates [Personal Access Tokens](https://docs.databricks.com/sql/user/security/personal-access-tokens.html) for the same user that is authenticated with the provider. Most likely you should use\u003cspan pulumi-lang-nodejs=\" databricks.OboToken \" pulumi-lang-dotnet=\" databricks.OboToken \" pulumi-lang-go=\" OboToken \" pulumi-lang-python=\" OboToken \" pulumi-lang-yaml=\" databricks.OboToken \" pulumi-lang-java=\" databricks.OboToken \"\u003e databricks.OboToken \u003c/span\u003eto create [On-Behalf-Of tokens](https://docs.databricks.com/administration-guide/users-groups/service-principals.html#manage-personal-access-tokens-for-a-service-principal) for a\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003ein Databricks workspaces on AWS. Databricks workspaces on other clouds use their own native OAuth token flows.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// create PAT token to provision entities within workspace\nconst pat = new databricks.Token(\"pat\", {\n    comment: \"Pulumi Provisioning\",\n    lifetimeSeconds: 8640000,\n});\nexport const databricksToken = pat.tokenValue;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\n# create PAT token to provision entities within workspace\npat = databricks.Token(\"pat\",\n    comment=\"Pulumi Provisioning\",\n    lifetime_seconds=8640000)\npulumi.export(\"databricksToken\", pat.token_value)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    // create PAT token to provision entities within workspace\n    var pat = new Databricks.Token(\"pat\", new()\n    {\n        Comment = \"Pulumi Provisioning\",\n        LifetimeSeconds = 8640000,\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"databricksToken\"] = pat.TokenValue,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t// create PAT token to provision entities within workspace\n\t\tpat, err := databricks.NewToken(ctx, \"pat\", \u0026databricks.TokenArgs{\n\t\t\tComment:         pulumi.String(\"Pulumi Provisioning\"),\n\t\t\tLifetimeSeconds: pulumi.Int(8640000),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"databricksToken\", pat.TokenValue)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Token;\nimport com.pulumi.databricks.TokenArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport 
java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        // create PAT token to provision entities within workspace\n        var pat = new Token(\"pat\", TokenArgs.builder()\n            .comment(\"Pulumi Provisioning\")\n            .lifetimeSeconds(8640000)\n            .build());\n\n        ctx.export(\"databricksToken\", pat.tokenValue());\n    }\n}\n```\n```yaml\nresources:\n  # create PAT token to provision entities within workspace\n  pat:\n    type: databricks:Token\n    properties:\n      comment: Pulumi Provisioning\n      lifetimeSeconds: 8.64e+06\noutputs:\n  # output token for other modules\n  databricksToken: ${pat.tokenValue}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nA token can be automatically rotated by taking a dependency on the \u003cspan pulumi-lang-nodejs=\"`timeRotating`\" pulumi-lang-dotnet=\"`TimeRotating`\" pulumi-lang-go=\"`timeRotating`\" pulumi-lang-python=\"`time_rotating`\" pulumi-lang-yaml=\"`timeRotating`\" pulumi-lang-java=\"`timeRotating`\"\u003e`time_rotating`\u003c/span\u003e resource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as time from \"@pulumiverse/time\";\n\nconst _this = new time.Rotating(\"this\", {rotationDays: 30});\nconst pat = new databricks.Token(\"pat\", {\n    comment: pulumi.interpolate`Pulumi (created: ${_this.rfc3339})`,\n    lifetimeSeconds: 60 * 24 * 60 * 60,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumiverse_time as time\n\nthis = time.Rotating(\"this\", rotation_days=30)\npat = databricks.Token(\"pat\",\n    comment=this.rfc3339.apply(lambda rfc3339: f\"Pulumi (created: {rfc3339})\"),\n    lifetime_seconds=60 * 24 * 60 * 60)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Time = Pulumiverse.Time;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Time.Rotating(\"this\", new()\n    {\n        RotationDays = 30,\n    });\n\n    var pat = new Databricks.Token(\"pat\", new()\n    {\n        Comment = @this.Rfc3339.Apply(rfc3339 =\u003e $\"Pulumi (created: {rfc3339})\"),\n        LifetimeSeconds = 60 * 24 * 60 * 60,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumiverse/pulumi-time/sdk/go/time\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := time.NewRotating(ctx, \"this\", \u0026time.RotatingArgs{\n\t\t\tRotationDays: pulumi.Int(30),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewToken(ctx, \"pat\", \u0026databricks.TokenArgs{\n\t\t\tComment: this.Rfc3339.ApplyT(func(rfc3339 string) (string, error) {\n\t\t\t\treturn fmt.Sprintf(\"Pulumi (created: %v)\", rfc3339), nil\n\t\t\t}).(pulumi.StringOutput),\n\t\t\tLifetimeSeconds: int(60 * 24 * 60 * 60),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumiverse.time.Rotating;\nimport com.pulumiverse.time.RotatingArgs;\nimport com.pulumi.databricks.Token;\nimport 
com.pulumi.databricks.TokenArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new Rotating(\"this\", RotatingArgs.builder()\n            .rotationDays(30)\n            .build());\n\n        var pat = new Token(\"pat\", TokenArgs.builder()\n            .comment(this_.rfc3339().applyValue(_rfc3339 -\u003e String.format(\"Pulumi (created: %s)\", _rfc3339)))\n            .lifetimeSeconds(60 * 24 * 60 * 60)\n            .build());\n\n    }\n}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"comment":{"type":"string","description":"(String) Comment that will appear on the user's settings page for this token.\n"},"creationTime":{"type":"integer"},"expiryTime":{"type":"integer"},"lifetimeSeconds":{"type":"integer","description":"(Integer) The lifetime of the token, in seconds. If no lifetime is specified, then expire time will be set to maximum allowed by the workspace configuration or platform.\n"},"providerConfig":{"$ref":"#/types/databricks:index/TokenProviderConfig:TokenProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"tokenId":{"type":"string"},"tokenValue":{"type":"string","description":"**Sensitive** value of the newly-created token.\n","secret":true}},"required":["creationTime","expiryTime","tokenId","tokenValue"],"inputProperties":{"comment":{"type":"string","description":"(String) Comment that will appear on the user's settings page for this token.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer"},"expiryTime":{"type":"integer"},"lifetimeSeconds":{"type":"integer","description":"(Integer) The lifetime of the token, in seconds. If no lifetime is specified, then expire time will be set to maximum allowed by the workspace configuration or platform.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/TokenProviderConfig:TokenProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"tokenId":{"type":"string"}},"stateInputs":{"description":"Input properties used for looking up and filtering Token resources.\n","properties":{"comment":{"type":"string","description":"(String) Comment that will appear on the user's settings page for this token.\n","willReplaceOnChanges":true},"creationTime":{"type":"integer"},"expiryTime":{"type":"integer"},"lifetimeSeconds":{"type":"integer","description":"(Integer) The lifetime of the token, in seconds. If no lifetime is specified, then expire time will be set to maximum allowed by the workspace configuration or platform.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/TokenProviderConfig:TokenProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"tokenId":{"type":"string"},"tokenValue":{"type":"string","description":"**Sensitive** value of the newly-created token.\n","secret":true}},"type":"object"}},"databricks:index/user:User":{"description":"This resource allows you to manage [users in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/users.html), [Databricks Account Console](https://accounts.cloud.databricks.com/) or [Azure Databricks Account Console](https://accounts.azuredatabricks.net). You can also associate Databricks users to databricks_group. Upon user creation the user will receive a welcome email. You can also get information about caller identity using\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata source.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n\u003e To assign account level users to workspace use databricks_mws_permission_assignment.\n\n\u003e Entitlements, like, \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`allowInstancePoolCreate`\" pulumi-lang-dotnet=\"`AllowInstancePoolCreate`\" pulumi-lang-go=\"`allowInstancePoolCreate`\" pulumi-lang-python=\"`allow_instance_pool_create`\" pulumi-lang-yaml=\"`allowInstancePoolCreate`\" pulumi-lang-java=\"`allowInstancePoolCreate`\"\u003e`allow_instance_pool_create`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`workspaceConsume`\" pulumi-lang-dotnet=\"`WorkspaceConsume`\" pulumi-lang-go=\"`workspaceConsume`\" pulumi-lang-python=\"`workspace_consume`\" pulumi-lang-yaml=\"`workspaceConsume`\" pulumi-lang-java=\"`workspaceConsume`\"\u003e`workspace_consume`\u003c/span\u003e applicable only for workspace-level users.  
Use\u003cspan pulumi-lang-nodejs=\" databricks.Entitlements \" pulumi-lang-dotnet=\" databricks.Entitlements \" pulumi-lang-go=\" Entitlements \" pulumi-lang-python=\" Entitlements \" pulumi-lang-yaml=\" databricks.Entitlements \" pulumi-lang-java=\" databricks.Entitlements \"\u003e databricks.Entitlements \u003c/span\u003eresource to assign entitlements inside a workspace to account-level users.\n\nTo create users in the Databricks account, the provider must be configured with `host = \"https://accounts.cloud.databricks.com\"` on AWS deployments or `host = \"https://accounts.azuredatabricks.net\"` and authenticate using AAD tokens on Azure deployments.\n\nThe default behavior when deleting a \u003cspan pulumi-lang-nodejs=\"`databricks.User`\" pulumi-lang-dotnet=\"`databricks.User`\" pulumi-lang-go=\"`User`\" pulumi-lang-python=\"`User`\" pulumi-lang-yaml=\"`databricks.User`\" pulumi-lang-java=\"`databricks.User`\"\u003e`databricks.User`\u003c/span\u003e resource depends on whether the provider is configured at the workspace-level or account-level. When the provider is configured at the workspace-level, the user will be deleted from the workspace. When the provider is configured at the account-level, the user will be deactivated but not deleted. When the provider is configured at the account level, to delete the user from the account when the resource is deleted, set \u003cspan pulumi-lang-nodejs=\"`disableAsUserDeletion \" pulumi-lang-dotnet=\"`DisableAsUserDeletion \" pulumi-lang-go=\"`disableAsUserDeletion \" pulumi-lang-python=\"`disable_as_user_deletion \" pulumi-lang-yaml=\"`disableAsUserDeletion \" pulumi-lang-java=\"`disableAsUserDeletion \"\u003e`disable_as_user_deletion \u003c/span\u003e= false`. Conversely, when the provider is configured at the account-level, to deactivate the user when the resource is deleted, set \u003cspan pulumi-lang-nodejs=\"`disableAsUserDeletion \" pulumi-lang-dotnet=\"`DisableAsUserDeletion \" pulumi-lang-go=\"`disableAsUserDeletion \" pulumi-lang-python=\"`disable_as_user_deletion \" pulumi-lang-yaml=\"`disableAsUserDeletion \" pulumi-lang-java=\"`disableAsUserDeletion \"\u003e`disable_as_user_deletion \u003c/span\u003e= true`.\n\n## Example Usage\n\nCreating regular user:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = new databricks.User(\"me\", {userName: \"me@example.com\"});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.User(\"me\", user_name=\"me@example.com\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = new Databricks.User(\"me\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewUser(ctx, \"me\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport 
java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var me = new User(\"me\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  me:\n    type: databricks:User\n    properties:\n      userName: me@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating user with administrative permissions - referencing special \u003cspan pulumi-lang-nodejs=\"`admins`\" pulumi-lang-dotnet=\"`Admins`\" pulumi-lang-go=\"`admins`\" pulumi-lang-python=\"`admins`\" pulumi-lang-yaml=\"`admins`\" pulumi-lang-java=\"`admins`\"\u003e`admins`\u003c/span\u003e\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ein\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst me = new databricks.User(\"me\", {userName: \"me@example.com\"});\nconst i_am_admin = new databricks.GroupMember(\"i-am-admin\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: me.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadmins = databricks.get_group(display_name=\"admins\")\nme = databricks.User(\"me\", user_name=\"me@example.com\")\ni_am_admin = databricks.GroupMember(\"i-am-admin\",\n    group_id=admins.id,\n    member_id=me.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var me = new Databricks.User(\"me\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var i_am_admin = new Databricks.GroupMember(\"i-am-admin\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = me.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tme, err := databricks.NewUser(ctx, \"me\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"i-am-admin\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: 
me.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        var me = new User(\"me\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var i_am_admin = new GroupMember(\"i-am-admin\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(me.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  me:\n    type: databricks:User\n    properties:\n      userName: me@example.com\n  i-am-admin:\n    type: databricks:GroupMember\n    properties:\n      groupId: ${admins.id}\n      memberId: ${me.id}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating user with cluster create permissions:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst me = new databricks.User(\"me\", {\n    userName: \"me@example.com\",\n    displayName: \"Example user\",\n    allowClusterCreate: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nme = databricks.User(\"me\",\n    user_name=\"me@example.com\",\n    display_name=\"Example user\",\n    allow_cluster_create=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var me = new Databricks.User(\"me\", new()\n    {\n        UserName = \"me@example.com\",\n        DisplayName = \"Example user\",\n        AllowClusterCreate = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewUser(ctx, \"me\", \u0026databricks.UserArgs{\n\t\t\tUserName:           pulumi.String(\"me@example.com\"),\n\t\t\tDisplayName:        pulumi.String(\"Example user\"),\n\t\t\tAllowClusterCreate: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n     
   Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var me = new User(\"me\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .displayName(\"Example user\")\n            .allowClusterCreate(true)\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  me:\n    type: databricks:User\n    properties:\n      userName: me@example.com\n      displayName: Example user\n      allowClusterCreate: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating user in AWS Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountUser = new databricks.User(\"account_user\", {\n    userName: \"me@example.com\",\n    displayName: \"Example user\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_user = databricks.User(\"account_user\",\n    user_name=\"me@example.com\",\n    display_name=\"Example user\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountUser = new Databricks.User(\"account_user\", new()\n    {\n        UserName = \"me@example.com\",\n        DisplayName = \"Example user\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewUser(ctx, \"account_user\", \u0026databricks.UserArgs{\n\t\t\tUserName:    pulumi.String(\"me@example.com\"),\n\t\t\tDisplayName: pulumi.String(\"Example user\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var accountUser = new User(\"accountUser\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .displayName(\"Example user\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  accountUser:\n    type: databricks:User\n    name: account_user\n    properties:\n      userName: me@example.com\n      displayName: Example user\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nCreating user in Azure Databricks account:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst accountUser = new databricks.User(\"account_user\", {\n    userName: \"me@example.com\",\n    displayName: \"Example user\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\naccount_user = databricks.User(\"account_user\",\n    user_name=\"me@example.com\",\n    display_name=\"Example user\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var accountUser 
= new Databricks.User(\"account_user\", new()\n    {\n        UserName = \"me@example.com\",\n        DisplayName = \"Example user\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewUser(ctx, \"account_user\", \u0026databricks.UserArgs{\n\t\t\tUserName:    pulumi.String(\"me@example.com\"),\n\t\t\tDisplayName: pulumi.String(\"Example user\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var accountUser = new User(\"accountUser\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .displayName(\"Example user\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  accountUser:\n    type: databricks:User\n    name: account_user\n    properties:\n      userName: me@example.com\n      displayName: Example user\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to 
databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003edata to retrieve information about databricks_user.\n\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `users/mr.foo@example.com`.\n"},"active":{"type":"boolean","description":"Either user is active or not. True by default, but can be set to false in case of user deactivation with preserving user assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the user to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the user to have instance pool create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the user to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the user when deleting the resource, rather than deleting the user entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the username that can be the full name of the user.\n"},"externalId":{"type":"string","description":"ID of the user in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create user: User with username X already exists` errors and implicitly import the specific user into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the user's home directory is deleted when the user is deleted. It will have not impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the user's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the user, e.g. `/Users/mr.foo@example.com`.\n"},"repos":{"type":"string","description":"Personal Repos location of the user, e.g. 
`/Repos/mr.foo@example.com`.\n"},"userName":{"type":"string","description":"This is the username of the given user and will be their form of access and identity.  Provided username will be converted to lower case if it contains upper case characters.\n"},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the user to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the user to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"required":["aclPrincipalId","disableAsUserDeletion","displayName","home","repos","userName"],"inputProperties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `users/mr.foo@example.com`.\n"},"active":{"type":"boolean","description":"Either user is active or not. True by default, but can be set to false in case of user deactivation with preserving user assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the user to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the user to have instance pool create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the user to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the user when deleting the resource, rather than deleting the user entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the username that can be the full name of the user.\n"},"externalId":{"type":"string","description":"ID of the user in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create user: User with username X already exists` errors and implicitly import the specific user into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the user's home directory is deleted when the user is deleted. It will have not impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the user's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the user, e.g. `/Users/mr.foo@example.com`.\n"},"repos":{"type":"string","description":"Personal Repos location of the user, e.g. 
`/Repos/mr.foo@example.com`.\n"},"userName":{"type":"string","description":"This is the username of the given user and will be their form of access and identity.  Provided username will be converted to lower case if it contains upper case characters.\n","willReplaceOnChanges":true},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the user to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the user to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"requiredInputs":["userName"],"stateInputs":{"description":"Input properties used for looking up and filtering User resources.\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `users/mr.foo@example.com`.\n"},"active":{"type":"boolean","description":"Either user is active or not. True by default, but can be set to false in case of user deactivation with preserving user assets.\n"},"allowClusterCreate":{"type":"boolean","description":"Allow the user to have cluster create privileges. Defaults to false. More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument. Everyone without \u003cspan pulumi-lang-nodejs=\"`allowClusterCreate`\" pulumi-lang-dotnet=\"`AllowClusterCreate`\" pulumi-lang-go=\"`allowClusterCreate`\" pulumi-lang-python=\"`allow_cluster_create`\" pulumi-lang-yaml=\"`allowClusterCreate`\" pulumi-lang-java=\"`allowClusterCreate`\"\u003e`allow_cluster_create`\u003c/span\u003e argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy.\n"},"allowInstancePoolCreate":{"type":"boolean","description":"Allow the user to have instance pool create privileges. Defaults to false. 
More fine grained permissions could be assigned with\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" instancePoolId \" pulumi-lang-dotnet=\" InstancePoolId \" pulumi-lang-go=\" instancePoolId \" pulumi-lang-python=\" instance_pool_id \" pulumi-lang-yaml=\" instancePoolId \" pulumi-lang-java=\" instancePoolId \"\u003e instance_pool_id \u003c/span\u003eargument.\n"},"databricksSqlAccess":{"type":"boolean","description":"This is a field to allow the user to have access to [Databricks SQL](https://databricks.com/product/databricks-sql)  UI, [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one) and through databricks_sql_endpoint.\n"},"disableAsUserDeletion":{"type":"boolean","description":"Deactivate the user when deleting the resource, rather than deleting the user entirely. Defaults to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e when the provider is configured at the account-level and \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e when configured at the workspace-level. This flag is exclusive to\u003cspan pulumi-lang-nodejs=\" forceDeleteRepos \" pulumi-lang-dotnet=\" ForceDeleteRepos \" pulumi-lang-go=\" forceDeleteRepos \" pulumi-lang-python=\" force_delete_repos \" pulumi-lang-yaml=\" forceDeleteRepos \" pulumi-lang-java=\" forceDeleteRepos \"\u003e force_delete_repos \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" forceDeleteHomeDir \" pulumi-lang-dotnet=\" ForceDeleteHomeDir \" pulumi-lang-go=\" forceDeleteHomeDir \" pulumi-lang-python=\" force_delete_home_dir \" pulumi-lang-yaml=\" forceDeleteHomeDir \" pulumi-lang-java=\" forceDeleteHomeDir \"\u003e force_delete_home_dir \u003c/span\u003eflags.\n"},"displayName":{"type":"string","description":"This is an alias for the username that can be the full name of the user.\n"},"externalId":{"type":"string","description":"ID of the user in an external identity provider.\n"},"force":{"type":"boolean","description":"Ignore `cannot create user: User with username X already exists` errors and implicitly import the specific user into Pulumi state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.\n"},"forceDeleteHomeDir":{"type":"boolean","description":"This flag determines whether the user's home directory is deleted when the user is deleted. It will have not impact when in the accounts SCIM API. False by default.\n"},"forceDeleteRepos":{"type":"boolean","description":"This flag determines whether the user's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.\n"},"home":{"type":"string","description":"Home folder of the user, e.g. `/Users/mr.foo@example.com`.\n"},"repos":{"type":"string","description":"Personal Repos location of the user, e.g. 
`/Repos/mr.foo@example.com`.\n"},"userName":{"type":"string","description":"This is the username of the given user and will be their form of access and identity.  Provided username will be converted to lower case if it contains upper case characters.\n","willReplaceOnChanges":true},"workspaceAccess":{"type":"boolean","description":"This is a field to allow the user to have access to a Databricks Workspace UI and [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).\n"},"workspaceConsume":{"type":"boolean","description":"This is a field to allow the user to have access only to [Databricks One](https://docs.databricks.com/aws/en/workspace/databricks-one#who-can-access-databricks-one).  Couldn't be used with \u003cspan pulumi-lang-nodejs=\"`workspaceAccess`\" pulumi-lang-dotnet=\"`WorkspaceAccess`\" pulumi-lang-go=\"`workspaceAccess`\" pulumi-lang-python=\"`workspace_access`\" pulumi-lang-yaml=\"`workspaceAccess`\" pulumi-lang-java=\"`workspaceAccess`\"\u003e`workspace_access`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`databricksSqlAccess`\" pulumi-lang-dotnet=\"`DatabricksSqlAccess`\" pulumi-lang-go=\"`databricksSqlAccess`\" pulumi-lang-python=\"`databricks_sql_access`\" pulumi-lang-yaml=\"`databricksSqlAccess`\" pulumi-lang-java=\"`databricksSqlAccess`\"\u003e`databricks_sql_access`\u003c/span\u003e.\n"}},"type":"object"}},"databricks:index/userInstanceProfile:UserInstanceProfile":{"description":"\u003e **Deprecated** Please rewrite with databricks_user_role. This resource will be removed in v0.5.x\n\nThis resource allows you to attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instanceProfile = new databricks.InstanceProfile(\"instance_profile\", {instanceProfileArn: \"my_instance_profile_arn\"});\nconst myUser = new databricks.User(\"my_user\", {userName: \"me@example.com\"});\nconst myUserInstanceProfile = new databricks.UserInstanceProfile(\"my_user_instance_profile\", {\n    userId: myUser.id,\n    instanceProfileId: instanceProfile.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance_profile = databricks.InstanceProfile(\"instance_profile\", instance_profile_arn=\"my_instance_profile_arn\")\nmy_user = databricks.User(\"my_user\", user_name=\"me@example.com\")\nmy_user_instance_profile = databricks.UserInstanceProfile(\"my_user_instance_profile\",\n    user_id=my_user.id,\n    instance_profile_id=instance_profile.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instanceProfile = new Databricks.InstanceProfile(\"instance_profile\", new()\n    {\n        InstanceProfileArn = \"my_instance_profile_arn\",\n    });\n\n    var myUser = new Databricks.User(\"my_user\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var myUserInstanceProfile = new Databricks.UserInstanceProfile(\"my_user_instance_profile\", new()\n    {\n        
UserId = myUser.Id,\n        InstanceProfileId = instanceProfile.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstanceProfile, err := databricks.NewInstanceProfile(ctx, \"instance_profile\", \u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.String(\"my_instance_profile_arn\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyUser, err := databricks.NewUser(ctx, \"my_user\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewUserInstanceProfile(ctx, \"my_user_instance_profile\", \u0026databricks.UserInstanceProfileArgs{\n\t\t\tUserId:            myUser.ID(),\n\t\t\tInstanceProfileId: instanceProfile.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.UserInstanceProfile;\nimport com.pulumi.databricks.UserInstanceProfileArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instanceProfile = new InstanceProfile(\"instanceProfile\", InstanceProfileArgs.builder()\n            .instanceProfileArn(\"my_instance_profile_arn\")\n            .build());\n\n        var myUser = new User(\"myUser\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var myUserInstanceProfile = new UserInstanceProfile(\"myUserInstanceProfile\", UserInstanceProfileArgs.builder()\n            .userId(myUser.id())\n            .instanceProfileId(instanceProfile.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instanceProfile:\n    type: databricks:InstanceProfile\n    name: instance_profile\n    properties:\n      instanceProfileArn: my_instance_profile_arn\n  myUser:\n    type: databricks:User\n    name: my_user\n    properties:\n      userName: me@example.com\n  myUserInstanceProfile:\n    type: databricks:UserInstanceProfile\n    name: my_user_instance_profile\n    properties:\n      userId: ${myUser.id}\n      instanceProfileId: ${instanceProfile.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" 
InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003edata to retrieve information about databricks_user.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n"},"userId":{"type":"string","description":"This is the id of the user resource.\n"}},"required":["instanceProfileId","userId"],"inputProperties":{"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"This is the id of the user resource.\n","willReplaceOnChanges":true}},"requiredInputs":["instanceProfileId","userId"],"stateInputs":{"description":"Input properties used for looking up and filtering UserInstanceProfile resources.\n","properties":{"instanceProfileId":{"type":"string","description":"This is the id of the instance profile resource.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"This is the id of the user resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/userRole:UserRole":{"description":"This resource allows you to attach a role or\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" 
InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n\n\u003e This resource can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAdding AWS instance profile to a user\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst instanceProfile = new databricks.InstanceProfile(\"instance_profile\", {instanceProfileArn: \"my_instance_profile_arn\"});\nconst myUser = new databricks.User(\"my_user\", {userName: \"me@example.com\"});\nconst myUserRole = new databricks.UserRole(\"my_user_role\", {\n    userId: myUser.id,\n    role: instanceProfile.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ninstance_profile = databricks.InstanceProfile(\"instance_profile\", instance_profile_arn=\"my_instance_profile_arn\")\nmy_user = databricks.User(\"my_user\", user_name=\"me@example.com\")\nmy_user_role = databricks.UserRole(\"my_user_role\",\n    user_id=my_user.id,\n    role=instance_profile.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var instanceProfile = new Databricks.InstanceProfile(\"instance_profile\", new()\n    {\n        InstanceProfileArn = \"my_instance_profile_arn\",\n    });\n\n    var myUser = new Databricks.User(\"my_user\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var myUserRole = new Databricks.UserRole(\"my_user_role\", new()\n    {\n        UserId = myUser.Id,\n        Role = instanceProfile.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tinstanceProfile, err := databricks.NewInstanceProfile(ctx, \"instance_profile\", \u0026databricks.InstanceProfileArgs{\n\t\t\tInstanceProfileArn: pulumi.String(\"my_instance_profile_arn\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmyUser, err := databricks.NewUser(ctx, \"my_user\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewUserRole(ctx, \"my_user_role\", \u0026databricks.UserRoleArgs{\n\t\t\tUserId: myUser.ID(),\n\t\t\tRole:   instanceProfile.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.InstanceProfile;\nimport com.pulumi.databricks.InstanceProfileArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.UserRole;\nimport com.pulumi.databricks.UserRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var instanceProfile = new InstanceProfile(\"instanceProfile\", InstanceProfileArgs.builder()\n    
        .instanceProfileArn(\"my_instance_profile_arn\")\n            .build());\n\n        var myUser = new User(\"myUser\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var myUserRole = new UserRole(\"myUserRole\", UserRoleArgs.builder()\n            .userId(myUser.id())\n            .role(instanceProfile.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  instanceProfile:\n    type: databricks:InstanceProfile\n    name: instance_profile\n    properties:\n      instanceProfileArn: my_instance_profile_arn\n  myUser:\n    type: databricks:User\n    name: my_user\n    properties:\n      userName: me@example.com\n  myUserRole:\n    type: databricks:UserRole\n    name: my_user_role\n    properties:\n      userId: ${myUser.id}\n      role: ${instanceProfile.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nAdding user as administrator to Databricks Account\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst myUser = new databricks.User(\"my_user\", {userName: \"me@example.com\"});\nconst myUserAccountAdmin = new databricks.UserRole(\"my_user_account_admin\", {\n    userId: myUser.id,\n    role: \"account_admin\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmy_user = databricks.User(\"my_user\", user_name=\"me@example.com\")\nmy_user_account_admin = databricks.UserRole(\"my_user_account_admin\",\n    user_id=my_user.id,\n    role=\"account_admin\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var myUser = new Databricks.User(\"my_user\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var myUserAccountAdmin = new Databricks.UserRole(\"my_user_account_admin\", new()\n    {\n        UserId = myUser.Id,\n        Role = \"account_admin\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmyUser, err := databricks.NewUser(ctx, \"my_user\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewUserRole(ctx, \"my_user_account_admin\", \u0026databricks.UserRoleArgs{\n\t\t\tUserId: myUser.ID(),\n\t\t\tRole:   pulumi.String(\"account_admin\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport com.pulumi.databricks.UserRole;\nimport com.pulumi.databricks.UserRoleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var myUser = new User(\"myUser\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var myUserAccountAdmin = new UserRole(\"myUserAccountAdmin\", UserRoleArgs.builder()\n          
  .userId(myUser.id())\n            .role(\"account_admin\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myUser:\n    type: databricks:User\n    name: my_user\n    properties:\n      userName: me@example.com\n  myUserAccountAdmin:\n    type: databricks:UserRole\n    name: my_user_account_admin\n    properties:\n      userId: ${myUser.id}\n      role: account_admin\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003edata to retrieve information about databricks_user.\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"role":{"type":"string","description":"Either 
a role name or the ARN/ID of the instance profile resource.\n"},"userId":{"type":"string","description":"This is the id of the user resource.\n"}},"required":["role","userId"],"inputProperties":{"role":{"type":"string","description":"Either a role name or the ARN/ID of the instance profile resource.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"This is the id of the user resource.\n","willReplaceOnChanges":true}},"requiredInputs":["role","userId"],"stateInputs":{"description":"Input properties used for looking up and filtering UserRole resources.\n","properties":{"role":{"type":"string","description":"Either a role name or the ARN/ID of the instance profile resource.\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"This is the id of the user resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/vectorSearchEndpoint:VectorSearchEndpoint":{"description":"This resource allows you to create [Mosaic AI Vector Search Endpoint](https://docs.databricks.com/en/generative-ai/vector-search.html) in Databricks.  Mosaic AI Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database.  The Mosaic AI Vector Search Endpoint is used to create and access vector search indexes.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.VectorSearchEndpoint(\"this\", {\n    name: \"vector-search-test\",\n    endpointType: \"STANDARD\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.VectorSearchEndpoint(\"this\",\n    name=\"vector-search-test\",\n    endpoint_type=\"STANDARD\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.VectorSearchEndpoint(\"this\", new()\n    {\n        Name = \"vector-search-test\",\n        EndpointType = \"STANDARD\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewVectorSearchEndpoint(ctx, \"this\", \u0026databricks.VectorSearchEndpointArgs{\n\t\t\tName:         pulumi.String(\"vector-search-test\"),\n\t\t\tEndpointType: pulumi.String(\"STANDARD\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.VectorSearchEndpoint;\nimport com.pulumi.databricks.VectorSearchEndpointArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new VectorSearchEndpoint(\"this\", VectorSearchEndpointArgs.builder()\n            .name(\"vector-search-test\")\n            .endpointType(\"STANDARD\")\n            .build());\n\n    
}\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:VectorSearchEndpoint\n    properties:\n      name: vector-search-test\n      endpointType: STANDARD\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"creationTimestamp":{"type":"integer","description":"Timestamp of endpoint creation (milliseconds).\n"},"creator":{"type":"string","description":"Creator of the endpoint.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"endpointId":{"type":"string","description":"Unique internal identifier of the endpoint (UUID).\n"},"endpointStatuses":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchEndpointEndpointStatus:VectorSearchEndpointEndpointStatus"},"description":"Object describing the current status of the endpoint consisting of the following fields:\n"},"endpointType":{"type":"string","description":"Type of Mosaic AI Vector Search Endpoint.  Currently only accepting single value: `STANDARD` (See [documentation](https://docs.databricks.com/api/workspace/vectorsearchendpoints/createendpoint) for the list of currently supported values). (Change leads to recreation of the resource).\n"},"lastUpdatedTimestamp":{"type":"integer","description":"Timestamp of the last update to the endpoint (milliseconds).\n"},"lastUpdatedUser":{"type":"string","description":"User who last updated the endpoint.\n"},"name":{"type":"string","description":"Name of the Mosaic AI Vector Search Endpoint to create. (Change leads to recreation of the resource).\n"},"numIndexes":{"type":"integer","description":"Number of indexes on the endpoint.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchEndpointProviderConfig:VectorSearchEndpointProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"required":["budgetPolicyId","creationTimestamp","creator","effectiveBudgetPolicyId","endpointId","endpointStatuses","endpointType","lastUpdatedTimestamp","lastUpdatedUser","name","numIndexes"],"inputProperties":{"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"endpointType":{"type":"string","description":"Type of Mosaic AI Vector Search Endpoint.  Currently only accepting single value: `STANDARD` (See [documentation](https://docs.databricks.com/api/workspace/vectorsearchendpoints/createendpoint) for the list of currently supported values). (Change leads to recreation of the resource).\n","willReplaceOnChanges":true},"name":{"type":"string","description":"Name of the Mosaic AI Vector Search Endpoint to create. (Change leads to recreation of the resource).\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchEndpointProviderConfig:VectorSearchEndpointProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"}},"requiredInputs":["endpointType"],"stateInputs":{"description":"Input properties used for looking up and filtering VectorSearchEndpoint resources.\n","properties":{"budgetPolicyId":{"type":"string","description":"The Budget Policy ID set for this resource.\n"},"creationTimestamp":{"type":"integer","description":"Timestamp of endpoint creation (milliseconds).\n"},"creator":{"type":"string","description":"Creator of the endpoint.\n"},"effectiveBudgetPolicyId":{"type":"string","description":"The effective budget policy ID.\n"},"endpointId":{"type":"string","description":"Unique internal identifier of the endpoint (UUID).\n"},"endpointStatuses":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchEndpointEndpointStatus:VectorSearchEndpointEndpointStatus"},"description":"Object describing the current status of the endpoint consisting of the following fields:\n"},"endpointType":{"type":"string","description":"Type of Mosaic AI Vector Search Endpoint.  Currently only accepting single value: `STANDARD` (See [documentation](https://docs.databricks.com/api/workspace/vectorsearchendpoints/createendpoint) for the list of currently supported values). (Change leads to recreation of the resource).\n","willReplaceOnChanges":true},"lastUpdatedTimestamp":{"type":"integer","description":"Timestamp of the last update to the endpoint (milliseconds).\n"},"lastUpdatedUser":{"type":"string","description":"User who last updated the endpoint.\n"},"name":{"type":"string","description":"Name of the Mosaic AI Vector Search Endpoint to create. (Change leads to recreation of the resource).\n","willReplaceOnChanges":true},"numIndexes":{"type":"integer","description":"Number of indexes on the endpoint.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchEndpointProviderConfig:VectorSearchEndpointProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/vectorSearchIndex:VectorSearchIndex":{"description":"This resource allows you to create [Mosaic AI Vector Search Index](https://docs.databricks.com/en/generative-ai/create-query-vector-search.html) in Databricks.  Mosaic AI Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database.  
The Mosaic AI Vector Search Index provides the ability to search data in the linked Delta Table.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sync = new databricks.VectorSearchIndex(\"sync\", {\n    name: \"main.default.vector_search_index\",\n    endpointName: thisDatabricksVectorSearchEndpoint.name,\n    primaryKey: \"id\",\n    indexType: \"DELTA_SYNC\",\n    deltaSyncIndexSpec: {\n        sourceTable: \"main.default.source_table\",\n        pipelineType: \"TRIGGERED\",\n        embeddingSourceColumns: [{\n            name: \"text\",\n            embeddingModelEndpointName: _this.name,\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsync = databricks.VectorSearchIndex(\"sync\",\n    name=\"main.default.vector_search_index\",\n    endpoint_name=this_databricks_vector_search_endpoint[\"name\"],\n    primary_key=\"id\",\n    index_type=\"DELTA_SYNC\",\n    delta_sync_index_spec={\n        \"source_table\": \"main.default.source_table\",\n        \"pipeline_type\": \"TRIGGERED\",\n        \"embedding_source_columns\": [{\n            \"name\": \"text\",\n            \"embedding_model_endpoint_name\": this[\"name\"],\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sync = new Databricks.VectorSearchIndex(\"sync\", new()\n    {\n        Name = \"main.default.vector_search_index\",\n        EndpointName = thisDatabricksVectorSearchEndpoint.Name,\n        PrimaryKey = \"id\",\n        IndexType = \"DELTA_SYNC\",\n        DeltaSyncIndexSpec = new Databricks.Inputs.VectorSearchIndexDeltaSyncIndexSpecArgs\n        {\n            SourceTable = \"main.default.source_table\",\n            PipelineType = \"TRIGGERED\",\n            EmbeddingSourceColumns = new[]\n            {\n                new Databricks.Inputs.VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumnArgs\n                {\n                    Name = \"text\",\n                    EmbeddingModelEndpointName = @this.Name,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewVectorSearchIndex(ctx, \"sync\", \u0026databricks.VectorSearchIndexArgs{\n\t\t\tName:         pulumi.String(\"main.default.vector_search_index\"),\n\t\t\tEndpointName: pulumi.Any(thisDatabricksVectorSearchEndpoint.Name),\n\t\t\tPrimaryKey:   pulumi.String(\"id\"),\n\t\t\tIndexType:    pulumi.String(\"DELTA_SYNC\"),\n\t\t\tDeltaSyncIndexSpec: \u0026databricks.VectorSearchIndexDeltaSyncIndexSpecArgs{\n\t\t\t\tSourceTable:  pulumi.String(\"main.default.source_table\"),\n\t\t\t\tPipelineType: pulumi.String(\"TRIGGERED\"),\n\t\t\t\tEmbeddingSourceColumns: databricks.VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumnArray{\n\t\t\t\t\t\u0026databricks.VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumnArgs{\n\t\t\t\t\t\tName:                       pulumi.String(\"text\"),\n\t\t\t\t\t\tEmbeddingModelEndpointName: pulumi.Any(this.Name),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err 
!= nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.VectorSearchIndex;\nimport com.pulumi.databricks.VectorSearchIndexArgs;\nimport com.pulumi.databricks.inputs.VectorSearchIndexDeltaSyncIndexSpecArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sync = new VectorSearchIndex(\"sync\", VectorSearchIndexArgs.builder()\n            .name(\"main.default.vector_search_index\")\n            .endpointName(thisDatabricksVectorSearchEndpoint.name())\n            .primaryKey(\"id\")\n            .indexType(\"DELTA_SYNC\")\n            .deltaSyncIndexSpec(VectorSearchIndexDeltaSyncIndexSpecArgs.builder()\n                .sourceTable(\"main.default.source_table\")\n                .pipelineType(\"TRIGGERED\")\n                .embeddingSourceColumns(VectorSearchIndexDeltaSyncIndexSpecEmbeddingSourceColumnArgs.builder()\n                    .name(\"text\")\n                    .embeddingModelEndpointName(this_.name())\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sync:\n    type: databricks:VectorSearchIndex\n    properties:\n      name: main.default.vector_search_index\n      endpointName: ${thisDatabricksVectorSearchEndpoint.name}\n      primaryKey: id\n      indexType: DELTA_SYNC\n      deltaSyncIndexSpec:\n        sourceTable: main.default.source_table\n        pipelineType: TRIGGERED\n        embeddingSourceColumns:\n          - name: text\n            embeddingModelEndpointName: ${this.name}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"creator":{"type":"string","description":"Creator of the endpoint.\n"},"deltaSyncIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDeltaSyncIndexSpec:VectorSearchIndexDeltaSyncIndexSpec","description":"Specification for Delta Sync Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DELTA_SYNC`. This field is a block and is documented below.\n"},"directAccessIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDirectAccessIndexSpec:VectorSearchIndexDirectAccessIndexSpec","description":"Specification for Direct Vector Access Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DIRECT_ACCESS`. This field is a block and is documented below.\n"},"endpointName":{"type":"string","description":"The name of the Mosaic AI Vector Search Endpoint that will be used for indexing the data.\n"},"indexType":{"type":"string","description":"Mosaic AI Vector Search index type. 
Currently supported values are:\n* `DELTA_SYNC`: An index that automatically syncs with a source Delta Table, automatically and incrementally updating the index as the underlying data in the Delta Table changes.\n* `DIRECT_ACCESS`: An index that supports the direct read and write of vectors and metadata through our REST and SDK APIs. With this model, the user manages index updates.\n"},"name":{"type":"string","description":"Three-level name of the Mosaic AI Vector Search Index to create (`catalog.schema.index_name`).\n"},"primaryKey":{"type":"string","description":"The column name that will be used as a primary key.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchIndexProviderConfig:VectorSearchIndexProviderConfig"},"statuses":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexStatus:VectorSearchIndexStatus"},"description":"Object describing the current status of the index consisting of the following fields:\n"}},"required":["creator","endpointName","indexType","name","primaryKey","statuses"],"inputProperties":{"deltaSyncIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDeltaSyncIndexSpec:VectorSearchIndexDeltaSyncIndexSpec","description":"Specification for Delta Sync Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DELTA_SYNC`. This field is a block and is documented below.\n","willReplaceOnChanges":true},"directAccessIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDirectAccessIndexSpec:VectorSearchIndexDirectAccessIndexSpec","description":"Specification for Direct Vector Access Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DIRECT_ACCESS`. This field is a block and is documented below.\n","willReplaceOnChanges":true},"endpointName":{"type":"string","description":"The name of the Mosaic AI Vector Search Endpoint that will be used for indexing the data.\n","willReplaceOnChanges":true},"indexType":{"type":"string","description":"Mosaic AI Vector Search index type. Currently supported values are:\n* `DELTA_SYNC`: An index that automatically syncs with a source Delta Table, automatically and incrementally updating the index as the underlying data in the Delta Table changes.\n* `DIRECT_ACCESS`: An index that supports the direct read and write of vectors and metadata through our REST and SDK APIs. 
With this model, the user manages index updates.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"Three-level name of the Mosaic AI Vector Search Index to create (`catalog.schema.index_name`).\n","willReplaceOnChanges":true},"primaryKey":{"type":"string","description":"The column name that will be used as a primary key.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchIndexProviderConfig:VectorSearchIndexProviderConfig","willReplaceOnChanges":true}},"requiredInputs":["endpointName","indexType","primaryKey"],"stateInputs":{"description":"Input properties used for looking up and filtering VectorSearchIndex resources.\n","properties":{"creator":{"type":"string","description":"Creator of the endpoint.\n"},"deltaSyncIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDeltaSyncIndexSpec:VectorSearchIndexDeltaSyncIndexSpec","description":"Specification for Delta Sync Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DELTA_SYNC`. This field is a block and is documented below.\n","willReplaceOnChanges":true},"directAccessIndexSpec":{"$ref":"#/types/databricks:index/VectorSearchIndexDirectAccessIndexSpec:VectorSearchIndexDirectAccessIndexSpec","description":"Specification for Direct Vector Access Index. Required if \u003cspan pulumi-lang-nodejs=\"`indexType`\" pulumi-lang-dotnet=\"`IndexType`\" pulumi-lang-go=\"`indexType`\" pulumi-lang-python=\"`index_type`\" pulumi-lang-yaml=\"`indexType`\" pulumi-lang-java=\"`indexType`\"\u003e`index_type`\u003c/span\u003e is `DIRECT_ACCESS`. This field is a block and is documented below.\n","willReplaceOnChanges":true},"endpointName":{"type":"string","description":"The name of the Mosaic AI Vector Search Endpoint that will be used for indexing the data.\n","willReplaceOnChanges":true},"indexType":{"type":"string","description":"Mosaic AI Vector Search index type. Currently supported values are:\n* `DELTA_SYNC`: An index that automatically syncs with a source Delta Table, automatically and incrementally updating the index as the underlying data in the Delta Table changes.\n* `DIRECT_ACCESS`: An index that supports the direct read and write of vectors and metadata through our REST and SDK APIs. With this model, the user manages index updates.\n","willReplaceOnChanges":true},"name":{"type":"string","description":"Three-level name of the Mosaic AI Vector Search Index to create (`catalog.schema.index_name`).\n","willReplaceOnChanges":true},"primaryKey":{"type":"string","description":"The column name that will be used as a primary key.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/VectorSearchIndexProviderConfig:VectorSearchIndexProviderConfig","willReplaceOnChanges":true},"statuses":{"type":"array","items":{"$ref":"#/types/databricks:index/VectorSearchIndexStatus:VectorSearchIndexStatus"},"description":"Object describing the current status of the index consisting of the following fields:\n"}},"type":"object"}},"databricks:index/volume:Volume":{"description":"Volumes are Unity Catalog objects representing a logical volume of storage in a cloud object storage location. Volumes provide capabilities for accessing, storing, governing, and organizing files. 
While tables provide governance over tabular datasets, volumes add governance over non-tabular datasets. You can use volumes to store and access files in any format, including structured, semi-structured, and unstructured data.\n\n\u003e This resource can only be used with a workspace-level provider!\n\nA volume resides in the third layer of Unity Catalog's three-level namespace. Volumes are siblings to tables, views, and other objects organized under a schema in Unity Catalog.\n\nA volume can be **managed** or **external**.\n\nA **managed volume** is a Unity Catalog-governed storage volume created within the default storage location of the containing schema. Managed volumes allow the creation of governed storage for working with files without the overhead of external locations and storage credentials. You do not need to specify a location when creating a managed volume, and all file access for data in managed volumes is through paths managed by Unity Catalog.\n\nAn **external volume** is a Unity Catalog-governed storage volume registered against a directory within an external location.\n\nA volume can be referenced using its identifier: ```\u003ccatalogName\u003e.\u003cschemaName\u003e.\u003cvolumeName\u003e```, where:\n\n* ```\u003ccatalogName\u003e```: The name of the catalog containing the Volume.\n* ```\u003cschemaName\u003e```: The name of the schema containing the Volume.\n* ```\u003cvolumeName\u003e```: The name of the Volume. It identifies the volume object.\n\nThe path to access files in volumes uses the following format:\n\n```/Volumes/\u003ccatalog\u003e/\u003cschema\u003e/\u003cvolume\u003e/\u003cpath\u003e/\u003cfile_name\u003e```\n\nDatabricks also supports an optional ```dbfs:/``` scheme, so the following path also works:\n\n```dbfs:/Volumes/\u003ccatalog\u003e/\u003cschema\u003e/\u003cvolume\u003e/\u003cpath\u003e/\u003cfile_name\u003e```\n\nThis resource manages Volumes in Unity Catalog.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    comment: \"this catalog is managed by terraform\",\n    properties: {\n        purpose: \"testing\",\n    },\n});\nconst things = new databricks.Schema(\"things\", {\n    catalogName: sandbox.name,\n    name: \"things\",\n    comment: \"this schema is managed by terraform\",\n    properties: {\n        kind: \"various\",\n    },\n});\nconst external = new databricks.StorageCredential(\"external\", {\n    name: \"creds\",\n    awsIamRole: {\n        roleArn: externalDataAccess.arn,\n    },\n});\nconst some = new databricks.ExternalLocation(\"some\", {\n    name: \"external_location\",\n    url: `s3://${externalAwsS3Bucket.id}/some`,\n    credentialName: external.name,\n});\nconst _this = new databricks.Volume(\"this\", {\n    name: \"quickstart_volume\",\n    catalogName: sandbox.name,\n    schemaName: things.name,\n    volumeType: \"EXTERNAL\",\n    storageLocation: some.url,\n    comment: \"this volume is managed by terraform\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    comment=\"this catalog is managed by terraform\",\n    properties={\n        \"purpose\": \"testing\",\n    })\nthings = databricks.Schema(\"things\",\n    catalog_name=sandbox.name,\n    name=\"things\",\n    comment=\"this schema is managed by 
terraform\",\n    properties={\n        \"kind\": \"various\",\n    })\nexternal = databricks.StorageCredential(\"external\",\n    name=\"creds\",\n    aws_iam_role={\n        \"role_arn\": external_data_access[\"arn\"],\n    })\nsome = databricks.ExternalLocation(\"some\",\n    name=\"external_location\",\n    url=f\"s3://{external_aws_s3_bucket['id']}/some\",\n    credential_name=external.name)\nthis = databricks.Volume(\"this\",\n    name=\"quickstart_volume\",\n    catalog_name=sandbox.name,\n    schema_name=things.name,\n    volume_type=\"EXTERNAL\",\n    storage_location=some.url,\n    comment=\"this volume is managed by terraform\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        Comment = \"this catalog is managed by terraform\",\n        Properties = \n        {\n            { \"purpose\", \"testing\" },\n        },\n    });\n\n    var things = new Databricks.Schema(\"things\", new()\n    {\n        CatalogName = sandbox.Name,\n        Name = \"things\",\n        Comment = \"this schema is managed by terraform\",\n        Properties = \n        {\n            { \"kind\", \"various\" },\n        },\n    });\n\n    var external = new Databricks.StorageCredential(\"external\", new()\n    {\n        Name = \"creds\",\n        AwsIamRole = new Databricks.Inputs.StorageCredentialAwsIamRoleArgs\n        {\n            RoleArn = externalDataAccess.Arn,\n        },\n    });\n\n    var some = new Databricks.ExternalLocation(\"some\", new()\n    {\n        Name = \"external_location\",\n        Url = $\"s3://{externalAwsS3Bucket.Id}/some\",\n        CredentialName = external.Name,\n    });\n\n    var @this = new Databricks.Volume(\"this\", new()\n    {\n        Name = \"quickstart_volume\",\n        CatalogName = sandbox.Name,\n        SchemaName = things.Name,\n        VolumeType = \"EXTERNAL\",\n        StorageLocation = some.Url,\n        Comment = \"this volume is managed by terraform\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", \u0026databricks.CatalogArgs{\n\t\t\tName:    pulumi.String(\"sandbox\"),\n\t\t\tComment: pulumi.String(\"this catalog is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"purpose\": pulumi.String(\"testing\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthings, err := databricks.NewSchema(ctx, \"things\", \u0026databricks.SchemaArgs{\n\t\t\tCatalogName: sandbox.Name,\n\t\t\tName:        pulumi.String(\"things\"),\n\t\t\tComment:     pulumi.String(\"this schema is managed by terraform\"),\n\t\t\tProperties: pulumi.StringMap{\n\t\t\t\t\"kind\": pulumi.String(\"various\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\texternal, err := databricks.NewStorageCredential(ctx, \"external\", \u0026databricks.StorageCredentialArgs{\n\t\t\tName: pulumi.String(\"creds\"),\n\t\t\tAwsIamRole: \u0026databricks.StorageCredentialAwsIamRoleArgs{\n\t\t\t\tRoleArn: pulumi.Any(externalDataAccess.Arn),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tsome, err := databricks.NewExternalLocation(ctx, \"some\", 
\u0026databricks.ExternalLocationArgs{\n\t\t\tName:           pulumi.String(\"external_location\"),\n\t\t\tUrl:            pulumi.Sprintf(\"s3://%v/some\", externalAwsS3Bucket.Id),\n\t\t\tCredentialName: external.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewVolume(ctx, \"this\", \u0026databricks.VolumeArgs{\n\t\t\tName:            pulumi.String(\"quickstart_volume\"),\n\t\t\tCatalogName:     sandbox.Name,\n\t\t\tSchemaName:      things.Name,\n\t\t\tVolumeType:      pulumi.String(\"EXTERNAL\"),\n\t\t\tStorageLocation: some.Url,\n\t\t\tComment:         pulumi.String(\"this volume is managed by terraform\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.Schema;\nimport com.pulumi.databricks.SchemaArgs;\nimport com.pulumi.databricks.StorageCredential;\nimport com.pulumi.databricks.StorageCredentialArgs;\nimport com.pulumi.databricks.inputs.StorageCredentialAwsIamRoleArgs;\nimport com.pulumi.databricks.ExternalLocation;\nimport com.pulumi.databricks.ExternalLocationArgs;\nimport com.pulumi.databricks.Volume;\nimport com.pulumi.databricks.VolumeArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .comment(\"this catalog is managed by terraform\")\n            .properties(Map.of(\"purpose\", \"testing\"))\n            .build());\n\n        var things = new Schema(\"things\", SchemaArgs.builder()\n            .catalogName(sandbox.name())\n            .name(\"things\")\n            .comment(\"this schema is managed by terraform\")\n            .properties(Map.of(\"kind\", \"various\"))\n            .build());\n\n        var external = new StorageCredential(\"external\", StorageCredentialArgs.builder()\n            .name(\"creds\")\n            .awsIamRole(StorageCredentialAwsIamRoleArgs.builder()\n                .roleArn(externalDataAccess.arn())\n                .build())\n            .build());\n\n        var some = new ExternalLocation(\"some\", ExternalLocationArgs.builder()\n            .name(\"external_location\")\n            .url(String.format(\"s3://%s/some\", externalAwsS3Bucket.id()))\n            .credentialName(external.name())\n            .build());\n\n        var this_ = new Volume(\"this\", VolumeArgs.builder()\n            .name(\"quickstart_volume\")\n            .catalogName(sandbox.name())\n            .schemaName(things.name())\n            .volumeType(\"EXTERNAL\")\n            .storageLocation(some.url())\n            .comment(\"this volume is managed by terraform\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      comment: this catalog is managed by terraform\n      properties:\n        purpose: testing\n  things:\n    type: databricks:Schema\n    properties:\n      catalogName: ${sandbox.name}\n      name: things\n      comment: this schema is managed by terraform\n      properties:\n      
  kind: various\n  external:\n    type: databricks:StorageCredential\n    properties:\n      name: creds\n      awsIamRole:\n        roleArn: ${externalDataAccess.arn}\n  some:\n    type: databricks:ExternalLocation\n    properties:\n      name: external_location\n      url: s3://${externalAwsS3Bucket.id}/some\n      credentialName: ${external.name}\n  this:\n    type: databricks:Volume\n    properties:\n      name: quickstart_volume\n      catalogName: ${sandbox.name}\n      schemaName: ${things.name}\n      volumeType: EXTERNAL\n      storageLocation: ${some.url}\n      comment: this volume is managed by terraform\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"catalogName":{"type":"string","description":"Name of parent Catalog. Change forces creation of a new resource.\n"},"comment":{"type":"string","description":"Free-form text.\n"},"name":{"type":"string","description":"Name of the Volume\n"},"owner":{"type":"string","description":"Name of the volume owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VolumeProviderConfig:VolumeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces creation of a new resource.\n"},"storageLocation":{"type":"string","description":"URL for the volume (should be inside of an existing External Location). Only used for `EXTERNAL` Volumes.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource.\n"},"volumePath":{"type":"string","description":"base file path for this Unity Catalog Volume in form of `/Volumes/\u003ccatalog\u003e/\u003cschema\u003e/\u003cname\u003e`.\n"},"volumeType":{"type":"string","description":"Volume type. `EXTERNAL` or `MANAGED`. Change forces creation of a new resource.\n"}},"required":["catalogName","name","owner","schemaName","volumePath","volumeType"],"inputProperties":{"catalogName":{"type":"string","description":"Name of parent Catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"Free-form text.\n"},"name":{"type":"string","description":"Name of the Volume\n"},"owner":{"type":"string","description":"Name of the volume owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VolumeProviderConfig:VolumeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"URL for the volume (should be inside of an existing External Location). Only used for `EXTERNAL` Volumes.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource.\n","willReplaceOnChanges":true},"volumeType":{"type":"string","description":"Volume type. `EXTERNAL` or `MANAGED`. 
Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"requiredInputs":["catalogName","schemaName","volumeType"],"stateInputs":{"description":"Input properties used for looking up and filtering Volume resources.\n","properties":{"catalogName":{"type":"string","description":"Name of parent Catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"comment":{"type":"string","description":"Free-form text.\n"},"name":{"type":"string","description":"Name of the Volume\n"},"owner":{"type":"string","description":"Name of the volume owner.\n"},"providerConfig":{"$ref":"#/types/databricks:index/VolumeProviderConfig:VolumeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of parent Schema relative to parent Catalog. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"storageLocation":{"type":"string","description":"URL for the volume (should be inside of an existing External Location). Only used for `EXTERNAL` Volumes.  If the URL contains special characters, such as space, `\u0026`, etc., they should be percent-encoded (space \u003e `%20`, etc.). Change forces creation of a new resource.\n","willReplaceOnChanges":true},"volumePath":{"type":"string","description":"base file path for this Unity Catalog Volume in form of `/Volumes/\u003ccatalog\u003e/\u003cschema\u003e/\u003cname\u003e`.\n"},"volumeType":{"type":"string","description":"Volume type. `EXTERNAL` or `MANAGED`. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/warehousesDefaultWarehouseOverride:WarehousesDefaultWarehouseOverride":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThe Default Warehouse Override resource allows you to configure a user's default warehouse selection behavior in Databricks SQL. 
This resource enables customization of how a user's default warehouse is selected for SQL operations.\n\nUsers can configure their default warehouse to either:\n- Remember their last-selected warehouse (`LAST_SELECTED` type)\n- Use a specific warehouse (`CUSTOM` type with a warehouse ID)\n\n\u003e **Note** The \u003cspan pulumi-lang-nodejs=\"`defaultWarehouseOverrideId`\" pulumi-lang-dotnet=\"`DefaultWarehouseOverrideId`\" pulumi-lang-go=\"`defaultWarehouseOverrideId`\" pulumi-lang-python=\"`default_warehouse_override_id`\" pulumi-lang-yaml=\"`defaultWarehouseOverrideId`\" pulumi-lang-java=\"`defaultWarehouseOverrideId`\"\u003e`default_warehouse_override_id`\u003c/span\u003e field represents the **user ID** of the user whose default warehouse behavior is being configured.\n\n## Example Usage\n\n### Basic Example with Last Selected Type\nThis example creates a default warehouse override that remembers the user's last-selected warehouse.\nThe \u003cspan pulumi-lang-nodejs=\"`defaultWarehouseOverrideId`\" pulumi-lang-dotnet=\"`DefaultWarehouseOverrideId`\" pulumi-lang-go=\"`defaultWarehouseOverrideId`\" pulumi-lang-python=\"`default_warehouse_override_id`\" pulumi-lang-yaml=\"`defaultWarehouseOverrideId`\" pulumi-lang-java=\"`defaultWarehouseOverrideId`\"\u003e`default_warehouse_override_id`\u003c/span\u003e represents the user ID of the target user:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst lastSelected = new databricks.DefaultWarehouseOverride(\"last_selected\", {\n    defaultWarehouseOverrideId: example.id,\n    type: \"LAST_SELECTED\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nlast_selected = databricks.DefaultWarehouseOverride(\"last_selected\",\n    default_warehouse_override_id=example.id,\n    type=\"LAST_SELECTED\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var lastSelected = new Databricks.DefaultWarehouseOverride(\"last_selected\", new()\n    {\n        DefaultWarehouseOverrideId = example.Id,\n        Type = \"LAST_SELECTED\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDefaultWarehouseOverride(ctx, \"last_selected\", \u0026databricks.DefaultWarehouseOverrideArgs{\n\t\t\tDefaultWarehouseOverrideId: pulumi.Any(example.Id),\n\t\t\tType:                       pulumi.String(\"LAST_SELECTED\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DefaultWarehouseOverride;\nimport com.pulumi.databricks.DefaultWarehouseOverrideArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var lastSelected = new DefaultWarehouseOverride(\"lastSelected\", DefaultWarehouseOverrideArgs.builder()\n            .defaultWarehouseOverrideId(example.id())\n            .type(\"LAST_SELECTED\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  lastSelected:\n    type: databricks:DefaultWarehouseOverride\n    name: last_selected\n    properties:\n      defaultWarehouseOverrideId: ${example.id}\n      type: LAST_SELECTED\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Custom Warehouse Example\nThis example creates a default warehouse override that always uses a specific warehouse:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst custom = new databricks.DefaultWarehouseOverride(\"custom\", {\n    defaultWarehouseOverrideId: example.id,\n    type: \"CUSTOM\",\n    warehouseId: exampleDatabricksSqlEndpoint.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustom = databricks.DefaultWarehouseOverride(\"custom\",\n    default_warehouse_override_id=example.id,\n    type=\"CUSTOM\",\n    warehouse_id=example_databricks_sql_endpoint.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var custom = new Databricks.DefaultWarehouseOverride(\"custom\", new()\n    {\n        DefaultWarehouseOverrideId = example.Id,\n        Type = \"CUSTOM\",\n        WarehouseId = exampleDatabricksSqlEndpoint.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewDefaultWarehouseOverride(ctx, \"custom\", \u0026databricks.DefaultWarehouseOverrideArgs{\n\t\t\tDefaultWarehouseOverrideId: pulumi.Any(example.Id),\n\t\t\tType:                       pulumi.String(\"CUSTOM\"),\n\t\t\tWarehouseId:                pulumi.Any(exampleDatabricksSqlEndpoint.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DefaultWarehouseOverride;\nimport com.pulumi.databricks.DefaultWarehouseOverrideArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var custom = new DefaultWarehouseOverride(\"custom\", DefaultWarehouseOverrideArgs.builder()\n            .defaultWarehouseOverrideId(example.id())\n            .type(\"CUSTOM\")\n            .warehouseId(exampleDatabricksSqlEndpoint.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  custom:\n    type: databricks:DefaultWarehouseOverride\n    properties:\n      defaultWarehouseOverrideId: ${example.id}\n      type: CUSTOM\n      warehouseId: ${exampleDatabricksSqlEndpoint.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"defaultWarehouseOverrideId":{"type":"string","description":"The ID component of the resource name (user ID)\n"},"name":{"type":"string","description":"(string) - The resource name of the default warehouse override.\nFormat: 
default-warehouse-overrides/{default_warehouse_override_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/WarehousesDefaultWarehouseOverrideProviderConfig:WarehousesDefaultWarehouseOverrideProviderConfig","description":"Configure the provider for management through account provider.\n"},"type":{"type":"string","description":"The type of override behavior. Possible values are: `CUSTOM`, `LAST_SELECTED`\n"},"warehouseId":{"type":"string","description":"The specific warehouse ID when type is CUSTOM.\nNot set for LAST_SELECTED type\n"}},"required":["defaultWarehouseOverrideId","name","type"],"inputProperties":{"defaultWarehouseOverrideId":{"type":"string","description":"The ID component of the resource name (user ID)\n"},"providerConfig":{"$ref":"#/types/databricks:index/WarehousesDefaultWarehouseOverrideProviderConfig:WarehousesDefaultWarehouseOverrideProviderConfig","description":"Configure the provider for management through account provider.\n"},"type":{"type":"string","description":"The type of override behavior. Possible values are: `CUSTOM`, `LAST_SELECTED`\n"},"warehouseId":{"type":"string","description":"The specific warehouse ID when type is CUSTOM.\nNot set for LAST_SELECTED type\n"}},"requiredInputs":["defaultWarehouseOverrideId","type"],"stateInputs":{"description":"Input properties used for looking up and filtering WarehousesDefaultWarehouseOverride resources.\n","properties":{"defaultWarehouseOverrideId":{"type":"string","description":"The ID component of the resource name (user ID)\n"},"name":{"type":"string","description":"(string) - The resource name of the default warehouse override.\nFormat: default-warehouse-overrides/{default_warehouse_override_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/WarehousesDefaultWarehouseOverrideProviderConfig:WarehousesDefaultWarehouseOverrideProviderConfig","description":"Configure the provider for management through account provider.\n"},"type":{"type":"string","description":"The type of override behavior. Possible values are: `CUSTOM`, `LAST_SELECTED`\n"},"warehouseId":{"type":"string","description":"The specific warehouse ID when type is CUSTOM.\nNot set for LAST_SELECTED type\n"}},"type":"object"}},"databricks:index/workspaceBinding:WorkspaceBinding":{"description":"If you use workspaces to isolate user data access, you may want to limit access to catalog, external locations or storage credentials from specific workspaces in your account, also known as workspace binding\n\n\u003e This resource can only be used with a workspace-level provider!\n\nBy default, Databricks assigns the securable to all workspaces attached to the current metastore. 
By using \u003cspan pulumi-lang-nodejs=\"`databricks.WorkspaceBinding`\" pulumi-lang-dotnet=\"`databricks.WorkspaceBinding`\" pulumi-lang-go=\"`WorkspaceBinding`\" pulumi-lang-python=\"`WorkspaceBinding`\" pulumi-lang-yaml=\"`databricks.WorkspaceBinding`\" pulumi-lang-java=\"`databricks.WorkspaceBinding`\"\u003e`databricks.WorkspaceBinding`\u003c/span\u003e, the securable will be unassigned from all workspaces and only assigned explicitly using this resource.\n\n\u003e To use this resource, the securable must have its isolation mode set to `ISOLATED` (for databricks_catalog) or `ISOLATION_MODE_ISOLATED` (for databricks_external_location,\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eor databricks_credential) for the \u003cspan pulumi-lang-nodejs=\"`isolationMode`\" pulumi-lang-dotnet=\"`IsolationMode`\" pulumi-lang-go=\"`isolationMode`\" pulumi-lang-python=\"`isolation_mode`\" pulumi-lang-yaml=\"`isolationMode`\" pulumi-lang-java=\"`isolationMode`\"\u003e`isolation_mode`\u003c/span\u003e attribute. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration), [this guide](https://docs.databricks.com/en/connect/unity-catalog/external-locations.html#workspace-binding) or [this guide](https://docs.databricks.com/en/connect/unity-catalog/storage-credentials.html#optional-assign-a-storage-credential-to-specific-workspaces).\n\n\u003e If the securable's isolation mode was set to `ISOLATED` using Pulumi, then the securable will have been automatically bound to the workspace it was created from.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = new databricks.Catalog(\"sandbox\", {\n    name: \"sandbox\",\n    isolationMode: \"ISOLATED\",\n});\nconst sandboxWorkspaceBinding = new databricks.WorkspaceBinding(\"sandbox\", {\n    securableName: sandbox.name,\n    workspaceId: other.workspaceId,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.Catalog(\"sandbox\",\n    name=\"sandbox\",\n    isolation_mode=\"ISOLATED\")\nsandbox_workspace_binding = databricks.WorkspaceBinding(\"sandbox\",\n    securable_name=sandbox.name,\n    workspace_id=other[\"workspaceId\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = new Databricks.Catalog(\"sandbox\", new()\n    {\n        Name = \"sandbox\",\n        IsolationMode = \"ISOLATED\",\n    });\n\n    var sandboxWorkspaceBinding = new Databricks.WorkspaceBinding(\"sandbox\", new()\n    {\n        SecurableName = sandbox.Name,\n        WorkspaceId = other.WorkspaceId,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.NewCatalog(ctx, \"sandbox\", 
\u0026databricks.CatalogArgs{\n\t\t\tName:          pulumi.String(\"sandbox\"),\n\t\t\tIsolationMode: pulumi.String(\"ISOLATED\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewWorkspaceBinding(ctx, \"sandbox\", \u0026databricks.WorkspaceBindingArgs{\n\t\t\tSecurableName: sandbox.Name,\n\t\t\tWorkspaceId:   pulumi.Any(other.WorkspaceId),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.Catalog;\nimport com.pulumi.databricks.CatalogArgs;\nimport com.pulumi.databricks.WorkspaceBinding;\nimport com.pulumi.databricks.WorkspaceBindingArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var sandbox = new Catalog(\"sandbox\", CatalogArgs.builder()\n            .name(\"sandbox\")\n            .isolationMode(\"ISOLATED\")\n            .build());\n\n        var sandboxWorkspaceBinding = new WorkspaceBinding(\"sandboxWorkspaceBinding\", WorkspaceBindingArgs.builder()\n            .securableName(sandbox.name())\n            .workspaceId(other.workspaceId())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  sandbox:\n    type: databricks:Catalog\n    properties:\n      name: sandbox\n      isolationMode: ISOLATED\n  sandboxWorkspaceBinding:\n    type: databricks:WorkspaceBinding\n    name: sandbox\n    properties:\n      securableName: ${sandbox.name}\n      workspaceId: ${other.workspaceId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Migration from\u003cspan pulumi-lang-nodejs=\" databricks.CatalogWorkspaceBinding\n\" pulumi-lang-dotnet=\" databricks.CatalogWorkspaceBinding\n\" pulumi-lang-go=\" CatalogWorkspaceBinding\n\" pulumi-lang-python=\" CatalogWorkspaceBinding\n\" pulumi-lang-yaml=\" databricks.CatalogWorkspaceBinding\n\" pulumi-lang-java=\" databricks.CatalogWorkspaceBinding\n\"\u003e databricks.CatalogWorkspaceBinding\n\u003c/span\u003e\nYou can migrate from the deprecated \u003cspan pulumi-lang-nodejs=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-dotnet=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-go=\"`CatalogWorkspaceBinding`\" pulumi-lang-python=\"`CatalogWorkspaceBinding`\" pulumi-lang-yaml=\"`databricks.CatalogWorkspaceBinding`\" pulumi-lang-java=\"`databricks.CatalogWorkspaceBinding`\"\u003e`databricks.CatalogWorkspaceBinding`\u003c/span\u003e to \u003cspan pulumi-lang-nodejs=\"`databricks.WorkspaceBinding`\" pulumi-lang-dotnet=\"`databricks.WorkspaceBinding`\" pulumi-lang-go=\"`WorkspaceBinding`\" pulumi-lang-python=\"`WorkspaceBinding`\" pulumi-lang-yaml=\"`databricks.WorkspaceBinding`\" pulumi-lang-java=\"`databricks.WorkspaceBinding`\"\u003e`databricks.WorkspaceBinding`\u003c/span\u003e without re-binding catalog.\n\n","properties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. 
Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`.\n"},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead."},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceBindingProviderConfig:WorkspaceBindingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. Can be \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`externalLocation`\" pulumi-lang-dotnet=\"`ExternalLocation`\" pulumi-lang-go=\"`externalLocation`\" pulumi-lang-python=\"`external_location`\" pulumi-lang-yaml=\"`externalLocation`\" pulumi-lang-java=\"`externalLocation`\"\u003e`external_location`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`storageCredential`\" pulumi-lang-dotnet=\"`StorageCredential`\" pulumi-lang-go=\"`storageCredential`\" pulumi-lang-python=\"`storage_credential`\" pulumi-lang-yaml=\"`storageCredential`\" pulumi-lang-java=\"`storageCredential`\"\u003e`storage_credential`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`credential`\" pulumi-lang-dotnet=\"`Credential`\" pulumi-lang-go=\"`credential`\" pulumi-lang-python=\"`credential`\" pulumi-lang-yaml=\"`credential`\" pulumi-lang-java=\"`credential`\"\u003e`credential`\u003c/span\u003e. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n"},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n"}},"required":["securableName","workspaceId"],"inputProperties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`.\n","willReplaceOnChanges":true},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead.","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceBindingProviderConfig:WorkspaceBindingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. 
Can be \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`externalLocation`\" pulumi-lang-dotnet=\"`ExternalLocation`\" pulumi-lang-go=\"`externalLocation`\" pulumi-lang-python=\"`external_location`\" pulumi-lang-yaml=\"`externalLocation`\" pulumi-lang-java=\"`externalLocation`\"\u003e`external_location`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`storageCredential`\" pulumi-lang-dotnet=\"`StorageCredential`\" pulumi-lang-go=\"`storageCredential`\" pulumi-lang-python=\"`storage_credential`\" pulumi-lang-yaml=\"`storageCredential`\" pulumi-lang-java=\"`storageCredential`\"\u003e`storage_credential`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`credential`\" pulumi-lang-dotnet=\"`Credential`\" pulumi-lang-go=\"`credential`\" pulumi-lang-python=\"`credential`\" pulumi-lang-yaml=\"`credential`\" pulumi-lang-java=\"`credential`\"\u003e`credential`\u003c/span\u003e. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"requiredInputs":["workspaceId"],"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceBinding resources.\n","properties":{"bindingType":{"type":"string","description":"Binding mode. Default to `BINDING_TYPE_READ_WRITE`. Possible values are `BINDING_TYPE_READ_ONLY`, `BINDING_TYPE_READ_WRITE`.\n","willReplaceOnChanges":true},"catalogName":{"type":"string","deprecationMessage":"Please use 'securable_name' and 'securable_type instead.","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceBindingProviderConfig:WorkspaceBindingProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"securableName":{"type":"string","description":"Name of securable. Change forces creation of a new resource.\n"},"securableType":{"type":"string","description":"Type of securable. 
Can be \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`externalLocation`\" pulumi-lang-dotnet=\"`ExternalLocation`\" pulumi-lang-go=\"`externalLocation`\" pulumi-lang-python=\"`external_location`\" pulumi-lang-yaml=\"`externalLocation`\" pulumi-lang-java=\"`externalLocation`\"\u003e`external_location`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`storageCredential`\" pulumi-lang-dotnet=\"`StorageCredential`\" pulumi-lang-go=\"`storageCredential`\" pulumi-lang-python=\"`storage_credential`\" pulumi-lang-yaml=\"`storageCredential`\" pulumi-lang-java=\"`storageCredential`\"\u003e`storage_credential`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`credential`\" pulumi-lang-dotnet=\"`Credential`\" pulumi-lang-go=\"`credential`\" pulumi-lang-python=\"`credential`\" pulumi-lang-yaml=\"`credential`\" pulumi-lang-java=\"`credential`\"\u003e`credential`\u003c/span\u003e. Default to \u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e. Change forces creation of a new resource.\n","willReplaceOnChanges":true},"workspaceId":{"type":"string","description":"ID of the workspace. Change forces creation of a new resource.\n","willReplaceOnChanges":true}},"type":"object"}},"databricks:index/workspaceConf:WorkspaceConf":{"description":"Manages workspace configuration for expert usage. Currently, more than one instance of this resource can exist in Pulumi state, but there is no deterministic behavior when they manage the same property. We strongly recommend using a single \u003cspan pulumi-lang-nodejs=\"`databricks.WorkspaceConf`\" pulumi-lang-dotnet=\"`databricks.WorkspaceConf`\" pulumi-lang-go=\"`WorkspaceConf`\" pulumi-lang-python=\"`WorkspaceConf`\" pulumi-lang-yaml=\"`databricks.WorkspaceConf`\" pulumi-lang-java=\"`databricks.WorkspaceConf`\"\u003e`databricks.WorkspaceConf`\u003c/span\u003e per workspace.\n\n\u003e This resource can only be used with a workspace-level provider!\n\n\u003e This resource has an evolving API, which may change in future versions of the provider.\n\n\u003e Deleting \u003cspan pulumi-lang-nodejs=\"`databricks.WorkspaceConf`\" pulumi-lang-dotnet=\"`databricks.WorkspaceConf`\" pulumi-lang-go=\"`WorkspaceConf`\" pulumi-lang-python=\"`WorkspaceConf`\" pulumi-lang-yaml=\"`databricks.WorkspaceConf`\" pulumi-lang-java=\"`databricks.WorkspaceConf`\"\u003e`databricks.WorkspaceConf`\u003c/span\u003e resources may fail depending on the configuration properties set, including but not limited to `enableIpAccessLists`, `enableGp3`, and `maxTokenLifetimeDays`. The provider will print a warning if this occurs. 
You can verify the workspace configuration by reviewing [the workspace settings in the UI](https://docs.databricks.com/en/admin/workspace-settings/index.html).\n\n## Example Usage\n\nAllows specification of custom configuration properties for expert usage:\n\n- `enableIpAccessLists` - enables the use of\u003cspan pulumi-lang-nodejs=\" databricks.IpAccessList \" pulumi-lang-dotnet=\" databricks.IpAccessList \" pulumi-lang-go=\" IpAccessList \" pulumi-lang-python=\" IpAccessList \" pulumi-lang-yaml=\" databricks.IpAccessList \" pulumi-lang-java=\" databricks.IpAccessList \"\u003e databricks.IpAccessList \u003c/span\u003eresources\n- `maxTokenLifetimeDays` - (string) Maximum token lifetime of new tokens in days, as an integer. This value can range from 1 day to 730 days (2 years). If not specified, the maximum lifetime of new tokens is 730 days. **WARNING:** This limit only applies to new tokens, so there may be tokens with lifetimes longer than this value, including unlimited lifetime. Such tokens may have been created before the current maximum token lifetime was set.\n- `enableTokensConfig` - (boolean) Enable or disable personal access tokens for this workspace.\n- `enableDeprecatedClusterNamedInitScripts` - (boolean) Enable or disable [legacy cluster-named init scripts](https://docs.databricks.com/clusters/init-scripts.html#disable-legacy-cluster-named-init-scripts-for-a-workspace) for this workspace.\n- `enableDeprecatedGlobalInitScripts` - (boolean) Enable or disable [legacy global init scripts](https://docs.databricks.com/clusters/init-scripts.html#migrate-legacy-scripts) for this workspace.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.WorkspaceConf(\"this\", {customConfig: {\n    enableIpAccessLists: \"true\",\n}});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.WorkspaceConf(\"this\", custom_config={\n    \"enableIpAccessLists\": \"true\",\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.WorkspaceConf(\"this\", new()\n    {\n        CustomConfig = \n        {\n            { \"enableIpAccessLists\", \"true\" },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewWorkspaceConf(ctx, \"this\", \u0026databricks.WorkspaceConfArgs{\n\t\t\tCustomConfig: pulumi.StringMap{\n\t\t\t\t\"enableIpAccessLists\": pulumi.String(\"true\"),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.WorkspaceConf;\nimport com.pulumi.databricks.WorkspaceConfArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new WorkspaceConf(\"this\", WorkspaceConfArgs.builder()\n           
 .customConfig(Map.of(\"enableIpAccessLists\", \"true\"))\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:WorkspaceConf\n    properties:\n      customConfig:\n        enableIpAccessLists: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Import\n\n!\u003e Importing this resource is not currently supported.\n\n","properties":{"customConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Key-value map of strings that represent workspace configuration. Upon resource deletion, properties that start with \u003cspan pulumi-lang-nodejs=\"`enable`\" pulumi-lang-dotnet=\"`Enable`\" pulumi-lang-go=\"`enable`\" pulumi-lang-python=\"`enable`\" pulumi-lang-yaml=\"`enable`\" pulumi-lang-java=\"`enable`\"\u003e`enable`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`enforce`\" pulumi-lang-dotnet=\"`Enforce`\" pulumi-lang-go=\"`enforce`\" pulumi-lang-python=\"`enforce`\" pulumi-lang-yaml=\"`enforce`\" pulumi-lang-java=\"`enforce`\"\u003e`enforce`\u003c/span\u003e will be reset to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e value, regardless of initial default one.\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceConfProviderConfig:WorkspaceConfProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"inputProperties":{"customConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Key-value map of strings that represent workspace configuration. Upon resource deletion, properties that start with \u003cspan pulumi-lang-nodejs=\"`enable`\" pulumi-lang-dotnet=\"`Enable`\" pulumi-lang-go=\"`enable`\" pulumi-lang-python=\"`enable`\" pulumi-lang-yaml=\"`enable`\" pulumi-lang-java=\"`enable`\"\u003e`enable`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`enforce`\" pulumi-lang-dotnet=\"`Enforce`\" pulumi-lang-go=\"`enforce`\" pulumi-lang-python=\"`enforce`\" pulumi-lang-yaml=\"`enforce`\" pulumi-lang-java=\"`enforce`\"\u003e`enforce`\u003c/span\u003e will be reset to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e value, regardless of initial default one.\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceConfProviderConfig:WorkspaceConfProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceConf resources.\n","properties":{"customConfig":{"type":"object","additionalProperties":{"type":"string"},"description":"Key-value map of strings that represent workspace configuration. 
Upon resource deletion, properties that start with \u003cspan pulumi-lang-nodejs=\"`enable`\" pulumi-lang-dotnet=\"`Enable`\" pulumi-lang-go=\"`enable`\" pulumi-lang-python=\"`enable`\" pulumi-lang-yaml=\"`enable`\" pulumi-lang-java=\"`enable`\"\u003e`enable`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`enforce`\" pulumi-lang-dotnet=\"`Enforce`\" pulumi-lang-go=\"`enforce`\" pulumi-lang-python=\"`enforce`\" pulumi-lang-yaml=\"`enforce`\" pulumi-lang-java=\"`enforce`\"\u003e`enforce`\u003c/span\u003e will be reset to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e value, regardless of initial default one.\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceConfProviderConfig:WorkspaceConfProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"}},"type":"object"}},"databricks:index/workspaceEntityTagAssignment:WorkspaceEntityTagAssignment":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis resource allows you to create, update, list, and delete tag assignments for workspace scoped entities.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst appTag = new databricks.WorkspaceEntityTagAssignment(\"app_tag\", {\n    entityType: \"apps\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n    tagValue: \"high\",\n});\nconst dashboardTag = new databricks.WorkspaceEntityTagAssignment(\"dashboard_tag\", {\n    entityType: \"dashboards\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n    tagValue: \"high\",\n});\nconst geniespaceTag = new databricks.WorkspaceEntityTagAssignment(\"geniespace_tag\", {\n    entityType: \"geniespaces\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n    tagValue: \"high\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\napp_tag = databricks.WorkspaceEntityTagAssignment(\"app_tag\",\n    entity_type=\"apps\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\",\n    tag_value=\"high\")\ndashboard_tag = databricks.WorkspaceEntityTagAssignment(\"dashboard_tag\",\n    entity_type=\"dashboards\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\",\n    tag_value=\"high\")\ngeniespace_tag = databricks.WorkspaceEntityTagAssignment(\"geniespace_tag\",\n    entity_type=\"geniespaces\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\",\n    tag_value=\"high\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var appTag = new Databricks.WorkspaceEntityTagAssignment(\"app_tag\", new()\n    {\n        EntityType = \"apps\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n        TagValue = \"high\",\n    });\n\n    var dashboardTag = new Databricks.WorkspaceEntityTagAssignment(\"dashboard_tag\", new()\n    {\n        EntityType = \"dashboards\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n  
      TagValue = \"high\",\n    });\n\n    var geniespaceTag = new Databricks.WorkspaceEntityTagAssignment(\"geniespace_tag\", new()\n    {\n        EntityType = \"geniespaces\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n        TagValue = \"high\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewWorkspaceEntityTagAssignment(ctx, \"app_tag\", \u0026databricks.WorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"apps\"),\n\t\t\tEntityId:   pulumi.String(\"2807324866692453\"),\n\t\t\tTagKey:     pulumi.String(\"sensitivity_level\"),\n\t\t\tTagValue:   pulumi.String(\"high\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewWorkspaceEntityTagAssignment(ctx, \"dashboard_tag\", \u0026databricks.WorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"dashboards\"),\n\t\t\tEntityId:   pulumi.String(\"2807324866692453\"),\n\t\t\tTagKey:     pulumi.String(\"sensitivity_level\"),\n\t\t\tTagValue:   pulumi.String(\"high\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewWorkspaceEntityTagAssignment(ctx, \"geniespace_tag\", \u0026databricks.WorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: pulumi.String(\"geniespaces\"),\n\t\t\tEntityId:   pulumi.String(\"2807324866692453\"),\n\t\t\tTagKey:     pulumi.String(\"sensitivity_level\"),\n\t\t\tTagValue:   pulumi.String(\"high\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.WorkspaceEntityTagAssignment;\nimport com.pulumi.databricks.WorkspaceEntityTagAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var appTag = new WorkspaceEntityTagAssignment(\"appTag\", WorkspaceEntityTagAssignmentArgs.builder()\n            .entityType(\"apps\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .tagValue(\"high\")\n            .build());\n\n        var dashboardTag = new WorkspaceEntityTagAssignment(\"dashboardTag\", WorkspaceEntityTagAssignmentArgs.builder()\n            .entityType(\"dashboards\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .tagValue(\"high\")\n            .build());\n\n        var geniespaceTag = new WorkspaceEntityTagAssignment(\"geniespaceTag\", WorkspaceEntityTagAssignmentArgs.builder()\n            .entityType(\"geniespaces\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .tagValue(\"high\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  appTag:\n    type: databricks:WorkspaceEntityTagAssignment\n    name: app_tag\n    properties:\n      entityType: apps\n      entityId: '2807324866692453'\n      tagKey: sensitivity_level\n      tagValue: high\n  dashboardTag:\n    type: databricks:WorkspaceEntityTagAssignment\n    name: dashboard_tag\n    
properties:\n      entityType: dashboards\n      entityId: '2807324866692453'\n      tagKey: sensitivity_level\n      tagValue: high\n  geniespaceTag:\n    type: databricks:WorkspaceEntityTagAssignment\n    name: geniespace_tag\n    properties:\n      entityType: geniespaces\n      entityId: '2807324866692453'\n      tagKey: sensitivity_level\n      tagValue: high\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceEntityTagAssignmentProviderConfig:WorkspaceEntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag. The characters , . : / - = and leading/trailing spaces are not allowed\n"},"tagValue":{"type":"string","description":"The value of the tag\n"}},"required":["entityId","entityType","tagKey"],"inputProperties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceEntityTagAssignmentProviderConfig:WorkspaceEntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag. The characters , . : / - = and leading/trailing spaces are not allowed\n"},"tagValue":{"type":"string","description":"The value of the tag\n"}},"requiredInputs":["entityId","entityType","tagKey"],"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceEntityTagAssignment resources.\n","properties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceEntityTagAssignmentProviderConfig:WorkspaceEntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag. The characters , . : / - = and leading/trailing spaces are not allowed\n"},"tagValue":{"type":"string","description":"The value of the tag\n"}},"type":"object"}},"databricks:index/workspaceFile:WorkspaceFile":{"description":"This resource allows you to manage [Databricks Workspace Files](https://docs.databricks.com/files/workspace.html).\n\n\u003e This resource can only be used with a workspace-level provider!\n\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded file content. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. 
Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it increases the memory footprint of the Pulumi state and should only be used in exceptional circumstances, like creating a workspace file with configuration properties for a data pipeline.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a workspace file\n"},"path":{"type":"string","description":"The absolute path of the workspace file, beginning with \"/\", e.g. \"/Demo\".\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceFileProviderConfig:WorkspaceFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to file on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"},"url":{"type":"string","description":"Routable URL of the workspace file\n"},"workspacePath":{"type":"string","description":"Path on the Workspace File System (WSFS) in the form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"required":["objectId","path","url","workspacePath"],"inputProperties":{"contentBase64":{"type":"string","description":"The base64-encoded file content. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it increases the memory footprint of the Pulumi state and should only be used in exceptional circumstances, like creating a workspace file with configuration properties for a data pipeline.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a workspace file\n"},"path":{"type":"string","description":"The absolute path of the workspace file, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceFileProviderConfig:WorkspaceFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to file on local filesystem. 
Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"}},"requiredInputs":["path"],"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceFile resources.\n","properties":{"contentBase64":{"type":"string","description":"The base64-encoded file content. Conflicts with \u003cspan pulumi-lang-nodejs=\"`source`\" pulumi-lang-dotnet=\"`Source`\" pulumi-lang-go=\"`source`\" pulumi-lang-python=\"`source`\" pulumi-lang-yaml=\"`source`\" pulumi-lang-java=\"`source`\"\u003e`source`\u003c/span\u003e. Use of \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e is discouraged, as it increases the memory footprint of the Pulumi state and should only be used in exceptional circumstances, like creating a workspace file with configuration properties for a data pipeline.\n"},"md5":{"type":"string"},"objectId":{"type":"integer","description":"Unique identifier for a workspace file\n"},"path":{"type":"string","description":"The absolute path of the workspace file, beginning with \"/\", e.g. \"/Demo\".\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceFileProviderConfig:WorkspaceFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"source":{"type":"string","description":"Path to file on local filesystem. Conflicts with \u003cspan pulumi-lang-nodejs=\"`contentBase64`\" pulumi-lang-dotnet=\"`ContentBase64`\" pulumi-lang-go=\"`contentBase64`\" pulumi-lang-python=\"`content_base64`\" pulumi-lang-yaml=\"`contentBase64`\" pulumi-lang-java=\"`contentBase64`\"\u003e`content_base64`\u003c/span\u003e.\n"},"url":{"type":"string","description":"Routable URL of the workspace file\n"},"workspacePath":{"type":"string","description":"Path on the Workspace File System (WSFS) in the form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"type":"object"}},"databricks:index/workspaceNetworkOption:WorkspaceNetworkOption":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nWorkspace network options allow configuration of network settings for Databricks workspaces by selecting which network policy to associate with the workspace.\n\nEach workspace is always associated with exactly one network policy that controls which network destinations can be accessed from the Databricks environment. 
By default, workspaces are associated with the `default-policy` network policy.\n\nThis resource has the following characteristics:\n\n- You cannot create or delete a workspace's network option\n- You can only update it to associate the workspace with a different policy\n- This resource is used to change the network policy assignment for existing workspaces\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst exampleWorkspaceNetworkOption = new databricks.WorkspaceNetworkOption(\"example_workspace_network_option\", {\n    workspaceId: \"9999999999999999\",\n    networkPolicyId: \"default-policy\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexample_workspace_network_option = databricks.WorkspaceNetworkOption(\"example_workspace_network_option\",\n    workspace_id=\"9999999999999999\",\n    network_policy_id=\"default-policy\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var exampleWorkspaceNetworkOption = new Databricks.WorkspaceNetworkOption(\"example_workspace_network_option\", new()\n    {\n        WorkspaceId = \"9999999999999999\",\n        NetworkPolicyId = \"default-policy\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewWorkspaceNetworkOption(ctx, \"example_workspace_network_option\", \u0026databricks.WorkspaceNetworkOptionArgs{\n\t\t\tWorkspaceId:     pulumi.String(\"9999999999999999\"),\n\t\t\tNetworkPolicyId: pulumi.String(\"default-policy\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.WorkspaceNetworkOption;\nimport com.pulumi.databricks.WorkspaceNetworkOptionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var exampleWorkspaceNetworkOption = new WorkspaceNetworkOption(\"exampleWorkspaceNetworkOption\", WorkspaceNetworkOptionArgs.builder()\n            .workspaceId(\"9999999999999999\")\n            .networkPolicyId(\"default-policy\")\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  exampleWorkspaceNetworkOption:\n    type: databricks:WorkspaceNetworkOption\n    name: example_workspace_network_option\n    properties:\n      workspaceId: '9999999999999999'\n      networkPolicyId: default-policy\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"networkPolicyId":{"type":"string","description":"The network policy ID to apply to the workspace. This controls the network access rules\nfor all serverless compute resources in the workspace. Each workspace can only be\nlinked to one policy at a time. 
If no policy is explicitly assigned,\nthe workspace will use 'default-policy'\n"},"workspaceId":{"type":"string","description":"The workspace ID\n"}},"inputProperties":{"networkPolicyId":{"type":"string","description":"The network policy ID to apply to the workspace. This controls the network access rules\nfor all serverless compute resources in the workspace. Each workspace can only be\nlinked to one policy at a time. If no policy is explicitly assigned,\nthe workspace will use 'default-policy'\n"},"workspaceId":{"type":"string","description":"The workspace ID\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceNetworkOption resources.\n","properties":{"networkPolicyId":{"type":"string","description":"The network policy ID to apply to the workspace. This controls the network access rules\nfor all serverless compute resources in the workspace. Each workspace can only be\nlinked to one policy at a time. If no policy is explicitly assigned,\nthe workspace will use 'default-policy'\n"},"workspaceId":{"type":"string","description":"The workspace ID\n"}},"type":"object"}},"databricks:index/workspaceSettingV2:WorkspaceSettingV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nA setting is a configurable value or control that determines how a feature or behavior works within the Databricks platform.\n\n[//]: # (todo: add public link to metadata api after production doc link available)\nSee the settings-metadata API for the list of settings that can be modified using this resource. \n\n## Example Usage\n\nConfiguring a workspace-level setting:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = new databricks.WorkspaceSettingV2(\"this\", {\n    name: \"llm_proxy_partner_powered\",\n    booleanVal: {\n        value: false,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.WorkspaceSettingV2(\"this\",\n    name=\"llm_proxy_partner_powered\",\n    boolean_val={\n        \"value\": False,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = new Databricks.WorkspaceSettingV2(\"this\", new()\n    {\n        Name = \"llm_proxy_partner_powered\",\n        BooleanVal = new Databricks.Inputs.WorkspaceSettingV2BooleanValArgs\n        {\n            Value = false,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewWorkspaceSettingV2(ctx, \"this\", \u0026databricks.WorkspaceSettingV2Args{\n\t\t\tName: pulumi.String(\"llm_proxy_partner_powered\"),\n\t\t\tBooleanVal: \u0026databricks.WorkspaceSettingV2BooleanValArgs{\n\t\t\t\tValue: pulumi.Bool(false),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.WorkspaceSettingV2;\nimport com.pulumi.databricks.WorkspaceSettingV2Args;\nimport 
com.pulumi.databricks.inputs.WorkspaceSettingV2BooleanValArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var this_ = new WorkspaceSettingV2(\"this\", WorkspaceSettingV2Args.builder()\n            .name(\"llm_proxy_partner_powered\")\n            .booleanVal(WorkspaceSettingV2BooleanValArgs.builder()\n                .value(false)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  this:\n    type: databricks:WorkspaceSettingV2\n    properties:\n      name: llm_proxy_partner_powered\n      booleanVal:\n        value: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspace:WorkspaceSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2BooleanVal:WorkspaceSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveBooleanVal:WorkspaceSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveIntegerVal:WorkspaceSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectivePersonalCompute:WorkspaceSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveStringVal:WorkspaceSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"integerVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2IntegerVal:WorkspaceSettingV2IntegerVal","description":"Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2PersonalCompute:WorkspaceSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceSettingV2ProviderConfig:WorkspaceSettingV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2RestrictWorkspaceAdmins:WorkspaceSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2StringVal:WorkspaceSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"required":["effectiveBooleanVal","effectiveIntegerVal","effectiveStringVal","name"],"inputProperties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspace:WorkspaceSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2BooleanVal:WorkspaceSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectivePersonalCompute:WorkspaceSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. To set a value use restrict_workspace_admins\n"},"integerVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2IntegerVal:WorkspaceSettingV2IntegerVal","description":"Setting value for integer type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2PersonalCompute:WorkspaceSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceSettingV2ProviderConfig:WorkspaceSettingV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2RestrictWorkspaceAdmins:WorkspaceSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2StringVal:WorkspaceSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"stateInputs":{"description":"Input properties used for looking up and filtering WorkspaceSettingV2 resources.\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2AutomaticClusterUpdateWorkspace:WorkspaceSettingV2AutomaticClusterUpdateWorkspace","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2BooleanVal:WorkspaceSettingV2BooleanVal","description":"Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:WorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:WorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveBooleanVal:WorkspaceSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveIntegerVal:WorkspaceSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectivePersonalCompute:WorkspaceSettingV2EffectivePersonalCompute","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:WorkspaceSettingV2EffectiveRestrictWorkspaceAdmins","description":"Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2EffectiveStringVal:WorkspaceSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"integerVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2IntegerVal:WorkspaceSettingV2IntegerVal","description":"Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"type":"string","description":"Name of the setting\n"},"personalCompute":{"$ref":"#/types/databricks:index/WorkspaceSettingV2PersonalCompute:WorkspaceSettingV2PersonalCompute","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"providerConfig":{"$ref":"#/types/databricks:index/WorkspaceSettingV2ProviderConfig:WorkspaceSettingV2ProviderConfig","description":"Configure the provider for management through account provider.\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/WorkspaceSettingV2RestrictWorkspaceAdmins:WorkspaceSettingV2RestrictWorkspaceAdmins","description":"Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/WorkspaceSettingV2StringVal:WorkspaceSettingV2StringVal","description":"Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"type":"object"}}},"functions":{"databricks:index/getAccountFederationPolicies:getAccountFederationPolicies":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of account federation policies.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nGetting a list of all account federation policies:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getAccountFederationPolicies({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_account_federation_policies()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetAccountFederationPolicies.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetAccountFederationPolicies(ctx, \u0026databricks.GetAccountFederationPoliciesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAccountFederationPoliciesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getAccountFederationPolicies(GetAccountFederationPoliciesArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getAccountFederationPolicies\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAccountFederationPolicies.\n","properties":{"pageSize":{"type":"integer"}},"type":"object"},"outputs":{"description":"A collection of values returned by getAccountFederationPolicies.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed 
resource.","type":"string"},"pageSize":{"type":"integer"},"policies":{"items":{"$ref":"#/types/databricks:index/getAccountFederationPoliciesPolicy:getAccountFederationPoliciesPolicy"},"type":"array"}},"required":["policies","id"],"type":"object"}},"databricks:index/getAccountFederationPolicy:getAccountFederationPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single account federation policy.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nReferring to an account federation policy by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  myPolicy:\n    fn::invoke:\n      function: databricks:getAccountFederationPolicy\n      arguments:\n        policyId: my-policy\n        oidcPolicy:\n          issuer: https://myidp.example.com\n          subjectClaim: sub\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAccountFederationPolicy.\n","properties":{"policyId":{"type":"string","description":"The ID of the federation policy. Output only\n"}},"type":"object","required":["policyId"]},"outputs":{"description":"A collection of values returned by getAccountFederationPolicy.\n","properties":{"createTime":{"description":"(string) - Creation time of the federation policy\n","type":"string"},"description":{"description":"(string) - Description of the federation policy\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n","type":"string"},"oidcPolicy":{"$ref":"#/types/databricks:index/getAccountFederationPolicyOidcPolicy:getAccountFederationPolicyOidcPolicy","description":"(OidcFederationPolicy)\n"},"policyId":{"description":"(string) - The ID of the federation policy. Output only\n","type":"string"},"servicePrincipalId":{"description":"(integer) - The service principal ID that this federation policy applies to. Output only. 
Only set for service principal federation policies\n","type":"integer"},"uid":{"description":"(string) - Unique, immutable id of the federation policy\n","type":"string"},"updateTime":{"description":"(string) - Last update time of the federation policy\n","type":"string"}},"required":["createTime","description","name","oidcPolicy","policyId","servicePrincipalId","uid","updateTime","id"],"type":"object"}},"databricks:index/getAccountNetworkPolicies:getAccountNetworkPolicies":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of network policies.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nGetting a list of all network policies:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getAccountNetworkPolicies({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_account_network_policies()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetAccountNetworkPolicies.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetAccountNetworkPolicies(ctx, map[string]interface{}{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getAccountNetworkPolicies();\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getAccountNetworkPolicies\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","outputs":{"description":"A collection of values returned by getAccountNetworkPolicies.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"items":{"items":{"$ref":"#/types/databricks:index/getAccountNetworkPoliciesItem:getAccountNetworkPoliciesItem"},"type":"array"}},"required":["items","id"],"type":"object"}},"databricks:index/getAccountNetworkPolicy:getAccountNetworkPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single network policy.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nReferring to a network policy by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: 
databricks:getAccountNetworkPolicy\n      arguments:\n        policyId: test\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAccountNetworkPolicy.\n","properties":{"networkPolicyId":{"type":"string","description":"The unique identifier for the network policy\n"}},"type":"object","required":["networkPolicyId"]},"outputs":{"description":"A collection of values returned by getAccountNetworkPolicy.\n","properties":{"accountId":{"description":"(string) - The associated account ID for this Network Policy object\n","type":"string"},"egress":{"$ref":"#/types/databricks:index/getAccountNetworkPolicyEgress:getAccountNetworkPolicyEgress","description":"(NetworkPolicyEgress) - The network policies applying for egress traffic\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"networkPolicyId":{"description":"(string) - The unique identifier for the network policy\n","type":"string"}},"required":["accountId","egress","networkPolicyId","id"],"type":"object"}},"databricks:index/getAccountSettingUserPreferenceV2:getAccountSettingUserPreferenceV2":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single account user preference setting.\n\n\n","inputs":{"description":"A collection of arguments for invoking getAccountSettingUserPreferenceV2.\n","properties":{"name":{"type":"string","description":"Name of the setting\n"},"userId":{"type":"string","description":"User ID of the user\n"}},"type":"object","required":["name","userId"]},"outputs":{"description":"A collection of values returned by getAccountSettingUserPreferenceV2.\n","properties":{"booleanVal":{"$ref":"#/types/databricks:index/getAccountSettingUserPreferenceV2BooleanVal:getAccountSettingUserPreferenceV2BooleanVal","description":"(BooleanMessage)\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/getAccountSettingUserPreferenceV2EffectiveBooleanVal:getAccountSettingUserPreferenceV2EffectiveBooleanVal","description":"(BooleanMessage)\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/getAccountSettingUserPreferenceV2EffectiveStringVal:getAccountSettingUserPreferenceV2EffectiveStringVal","description":"(StringMessage)\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Name of the setting\n","type":"string"},"stringVal":{"$ref":"#/types/databricks:index/getAccountSettingUserPreferenceV2StringVal:getAccountSettingUserPreferenceV2StringVal","description":"(StringMessage)\n"},"userId":{"description":"(string) - User ID of the user\n","type":"string"}},"required":["booleanVal","effectiveBooleanVal","effectiveStringVal","name","stringVal","userId","id"],"type":"object"}},"databricks:index/getAccountSettingV2:getAccountSettingV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single account setting. 
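The result exposes both the value fields set by consumers (for example `boolean_val` or `string_val`) and the corresponding `effective_*` fields that carry the final resolved value of the setting. As a minimal sketch (reusing the `llm_proxy_partner_powered` setting name from the example below; substitute a setting that exists in your account), the effective boolean value can be read in TypeScript as follows:\n\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\n// Look up the setting by name and export its effective (resolved) boolean value.\nconst setting = databricks.getAccountSettingV2({\n    name: \"llm_proxy_partner_powered\",\n});\nexport const effectiveValue = setting.then(s =\u003e s.effectiveBooleanVal);\n```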
\n\n## Example Usage\n\nReferring to a setting by id\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAccountSettingV2\n      arguments:\n        name: llm_proxy_partner_powered\n        booleanVal:\n          value: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAccountSettingV2.\n","properties":{"name":{"type":"string","description":"Name of the setting\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getAccountSettingV2.\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/getAccountSettingV2AibiDashboardEmbeddingAccessPolicy:getAccountSettingV2AibiDashboardEmbeddingAccessPolicy","description":"(AibiDashboardEmbeddingAccessPolicy) - Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/getAccountSettingV2AibiDashboardEmbeddingApprovedDomains:getAccountSettingV2AibiDashboardEmbeddingApprovedDomains","description":"(AibiDashboardEmbeddingApprovedDomains) - Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/getAccountSettingV2AutomaticClusterUpdateWorkspace:getAccountSettingV2AutomaticClusterUpdateWorkspace","description":"(ClusterAutoRestartMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/getAccountSettingV2BooleanVal:getAccountSettingV2BooleanVal","description":"(BooleanMessage) - Setting value for boolean type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:getAccountSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"(AibiDashboardEmbeddingAccessPolicy) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. 
To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:getAccountSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"(AibiDashboardEmbeddingApprovedDomains) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspace:getAccountSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"(ClusterAutoRestartMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveBooleanVal:getAccountSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveIntegerVal:getAccountSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectivePersonalCompute:getAccountSettingV2EffectivePersonalCompute","description":"(PersonalComputeMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveRestrictWorkspaceAdmins:getAccountSettingV2EffectiveRestrictWorkspaceAdmins","description":"(RestrictWorkspaceAdminsMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. 
This is the final effective value of setting. To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/getAccountSettingV2EffectiveStringVal:getAccountSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"integerVal":{"$ref":"#/types/databricks:index/getAccountSettingV2IntegerVal:getAccountSettingV2IntegerVal","description":"(IntegerMessage) - Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"description":"(string) - Name of the setting\n","type":"string"},"personalCompute":{"$ref":"#/types/databricks:index/getAccountSettingV2PersonalCompute:getAccountSettingV2PersonalCompute","description":"(PersonalComputeMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/getAccountSettingV2RestrictWorkspaceAdmins:getAccountSettingV2RestrictWorkspaceAdmins","description":"(RestrictWorkspaceAdminsMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/getAccountSettingV2StringVal:getAccountSettingV2StringVal","description":"(StringMessage) - Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"required":["aibiDashboardEmbeddingAccessPolicy","aibiDashboardEmbeddingApprovedDomains","automaticClusterUpdateWorkspace","booleanVal","effectiveAibiDashboardEmbeddingAccessPolicy","effectiveAibiDashboardEmbeddingApprovedDomains","effectiveAutomaticClusterUpdateWorkspace","effectiveBooleanVal","effectiveIntegerVal","effectivePersonalCompute","effectiveRestrictWorkspaceAdmins","effectiveStringVal","integerVal","name","personalCompute","restrictWorkspaceAdmins","stringVal","id"],"type":"object"}},"databricks:index/getAlertV2:getAlertV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThe SQL Alert v2 data source allows you to retrieve detailed information about a specific alert in Databricks SQL. This data source provides access to all alert properties, including its configuration, evaluation criteria, notification settings, and schedule.\n\nYou can use this data source to:\n- Retrieve alert details for reference in other resources\n- Check the current state and configuration of an alert\n- Verify notification settings and subscribers\n- Examine the schedule configuration\n\n## Example Usage\n\n### Retrieve Alert by ID\nThis example retrieves a specific alert by its ID:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getAlertV2({\n    id: \"123\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_alert_v2(id=\"123\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetAlertV2.Invoke(new()\n    {\n        Id = \"123\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupAlertV2(ctx, \u0026databricks.LookupAlertV2Args{\n\t\t\tId: \"123\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAlertV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getAlertV2(GetAlertV2Args.builder()\n            .id(\"123\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAlertV2\n      arguments:\n        id: 
'123'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAlertV2.\n","properties":{"id":{"type":"string","description":"UUID identifying the alert\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAlertV2ProviderConfig:getAlertV2ProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["id"]},"outputs":{"description":"A collection of values returned by getAlertV2.\n","properties":{"createTime":{"description":"(string) - The timestamp indicating when the alert was created\n","type":"string"},"customDescription":{"description":"(string) - Custom description for the alert. support mustache template\n","type":"string"},"customSummary":{"description":"(string) - Custom summary for the alert. support mustache template\n","type":"string"},"displayName":{"description":"(string) - The display name of the alert\n","type":"string"},"effectiveRunAs":{"$ref":"#/types/databricks:index/getAlertV2EffectiveRunAs:getAlertV2EffectiveRunAs","description":"(AlertV2RunAs) - The actual identity that will be used to execute the alert.\nThis is an output-only field that shows the resolved run-as identity after applying\npermissions and defaults\n"},"evaluation":{"$ref":"#/types/databricks:index/getAlertV2Evaluation:getAlertV2Evaluation","description":"(AlertV2Evaluation)\n"},"id":{"description":"(string) - UUID identifying the alert\n","type":"string"},"lifecycleState":{"description":"(string) - Indicates whether the query is trashed. Possible values are: `ACTIVE`, `DELETED`\n","type":"string"},"ownerUserName":{"description":"(string) - The owner's username. This field is set to \"Unavailable\" if the user has been deleted\n","type":"string"},"parentPath":{"description":"(string) - The workspace path of the folder containing the alert. Can only be set on create, and cannot be updated\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getAlertV2ProviderConfig:getAlertV2ProviderConfig"},"queryText":{"description":"(string) - Text of the query to be run\n","type":"string"},"runAs":{"$ref":"#/types/databricks:index/getAlertV2RunAs:getAlertV2RunAs","description":"(AlertV2RunAs) - Specifies the identity that will be used to run the alert.\nThis field allows you to configure alerts to run as a specific user or service principal.\n- For user identity: Set \u003cspan pulumi-lang-nodejs=\"`userName`\" pulumi-lang-dotnet=\"`UserName`\" pulumi-lang-go=\"`userName`\" pulumi-lang-python=\"`user_name`\" pulumi-lang-yaml=\"`userName`\" pulumi-lang-java=\"`userName`\"\u003e`user_name`\u003c/span\u003e to the email of an active workspace user. Users can only set this to their own email.\n- For service principal: Set \u003cspan pulumi-lang-nodejs=\"`servicePrincipalName`\" pulumi-lang-dotnet=\"`ServicePrincipalName`\" pulumi-lang-go=\"`servicePrincipalName`\" pulumi-lang-python=\"`service_principal_name`\" pulumi-lang-yaml=\"`servicePrincipalName`\" pulumi-lang-java=\"`servicePrincipalName`\"\u003e`service_principal_name`\u003c/span\u003e to the application ID. Requires the `servicePrincipal/user` role.\nIf not specified, the alert will run as the request user\n"},"runAsUserName":{"description":"(string, deprecated) - The run as username or application ID of service principal.\nOn Create and Update, this field can be set to application ID of an active service principal. 
Setting this field requires the servicePrincipal/user role.\nDeprecated: Use \u003cspan pulumi-lang-nodejs=\"`runAs`\" pulumi-lang-dotnet=\"`RunAs`\" pulumi-lang-go=\"`runAs`\" pulumi-lang-python=\"`run_as`\" pulumi-lang-yaml=\"`runAs`\" pulumi-lang-java=\"`runAs`\"\u003e`run_as`\u003c/span\u003e field instead. This field will be removed in a future release\n","type":"string"},"schedule":{"$ref":"#/types/databricks:index/getAlertV2Schedule:getAlertV2Schedule","description":"(CronSchedule)\n"},"updateTime":{"description":"(string) - The timestamp indicating when the alert was updated\n","type":"string"},"warehouseId":{"description":"(string) - ID of the SQL warehouse attached to the alert\n","type":"string"}},"required":["createTime","customDescription","customSummary","displayName","effectiveRunAs","evaluation","id","lifecycleState","ownerUserName","parentPath","queryText","runAs","runAsUserName","schedule","updateTime","warehouseId"],"type":"object"}},"databricks:index/getAlertsV2:getAlertsV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThe SQL Alerts v2 data source allows you to retrieve a list of alerts in Databricks SQL that are accessible to the current user. This data source returns alerts ordered by their creation time.\n\nYou can use this data source to:\n- Get a comprehensive list of all alerts in your workspace\n- Monitor and audit alert configurations across your workspace\n\n## Example Usage\n\n### List All Alerts\nThis example retrieves all alerts accessible to the current user:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getAlertsV2({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_alerts_v2()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetAlertsV2.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetAlertsV2(ctx, \u0026databricks.GetAlertsV2Args{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAlertsV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getAlertsV2(GetAlertsV2Args.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getAlertsV2\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking 
getAlertsV2.\n","properties":{"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getAlertsV2ProviderConfig:getAlertsV2ProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getAlertsV2.\n","properties":{"alerts":{"items":{"$ref":"#/types/databricks:index/getAlertsV2Alert:getAlertsV2Alert"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getAlertsV2ProviderConfig:getAlertsV2ProviderConfig"}},"required":["alerts","id"],"type":"object"}},"databricks:index/getApp:getApp":{"description":"\u003e This data source can only be used with a workspace-level provider!\n\n[Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html) run directly on a customer’s Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on. This resource creates the application but does not handle app deployment, which should be handled separately as part of your CI/CD pipeline.\n\nThis data source allows you to fetch information about a Databricks App.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getApp({\n    name: \"my-custom-app\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_app(name=\"my-custom-app\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetApp.Invoke(new()\n    {\n        Name = \"my-custom-app\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupApp(ctx, \u0026databricks.LookupAppArgs{\n\t\t\tName: \"my-custom-app\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAppArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getApp(GetAppArgs.builder()\n            .name(\"my-custom-app\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getApp\n      arguments:\n        name: my-custom-app\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.App \" pulumi-lang-dotnet=\" databricks.App \" pulumi-lang-go=\" App \" pulumi-lang-python=\" App \" pulumi-lang-yaml=\" databricks.App \" pulumi-lang-java=\" 
databricks.App \"\u003e databricks.App \u003c/span\u003eto manage [Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage [secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code.\n","inputs":{"description":"A collection of arguments for invoking getApp.\n","properties":{"name":{"type":"string","description":"The name of the app.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppProviderConfig:getAppProviderConfig"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getApp.\n","properties":{"app":{"$ref":"#/types/databricks:index/getAppApp:getAppApp","description":"attribute\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"The name of Genie Space.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getAppProviderConfig:getAppProviderConfig"}},"required":["app","name","id"],"type":"object"}},"databricks:index/getApps:getApps":{"description":"\u003e This data source can only be used with a workspace-level provider!\n\n[Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html) run directly on a customer’s Databricks instance, integrate with their data, use and extend Databricks services, and enable users to interact through single sign-on. 
This resource creates the application but does not handle app deployment, which should be handled separately as part of your CI/CD pipeline.\n\nThis data source allows you to fetch information about all Databricks Apps within a workspace.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst allApps = databricks.getApps({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall_apps = databricks.get_apps()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var allApps = Databricks.GetApps.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetApps(ctx, \u0026databricks.GetAppsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAppsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var allApps = DatabricksFunctions.getApps(GetAppsArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  allApps:\n    fn::invoke:\n      function: databricks:getApps\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.App \" pulumi-lang-dotnet=\" databricks.App \" pulumi-lang-go=\" App \" pulumi-lang-python=\" App \" pulumi-lang-yaml=\" databricks.App \" pulumi-lang-java=\" databricks.App \"\u003e databricks.App \u003c/span\u003eto manage [Databricks Apps](https://docs.databricks.com/en/dev-tools/databricks-apps/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eto manage Databricks SQL [Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Secret \" pulumi-lang-dotnet=\" databricks.Secret \" pulumi-lang-go=\" Secret \" pulumi-lang-python=\" Secret \" pulumi-lang-yaml=\" databricks.Secret \" pulumi-lang-java=\" databricks.Secret \"\u003e databricks.Secret \u003c/span\u003eto manage 
[secrets](https://docs.databricks.com/security/secrets/index.html#secrets-user-guide) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code.\n","inputs":{"description":"A collection of arguments for invoking getApps.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getAppsProviderConfig:getAppsProviderConfig"}},"type":"object"},"outputs":{"description":"A collection of values returned by getApps.\n","properties":{"apps":{"items":{"$ref":"#/types/databricks:index/getAppsApp:getAppsApp"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsProviderConfig:getAppsProviderConfig"}},"required":["apps","id"],"type":"object"}},"databricks:index/getAppsSettingsCustomTemplate:getAppsSettingsCustomTemplate":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single Custom Template.\n\n\n## Example Usage\n\nReferring to a Custom Template by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst myTemplate = databricks.getAppsSettingsCustomTemplate({\n    name: \"my-custom-template\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmy_template = databricks.get_apps_settings_custom_template(name=\"my-custom-template\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var myTemplate = Databricks.GetAppsSettingsCustomTemplate.Invoke(new()\n    {\n        Name = \"my-custom-template\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupAppsSettingsCustomTemplate(ctx, \u0026databricks.LookupAppsSettingsCustomTemplateArgs{\n\t\t\tName: \"my-custom-template\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAppsSettingsCustomTemplateArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var myTemplate = DatabricksFunctions.getAppsSettingsCustomTemplate(GetAppsSettingsCustomTemplateArgs.builder()\n            .name(\"my-custom-template\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  myTemplate:\n    fn::invoke:\n      function: 
databricks:getAppsSettingsCustomTemplate\n      arguments:\n        name: my-custom-template\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAppsSettingsCustomTemplate.\n","properties":{"name":{"type":"string","description":"The name of the template. It must contain only alphanumeric characters, hyphens, underscores, and whitespaces.\nIt must be unique within the workspace\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateProviderConfig:getAppsSettingsCustomTemplateProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getAppsSettingsCustomTemplate.\n","properties":{"creator":{"description":"(string)\n","type":"string"},"description":{"description":"(string) - Description of the App Resource\n","type":"string"},"gitProvider":{"description":"(string) - The Git provider of the template\n","type":"string"},"gitRepo":{"description":"(string) - The Git repository URL that the template resides in\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"manifest":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateManifest:getAppsSettingsCustomTemplateManifest","description":"(AppManifest) - The manifest of the template. It defines fields and default values when installing the template\n"},"name":{"description":"(string) - Name of the App Resource\n","type":"string"},"path":{"description":"(string) - The path to the template within the Git repository\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplateProviderConfig:getAppsSettingsCustomTemplateProviderConfig"}},"required":["creator","description","gitProvider","gitRepo","manifest","name","path","id"],"type":"object"}},"databricks:index/getAppsSettingsCustomTemplates:getAppsSettingsCustomTemplates":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of Custom Templates within the workspace.\nThe list can then be accessed via the data object's \u003cspan pulumi-lang-nodejs=\"`templates`\" pulumi-lang-dotnet=\"`Templates`\" pulumi-lang-go=\"`templates`\" pulumi-lang-python=\"`templates`\" pulumi-lang-yaml=\"`templates`\" pulumi-lang-java=\"`templates`\"\u003e`templates`\u003c/span\u003e field.\n\n\n## Example Usage\n\nGetting a list of all Custom Templates:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getAppsSettingsCustomTemplates({});\nexport const allCustomTemplates = all.then(all =\u003e all.templates);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_apps_settings_custom_templates()\npulumi.export(\"allCustomTemplates\", all.templates)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetAppsSettingsCustomTemplates.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allCustomTemplates\"] = all.Apply(getAppsSettingsCustomTemplatesResult =\u003e 
getAppsSettingsCustomTemplatesResult.Templates),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetAppsSettingsCustomTemplates(ctx, \u0026databricks.GetAppsSettingsCustomTemplatesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allCustomTemplates\", all.Templates)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAppsSettingsCustomTemplatesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getAppsSettingsCustomTemplates(GetAppsSettingsCustomTemplatesArgs.builder()\n            .build());\n\n        ctx.export(\"allCustomTemplates\", all.templates());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getAppsSettingsCustomTemplates\n      arguments: {}\noutputs:\n  allCustomTemplates: ${all.templates}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAppsSettingsCustomTemplates.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesProviderConfig:getAppsSettingsCustomTemplatesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getAppsSettingsCustomTemplates.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesProviderConfig:getAppsSettingsCustomTemplatesProviderConfig"},"templates":{"items":{"$ref":"#/types/databricks:index/getAppsSettingsCustomTemplatesTemplate:getAppsSettingsCustomTemplatesTemplate"},"type":"array"}},"required":["templates","id"],"type":"object"}},"databricks:index/getAppsSpace:getAppsSpace":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getAppsSpace.\n","properties":{"name":{"type":"string","description":"The name of the app space. The name must contain only lowercase alphanumeric characters and hyphens.\nIt must be unique within the workspace\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSpaceProviderConfig:getAppsSpaceProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getAppsSpace.\n","properties":{"createTime":{"description":"(string) - The creation time of the app space. 
Formatted timestamp in ISO 8601\n","type":"string"},"creator":{"description":"(string) - The email of the user that created the app space\n","type":"string"},"description":{"description":"(string) - Description of the App Resource\n","type":"string"},"effectiveUsagePolicyId":{"description":"(string) - The effective usage policy ID used by apps in the space\n","type":"string"},"effectiveUserApiScopes":{"description":"(list of string) - The effective api scopes granted to the user access token\n","items":{"type":"string"},"type":"array"},"id":{"description":"(string) - The unique identifier of the app space\n","type":"string"},"name":{"description":"(string) - The name of the app space\n","type":"string"},"oauth2AppClientId":{"description":"(string) - The OAuth2 app client ID for the app space\n","type":"string"},"oauth2AppIntegrationId":{"description":"(string) - The OAuth2 app integration ID for the app space\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSpaceProviderConfig:getAppsSpaceProviderConfig"},"resources":{"description":"(list of AppResource) - Resources for the app space. Resources configured at the space level are available to all apps in the space\n","items":{"$ref":"#/types/databricks:index/getAppsSpaceResource:getAppsSpaceResource"},"type":"array"},"servicePrincipalClientId":{"description":"(string) - The service principal client ID for the app space\n","type":"string"},"servicePrincipalId":{"description":"(integer) - The service principal ID for the app space\n","type":"integer"},"servicePrincipalName":{"description":"(string) - The service principal name for the app space\n","type":"string"},"status":{"$ref":"#/types/databricks:index/getAppsSpaceStatus:getAppsSpaceStatus","description":"(SpaceStatus) - The status of the app space\n"},"updateTime":{"description":"(string) - The update time of the app space. 
Formatted timestamp in ISO 6801\n","type":"string"},"updater":{"description":"(string) - The email of the user that last updated the app space\n","type":"string"},"usagePolicyId":{"description":"(string) - The usage policy ID for managing cost at the space level\n","type":"string"},"userApiScopes":{"description":"(list of string) - OAuth scopes for apps in the space\n","items":{"type":"string"},"type":"array"}},"required":["createTime","creator","description","effectiveUsagePolicyId","effectiveUserApiScopes","id","name","oauth2AppClientId","oauth2AppIntegrationId","resources","servicePrincipalClientId","servicePrincipalId","servicePrincipalName","status","updateTime","updater","usagePolicyId","userApiScopes"],"type":"object"}},"databricks:index/getAppsSpaces:getAppsSpaces":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getAppsSpaces.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned\n"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSpacesProviderConfig:getAppsSpacesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getAppsSpaces.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getAppsSpacesProviderConfig:getAppsSpacesProviderConfig"},"spaces":{"items":{"$ref":"#/types/databricks:index/getAppsSpacesSpace:getAppsSpacesSpace"},"type":"array"}},"required":["spaces","id"],"type":"object"}},"databricks:index/getAwsAssumeRolePolicy:getAwsAssumeRolePolicy":{"description":"This data source constructs necessary AWS STS assume role policy for you.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nEnd-to-end example of provisioning Cross-account IAM role with\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003eand aws_iam_role:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst config = new pulumi.Config();\n// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\nconst databricksAccountId = config.requireObject\u003cany\u003e(\"databricksAccountId\");\nconst _this = databricks.getAwsCrossAccountPolicy({});\nconst crossAccountPolicy = new aws.index.IamPolicy(\"cross_account_policy\", {\n    name: `${prefix}-crossaccount-iam-policy`,\n    policy: _this.json,\n});\nconst thisGetAwsAssumeRolePolicy = databricks.getAwsAssumeRolePolicy({\n    externalId: databricksAccountId,\n});\nconst crossAccount = new aws.index.IamRole(\"cross_account\", {\n    name: `${prefix}-crossaccount-iam-role`,\n    assumeRolePolicy: thisGetAwsAssumeRolePolicy.json,\n    description: \"Grants Databricks full access to VPC resources\",\n});\nconst crossAccountIamRolePolicyAttachment = new 
aws.index.IamRolePolicyAttachment(\"cross_account\", {\n    policyArn: crossAccountPolicy.arn,\n    role: crossAccount.name,\n});\n// required only in case of multi-workspace setup\nconst thisMwsCredentials = new databricks.MwsCredentials(\"this\", {\n    accountId: databricksAccountId,\n    credentialsName: `${prefix}-creds`,\n    roleArn: crossAccount.arn,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nconfig = pulumi.Config()\n# Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\ndatabricks_account_id = config.require_object(\"databricksAccountId\")\nthis = databricks.get_aws_cross_account_policy()\ncross_account_policy = aws.index.IamPolicy(\"cross_account_policy\",\n    name=f{prefix}-crossaccount-iam-policy,\n    policy=this.json)\nthis_get_aws_assume_role_policy = databricks.get_aws_assume_role_policy(external_id=databricks_account_id)\ncross_account = aws.index.IamRole(\"cross_account\",\n    name=f{prefix}-crossaccount-iam-role,\n    assume_role_policy=this_get_aws_assume_role_policy.json,\n    description=Grants Databricks full access to VPC resources)\ncross_account_iam_role_policy_attachment = aws.index.IamRolePolicyAttachment(\"cross_account\",\n    policy_arn=cross_account_policy.arn,\n    role=cross_account.name)\n# required only in case of multi-workspace setup\nthis_mws_credentials = databricks.MwsCredentials(\"this\",\n    account_id=databricks_account_id,\n    credentials_name=f\"{prefix}-creds\",\n    role_arn=cross_account[\"arn\"])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var config = new Config();\n    // Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n    var databricksAccountId = config.RequireObject\u003cdynamic\u003e(\"databricksAccountId\");\n    var @this = Databricks.GetAwsCrossAccountPolicy.Invoke();\n\n    var crossAccountPolicy = new Aws.Index.IamPolicy(\"cross_account_policy\", new()\n    {\n        Name = $\"{prefix}-crossaccount-iam-policy\",\n        Policy = @this.Apply(getAwsCrossAccountPolicyResult =\u003e getAwsCrossAccountPolicyResult.Json),\n    });\n\n    var thisGetAwsAssumeRolePolicy = Databricks.GetAwsAssumeRolePolicy.Invoke(new()\n    {\n        ExternalId = databricksAccountId,\n    });\n\n    var crossAccount = new Aws.Index.IamRole(\"cross_account\", new()\n    {\n        Name = $\"{prefix}-crossaccount-iam-role\",\n        AssumeRolePolicy = thisGetAwsAssumeRolePolicy.Apply(getAwsAssumeRolePolicyResult =\u003e getAwsAssumeRolePolicyResult.Json),\n        Description = \"Grants Databricks full access to VPC resources\",\n    });\n\n    var crossAccountIamRolePolicyAttachment = new Aws.Index.IamRolePolicyAttachment(\"cross_account\", new()\n    {\n        PolicyArn = crossAccountPolicy.Arn,\n        Role = crossAccount.Name,\n    });\n\n    // required only in case of multi-workspace setup\n    var thisMwsCredentials = new Databricks.MwsCredentials(\"this\", new()\n    {\n        AccountId = databricksAccountId,\n        CredentialsName = $\"{prefix}-creds\",\n        RoleArn = crossAccount.Arn,\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tcfg := config.New(ctx, \"\")\n\t\t// Account Id that could be found in the top right corner of https://accounts.cloud.databricks.com/\n\t\tdatabricksAccountId := cfg.RequireObject(\"databricksAccountId\")\n\t\tthis, err := databricks.GetAwsCrossAccountPolicy(ctx, \u0026databricks.GetAwsCrossAccountPolicyArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcrossAccountPolicy, err := aws.NewIamPolicy(ctx, \"cross_account_policy\", \u0026aws.IamPolicyArgs{\n\t\t\tName:   fmt.Sprintf(\"%v-crossaccount-iam-policy\", prefix),\n\t\t\tPolicy: this.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsAssumeRolePolicy, err := databricks.GetAwsAssumeRolePolicy(ctx, \u0026databricks.GetAwsAssumeRolePolicyArgs{\n\t\t\tExternalId: databricksAccountId,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tcrossAccount, err := aws.NewIamRole(ctx, \"cross_account\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-crossaccount-iam-role\", prefix),\n\t\t\tAssumeRolePolicy: thisGetAwsAssumeRolePolicy.Json,\n\t\t\tDescription:      \"Grants Databricks full access to VPC resources\",\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewIamRolePolicyAttachment(ctx, \"cross_account\", \u0026aws.IamRolePolicyAttachmentArgs{\n\t\t\tPolicyArn: crossAccountPolicy.Arn,\n\t\t\tRole:      crossAccount.Name,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// required only in case of multi-workspace setup\n\t\t_, err = databricks.NewMwsCredentials(ctx, \"this\", \u0026databricks.MwsCredentialsArgs{\n\t\t\tAccountId:       pulumi.Any(databricksAccountId),\n\t\t\tCredentialsName: pulumi.Sprintf(\"%v-creds\", prefix),\n\t\t\tRoleArn:         crossAccount.Arn,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsCrossAccountPolicyArgs;\nimport com.pulumi.aws.IamPolicy;\nimport com.pulumi.aws.IamPolicyArgs;\nimport com.pulumi.databricks.inputs.GetAwsAssumeRolePolicyArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.aws.IamRolePolicyAttachment;\nimport com.pulumi.aws.IamRolePolicyAttachmentArgs;\nimport com.pulumi.databricks.MwsCredentials;\nimport com.pulumi.databricks.MwsCredentialsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var config = ctx.config();\n        final var databricksAccountId = config.get(\"databricksAccountId\");\n        final var this = DatabricksFunctions.getAwsCrossAccountPolicy(GetAwsCrossAccountPolicyArgs.builder()\n            .build());\n\n        var crossAccountPolicy = new IamPolicy(\"crossAccountPolicy\", IamPolicyArgs.builder()\n            .name(String.format(\"%s-crossaccount-iam-policy\", prefix))\n            .policy(this_.json())\n    
        .build());\n\n        final var thisGetAwsAssumeRolePolicy = DatabricksFunctions.getAwsAssumeRolePolicy(GetAwsAssumeRolePolicyArgs.builder()\n            .externalId(databricksAccountId)\n            .build());\n\n        var crossAccount = new IamRole(\"crossAccount\", IamRoleArgs.builder()\n            .name(String.format(\"%s-crossaccount-iam-role\", prefix))\n            .assumeRolePolicy(thisGetAwsAssumeRolePolicy.json())\n            .description(\"Grants Databricks full access to VPC resources\")\n            .build());\n\n        var crossAccountIamRolePolicyAttachment = new IamRolePolicyAttachment(\"crossAccountIamRolePolicyAttachment\", IamRolePolicyAttachmentArgs.builder()\n            .policyArn(crossAccountPolicy.arn())\n            .role(crossAccount.name())\n            .build());\n\n        // required only in case of multi-workspace setup\n        var thisMwsCredentials = new MwsCredentials(\"thisMwsCredentials\", MwsCredentialsArgs.builder()\n            .accountId(databricksAccountId)\n            .credentialsName(String.format(\"%s-creds\", prefix))\n            .roleArn(crossAccount.arn())\n            .build());\n\n    }\n}\n```\n```yaml\nconfiguration:\n  databricksAccountId:\n    type: dynamic\nresources:\n  crossAccountPolicy:\n    type: aws:IamPolicy\n    name: cross_account_policy\n    properties:\n      name: ${prefix}-crossaccount-iam-policy\n      policy: ${this.json}\n  crossAccount:\n    type: aws:IamRole\n    name: cross_account\n    properties:\n      name: ${prefix}-crossaccount-iam-role\n      assumeRolePolicy: ${thisGetAwsAssumeRolePolicy.json}\n      description: Grants Databricks full access to VPC resources\n  crossAccountIamRolePolicyAttachment:\n    type: aws:IamRolePolicyAttachment\n    name: cross_account\n    properties:\n      policyArn: ${crossAccountPolicy.arn}\n      role: ${crossAccount.name}\n  # required only in case of multi-workspace setup\n  thisMwsCredentials:\n    type: databricks:MwsCredentials\n    name: this\n    properties:\n      accountId: ${databricksAccountId}\n      credentialsName: ${prefix}-creds\n      roleArn: ${crossAccount.arn}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsCrossAccountPolicy\n      arguments: {}\n  thisGetAwsAssumeRolePolicy:\n    fn::invoke:\n      function: databricks:getAwsAssumeRolePolicy\n      arguments:\n        externalId: ${databricksAccountId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsBucketPolicy \" pulumi-lang-dotnet=\" databricks.getAwsBucketPolicy \" pulumi-lang-go=\" getAwsBucketPolicy \" pulumi-lang-python=\" get_aws_bucket_policy \" pulumi-lang-yaml=\" databricks.getAwsBucketPolicy \" pulumi-lang-java=\" databricks.getAwsBucketPolicy \"\u003e databricks.getAwsBucketPolicy \u003c/span\u003edata to configure a simple access policy for AWS S3 buckets, so that Databricks can access data in it.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsCrossAccountPolicy \" pulumi-lang-dotnet=\" databricks.getAwsCrossAccountPolicy \" pulumi-lang-go=\" getAwsCrossAccountPolicy \" pulumi-lang-python=\" get_aws_cross_account_policy \" pulumi-lang-yaml=\" databricks.getAwsCrossAccountPolicy \" pulumi-lang-java=\" databricks.getAwsCrossAccountPolicy \"\u003e databricks.getAwsCrossAccountPolicy \u003c/span\u003edata to 
construct the necessary AWS cross-account policy for you, which is based on [official documentation](https://docs.databricks.com/administration-guide/account-api/iam-role.html#language-Your%C2%A0VPC,%C2%A0default).\n","inputs":{"description":"A collection of arguments for invoking getAwsAssumeRolePolicy.\n","properties":{"awsPartition":{"type":"string","description":"AWS partition. The options are \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, `aws-us-gov`, or `aws-us-gov-dod`. Defaults to \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e\n","willReplaceOnChanges":true},"databricksAccountId":{"type":"string","deprecationMessage":"databricks_account_id will be removed in the next major release.","willReplaceOnChanges":true},"externalId":{"type":"string","description":"Account Id that could be found in the top right corner of [Accounts Console](https://accounts.cloud.databricks.com/).\n","willReplaceOnChanges":true},"forLogDelivery":{"type":"boolean","description":"Whether or not this assume role policy should be created for usage log delivery. Defaults to false.\n","willReplaceOnChanges":true}},"type":"object","required":["externalId"]},"outputs":{"description":"A collection of values returned by getAwsAssumeRolePolicy.\n","properties":{"awsPartition":{"type":"string"},"databricksAccountId":{"deprecationMessage":"databricks_account_id will be removed in the next major release.","type":"string"},"externalId":{"type":"string"},"forLogDelivery":{"type":"boolean"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"json":{"description":"AWS IAM Policy JSON document\n","type":"string"}},"required":["externalId","json","id"],"type":"object"}},"databricks:index/getAwsBucketPolicy:getAwsBucketPolicy":{"description":"This data source configures a simple access policy for AWS S3 buckets, so that Databricks can access data in it.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst thisS3Bucket = new aws.index.S3Bucket(\"this\", {\n    bucket: \"\u003cunique_bucket_name\u003e\",\n    forceDestroy: true,\n});\nconst _this = databricks.getAwsBucketPolicy({\n    bucket: thisS3Bucket.bucket,\n});\nconst thisS3BucketPolicy = new aws.index.S3BucketPolicy(\"this\", {\n    bucket: thisS3Bucket.id,\n    policy: _this.json,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nthis_s3_bucket = aws.index.S3Bucket(\"this\",\n    bucket=\"\u003cunique_bucket_name\u003e\",\n    force_destroy=True)\nthis = databricks.get_aws_bucket_policy(bucket=this_s3_bucket.bucket)\nthis_s3_bucket_policy = aws.index.S3BucketPolicy(\"this\",\n    bucket=this_s3_bucket.id,\n    policy=this.json)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var thisS3Bucket = new Aws.Index.S3Bucket(\"this\", new()\n    
{\n        Bucket = \"\u003cunique_bucket_name\u003e\",\n        ForceDestroy = true,\n    });\n\n    var @this = Databricks.GetAwsBucketPolicy.Invoke(new()\n    {\n        Bucket = thisS3Bucket.Bucket,\n    });\n\n    var thisS3BucketPolicy = new Aws.Index.S3BucketPolicy(\"this\", new()\n    {\n        Bucket = thisS3Bucket.Id,\n        Policy = @this.Apply(getAwsBucketPolicyResult =\u003e getAwsBucketPolicyResult.Json),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthisS3Bucket, err := aws.NewS3Bucket(ctx, \"this\", \u0026aws.S3BucketArgs{\n\t\t\tBucket:       \"\u003cunique_bucket_name\u003e\",\n\t\t\tForceDestroy: true,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.GetAwsBucketPolicy(ctx, \u0026databricks.GetAwsBucketPolicyArgs{\n\t\t\tBucket: thisS3Bucket.Bucket,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewS3BucketPolicy(ctx, \"this\", \u0026aws.S3BucketPolicyArgs{\n\t\t\tBucket: thisS3Bucket.Id,\n\t\t\tPolicy: this.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.S3Bucket;\nimport com.pulumi.aws.S3BucketArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsBucketPolicyArgs;\nimport com.pulumi.aws.S3BucketPolicy;\nimport com.pulumi.aws.S3BucketPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var thisS3Bucket = new S3Bucket(\"thisS3Bucket\", S3BucketArgs.builder()\n            .bucket(\"\u003cunique_bucket_name\u003e\")\n            .forceDestroy(true)\n            .build());\n\n        final var this = DatabricksFunctions.getAwsBucketPolicy(GetAwsBucketPolicyArgs.builder()\n            .bucket(thisS3Bucket.bucket())\n            .build());\n\n        var thisS3BucketPolicy = new S3BucketPolicy(\"thisS3BucketPolicy\", S3BucketPolicyArgs.builder()\n            .bucket(thisS3Bucket.id())\n            .policy(this_.json())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  thisS3Bucket:\n    type: aws:S3Bucket\n    name: this\n    properties:\n      bucket: \u003cunique_bucket_name\u003e\n      forceDestroy: true\n  thisS3BucketPolicy:\n    type: aws:S3BucketPolicy\n    name: this\n    properties:\n      bucket: ${thisS3Bucket.id}\n      policy: ${this.json}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsBucketPolicy\n      arguments:\n        bucket: ${thisS3Bucket.bucket}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nBucket policy with full access:\n\n","inputs":{"description":"A collection of arguments for invoking getAwsBucketPolicy.\n","properties":{"awsPartition":{"type":"string","description":"AWS partition. 
The options are \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, `aws-us-gov`, or `aws-us-gov-dod`. Defaults to \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e\n","willReplaceOnChanges":true},"bucket":{"type":"string","description":"AWS S3 Bucket name for which to generate the policy document. The name must follow the [S3 bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).\n","willReplaceOnChanges":true},"databricksAccountId":{"type":"string","deprecationMessage":"databricks_account_id will be removed in the next major release.","willReplaceOnChanges":true},"databricksE2AccountId":{"type":"string","description":"Your Databricks account ID. Used to generate restrictive IAM policies that will increase the security of your root bucket\n","willReplaceOnChanges":true},"fullAccessRole":{"type":"string","description":"Data access role that can have full access for this bucket\n","willReplaceOnChanges":true}},"type":"object","required":["bucket"]},"outputs":{"description":"A collection of values returned by getAwsBucketPolicy.\n","properties":{"awsPartition":{"type":"string"},"bucket":{"type":"string"},"databricksAccountId":{"deprecationMessage":"databricks_account_id will be removed in the next major release.","type":"string"},"databricksE2AccountId":{"type":"string"},"fullAccessRole":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"json":{"description":"(Read-only) AWS IAM Policy JSON document to grant Databricks full access to bucket.\n","type":"string"}},"required":["bucket","json","id"],"type":"object"}},"databricks:index/getAwsCrossAccountPolicy:getAwsCrossAccountPolicy":{"description":"This data source constructs the necessary AWS cross-account policy for you, which is based on [official documentation](https://docs.databricks.com/administration-guide/account-api/iam-role.html#language-Your%C2%A0VPC,%C2%A0default).\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nFor more detailed usage please see\u003cspan pulumi-lang-nodejs=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-dotnet=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-go=\" getAwsAssumeRolePolicy \" pulumi-lang-python=\" get_aws_assume_role_policy \" pulumi-lang-yaml=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-java=\" databricks.getAwsAssumeRolePolicy \"\u003e databricks.getAwsAssumeRolePolicy \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" databricksAwsS3Mount \" pulumi-lang-dotnet=\" DatabricksAwsS3Mount \" pulumi-lang-go=\" databricksAwsS3Mount \" pulumi-lang-python=\" databricks_aws_s3_mount \" pulumi-lang-yaml=\" databricksAwsS3Mount \" pulumi-lang-java=\" databricksAwsS3Mount \"\u003e databricks_aws_s3_mount \u003c/span\u003epages.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getAwsCrossAccountPolicy({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_aws_cross_account_policy()\n```\n```csharp\nusing 
System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetAwsCrossAccountPolicy.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetAwsCrossAccountPolicy(ctx, \u0026databricks.GetAwsCrossAccountPolicyArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsCrossAccountPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getAwsCrossAccountPolicy(GetAwsCrossAccountPolicyArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsCrossAccountPolicy\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning AWS Databricks workspaces with a Hub \u0026 Spoke firewall for data exfiltration protection guide\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-dotnet=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-go=\" getAwsAssumeRolePolicy \" pulumi-lang-python=\" get_aws_assume_role_policy \" pulumi-lang-yaml=\" databricks.getAwsAssumeRolePolicy \" pulumi-lang-java=\" databricks.getAwsAssumeRolePolicy \"\u003e databricks.getAwsAssumeRolePolicy \u003c/span\u003edata to construct the necessary AWS STS assume role policy.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getAwsBucketPolicy \" pulumi-lang-dotnet=\" databricks.getAwsBucketPolicy \" pulumi-lang-go=\" getAwsBucketPolicy \" pulumi-lang-python=\" get_aws_bucket_policy \" pulumi-lang-yaml=\" databricks.getAwsBucketPolicy \" pulumi-lang-java=\" databricks.getAwsBucketPolicy \"\u003e databricks.getAwsBucketPolicy \u003c/span\u003edata to configure a simple access policy for AWS S3 buckets, so that Databricks can access data in it.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n","inputs":{"description":"A collection of arguments for invoking getAwsCrossAccountPolicy.\n","properties":{"awsAccountId":{"type":"string","description":"— Your AWS account ID, which is a 
number.\n","willReplaceOnChanges":true},"awsPartition":{"type":"string","description":"AWS partition. The options are \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, `aws-us-gov`, or `aws-us-gov-dod`. Defaults to \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e\n","willReplaceOnChanges":true},"passRoles":{"type":"array","items":{"type":"string"},"description":"List of Data IAM role ARNs that are explicitly granted `iam:PassRole` action.\nThe below arguments are only valid for \u003cspan pulumi-lang-nodejs=\"`restricted`\" pulumi-lang-dotnet=\"`Restricted`\" pulumi-lang-go=\"`restricted`\" pulumi-lang-python=\"`restricted`\" pulumi-lang-yaml=\"`restricted`\" pulumi-lang-java=\"`restricted`\"\u003e`restricted`\u003c/span\u003e policy type\n","willReplaceOnChanges":true},"policyType":{"type":"string","description":"The type of cross account policy to generated: \u003cspan pulumi-lang-nodejs=\"`managed`\" pulumi-lang-dotnet=\"`Managed`\" pulumi-lang-go=\"`managed`\" pulumi-lang-python=\"`managed`\" pulumi-lang-yaml=\"`managed`\" pulumi-lang-java=\"`managed`\"\u003e`managed`\u003c/span\u003e for Databricks-managed VPC and \u003cspan pulumi-lang-nodejs=\"`customer`\" pulumi-lang-dotnet=\"`Customer`\" pulumi-lang-go=\"`customer`\" pulumi-lang-python=\"`customer`\" pulumi-lang-yaml=\"`customer`\" pulumi-lang-java=\"`customer`\"\u003e`customer`\u003c/span\u003e for customer-managed VPC, \u003cspan pulumi-lang-nodejs=\"`restricted`\" pulumi-lang-dotnet=\"`Restricted`\" pulumi-lang-go=\"`restricted`\" pulumi-lang-python=\"`restricted`\" pulumi-lang-yaml=\"`restricted`\" pulumi-lang-java=\"`restricted`\"\u003e`restricted`\u003c/span\u003e for customer-managed VPC with policy restrictions\n","willReplaceOnChanges":true},"region":{"type":"string","description":"— AWS Region name for your VPC deployment, for example `us-west-2`.\n","willReplaceOnChanges":true},"securityGroupId":{"type":"string","description":"— ID of your AWS security group. When you add a security group restriction, you cannot reuse the cross-account IAM role or reference a credentials ID (\u003cspan pulumi-lang-nodejs=\"`credentialsId`\" pulumi-lang-dotnet=\"`CredentialsId`\" pulumi-lang-go=\"`credentialsId`\" pulumi-lang-python=\"`credentials_id`\" pulumi-lang-yaml=\"`credentialsId`\" pulumi-lang-java=\"`credentialsId`\"\u003e`credentials_id`\u003c/span\u003e) for any other workspaces. 
For those other workspaces, you must create separate roles, policies, and credentials objects.\n","willReplaceOnChanges":true},"vpcId":{"type":"string","description":"— ID of the AWS VPC where you want to launch workspaces.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getAwsCrossAccountPolicy.\n","properties":{"awsAccountId":{"type":"string"},"awsPartition":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"json":{"description":"AWS IAM Policy JSON document\n","type":"string"},"passRoles":{"items":{"type":"string"},"type":"array"},"policyType":{"type":"string"},"region":{"type":"string"},"securityGroupId":{"type":"string"},"vpcId":{"type":"string"}},"required":["json","id"],"type":"object"}},"databricks:index/getAwsUnityCatalogAssumeRolePolicy:getAwsUnityCatalogAssumeRolePolicy":{"description":"This data source constructs the necessary AWS Unity Catalog assume role policy for you.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n\u003e This data source has an evolving API, which may change in future versions of the provider. Please always consult [latest documentation](https://docs.databricks.com/data-governance/unity-catalog/get-started.html#configure-a-storage-bucket-and-iam-role-in-aws) in case of any questions.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getAwsUnityCatalogPolicy({\n    awsAccountId: awsAccountId,\n    bucketName: \"databricks-bucket\",\n    roleName: `${prefix}-uc-access`,\n    kmsName: \"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\",\n});\nconst thisGetAwsUnityCatalogAssumeRolePolicy = databricks.getAwsUnityCatalogAssumeRolePolicy({\n    awsAccountId: awsAccountId,\n    roleName: `${prefix}-uc-access`,\n    externalId: \"12345\",\n});\nconst unityMetastore = new aws.index.IamPolicy(\"unity_metastore\", {\n    name: `${prefix}-unity-catalog-metastore-access-iam-policy`,\n    policy: _this.json,\n});\nconst metastoreDataAccess = new aws.index.IamRole(\"metastore_data_access\", {\n    name: `${prefix}-uc-access`,\n    assumeRolePolicy: thisGetAwsUnityCatalogAssumeRolePolicy.json,\n});\nconst metastoreDataAccessIamRolePolicyAttachment = new aws.index.IamRolePolicyAttachment(\"metastore_data_access\", {\n    role: metastoreDataAccess.name,\n    policyArn: unityMetastore.arn,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nthis = databricks.get_aws_unity_catalog_policy(aws_account_id=aws_account_id,\n    bucket_name=\"databricks-bucket\",\n    role_name=f\"{prefix}-uc-access\",\n    kms_name=\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\")\nthis_get_aws_unity_catalog_assume_role_policy = databricks.get_aws_unity_catalog_assume_role_policy(aws_account_id=aws_account_id,\n    role_name=f\"{prefix}-uc-access\",\n    external_id=\"12345\")\nunity_metastore = aws.index.IamPolicy(\"unity_metastore\",\n    name=f{prefix}-unity-catalog-metastore-access-iam-policy,\n    policy=this.json)\nmetastore_data_access = aws.index.IamRole(\"metastore_data_access\",\n    name=f{prefix}-uc-access,\n    assume_role_policy=this_get_aws_unity_catalog_assume_role_policy.json)\nmetastore_data_access_iam_role_policy_attachment = 
aws.index.IamRolePolicyAttachment(\"metastore_data_access\",\n    role=metastore_data_access.name,\n    policy_arn=unity_metastore.arn)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetAwsUnityCatalogPolicy.Invoke(new()\n    {\n        AwsAccountId = awsAccountId,\n        BucketName = \"databricks-bucket\",\n        RoleName = $\"{prefix}-uc-access\",\n        KmsName = \"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\",\n    });\n\n    var thisGetAwsUnityCatalogAssumeRolePolicy = Databricks.GetAwsUnityCatalogAssumeRolePolicy.Invoke(new()\n    {\n        AwsAccountId = awsAccountId,\n        RoleName = $\"{prefix}-uc-access\",\n        ExternalId = \"12345\",\n    });\n\n    var unityMetastore = new Aws.Index.IamPolicy(\"unity_metastore\", new()\n    {\n        Name = $\"{prefix}-unity-catalog-metastore-access-iam-policy\",\n        Policy = @this.Apply(getAwsUnityCatalogPolicyResult =\u003e getAwsUnityCatalogPolicyResult.Json),\n    });\n\n    var metastoreDataAccess = new Aws.Index.IamRole(\"metastore_data_access\", new()\n    {\n        Name = $\"{prefix}-uc-access\",\n        AssumeRolePolicy = thisGetAwsUnityCatalogAssumeRolePolicy.Apply(getAwsUnityCatalogAssumeRolePolicyResult =\u003e getAwsUnityCatalogAssumeRolePolicyResult.Json),\n    });\n\n    var metastoreDataAccessIamRolePolicyAttachment = new Aws.Index.IamRolePolicyAttachment(\"metastore_data_access\", new()\n    {\n        Role = metastoreDataAccess.Name,\n        PolicyArn = unityMetastore.Arn,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetAwsUnityCatalogPolicy(ctx, \u0026databricks.GetAwsUnityCatalogPolicyArgs{\n\t\t\tAwsAccountId: awsAccountId,\n\t\t\tBucketName:   \"databricks-bucket\",\n\t\t\tRoleName:     fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tKmsName:      pulumi.StringRef(\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsUnityCatalogAssumeRolePolicy, err := databricks.GetAwsUnityCatalogAssumeRolePolicy(ctx, \u0026databricks.GetAwsUnityCatalogAssumeRolePolicyArgs{\n\t\t\tAwsAccountId: awsAccountId,\n\t\t\tRoleName:     fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tExternalId:   \"12345\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tunityMetastore, err := aws.NewIamPolicy(ctx, \"unity_metastore\", \u0026aws.IamPolicyArgs{\n\t\t\tName:   fmt.Sprintf(\"%v-unity-catalog-metastore-access-iam-policy\", prefix),\n\t\t\tPolicy: this.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmetastoreDataAccess, err := aws.NewIamRole(ctx, \"metastore_data_access\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tAssumeRolePolicy: thisGetAwsUnityCatalogAssumeRolePolicy.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewIamRolePolicyAttachment(ctx, \"metastore_data_access\", \u0026aws.IamRolePolicyAttachmentArgs{\n\t\t\tRole:      metastoreDataAccess.Name,\n\t\t\tPolicyArn: unityMetastore.Arn,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn 
nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsUnityCatalogPolicyArgs;\nimport com.pulumi.databricks.inputs.GetAwsUnityCatalogAssumeRolePolicyArgs;\nimport com.pulumi.aws.IamPolicy;\nimport com.pulumi.aws.IamPolicyArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.aws.IamRolePolicyAttachment;\nimport com.pulumi.aws.IamRolePolicyAttachmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getAwsUnityCatalogPolicy(GetAwsUnityCatalogPolicyArgs.builder()\n            .awsAccountId(awsAccountId)\n            .bucketName(\"databricks-bucket\")\n            .roleName(String.format(\"%s-uc-access\", prefix))\n            .kmsName(\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\")\n            .build());\n\n        final var thisGetAwsUnityCatalogAssumeRolePolicy = DatabricksFunctions.getAwsUnityCatalogAssumeRolePolicy(GetAwsUnityCatalogAssumeRolePolicyArgs.builder()\n            .awsAccountId(awsAccountId)\n            .roleName(String.format(\"%s-uc-access\", prefix))\n            .externalId(\"12345\")\n            .build());\n\n        var unityMetastore = new IamPolicy(\"unityMetastore\", IamPolicyArgs.builder()\n            .name(String.format(\"%s-unity-catalog-metastore-access-iam-policy\", prefix))\n            .policy(this_.json())\n            .build());\n\n        var metastoreDataAccess = new IamRole(\"metastoreDataAccess\", IamRoleArgs.builder()\n            .name(String.format(\"%s-uc-access\", prefix))\n            .assumeRolePolicy(thisGetAwsUnityCatalogAssumeRolePolicy.json())\n            .build());\n\n        var metastoreDataAccessIamRolePolicyAttachment = new IamRolePolicyAttachment(\"metastoreDataAccessIamRolePolicyAttachment\", IamRolePolicyAttachmentArgs.builder()\n            .role(metastoreDataAccess.name())\n            .policyArn(unityMetastore.arn())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  unityMetastore:\n    type: aws:IamPolicy\n    name: unity_metastore\n    properties:\n      name: ${prefix}-unity-catalog-metastore-access-iam-policy\n      policy: ${this.json}\n  metastoreDataAccess:\n    type: aws:IamRole\n    name: metastore_data_access\n    properties:\n      name: ${prefix}-uc-access\n      assumeRolePolicy: ${thisGetAwsUnityCatalogAssumeRolePolicy.json}\n  metastoreDataAccessIamRolePolicyAttachment:\n    type: aws:IamRolePolicyAttachment\n    name: metastore_data_access\n    properties:\n      role: ${metastoreDataAccess.name}\n      policyArn: ${unityMetastore.arn}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsUnityCatalogPolicy\n      arguments:\n        awsAccountId: ${awsAccountId}\n        bucketName: databricks-bucket\n        roleName: ${prefix}-uc-access\n        kmsName: arn:aws:kms:us-west-2:111122223333:key/databricks-kms\n  thisGetAwsUnityCatalogAssumeRolePolicy:\n    fn::invoke:\n      function: databricks:getAwsUnityCatalogAssumeRolePolicy\n      arguments:\n        awsAccountId: ${awsAccountId}\n        roleName: ${prefix}-uc-access\n        externalId: 
'12345'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAwsUnityCatalogAssumeRolePolicy.\n","properties":{"awsAccountId":{"type":"string","description":"The Account ID of the current AWS account (not your Databricks account).\n","willReplaceOnChanges":true},"awsPartition":{"type":"string","description":"AWS partition. The options are \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e,`aws-us-gov` or `aws-us-gov-dod`. Defaults to \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e\n","willReplaceOnChanges":true},"externalId":{"type":"string","description":"The storage credential external id.\n","willReplaceOnChanges":true},"roleName":{"type":"string","description":"The name of the AWS IAM role to be created for Unity Catalog.\n","willReplaceOnChanges":true},"unityCatalogIamArn":{"type":"string","description":"The Databricks Unity Catalog IAM Role ARN. Defaults to `arn:aws:iam::414351767826:role/unity-catalog-prod-UCMasterRole-14S5ZJVKOTYTL` on standard AWS partition selection, `arn:aws-us-gov:iam::044793339203:role/unity-catalog-prod-UCMasterRole-1QRFA8SGY15OJ` on GovCloud partition selection, and `arn:aws-us-gov:iam::170661010020:role/unity-catalog-prod-UCMasterRole-1DI6DL6ZP26AS` on GovCloud DoD partition selection\n"}},"type":"object","required":["awsAccountId","externalId","roleName"]},"outputs":{"description":"A collection of values returned by getAwsUnityCatalogAssumeRolePolicy.\n","properties":{"awsAccountId":{"type":"string"},"awsPartition":{"type":"string"},"externalId":{"type":"string"},"id":{"type":"string"},"json":{"description":"AWS IAM Policy JSON document for assume role\n","type":"string"},"roleName":{"type":"string"},"unityCatalogIamArn":{"type":"string"}},"required":["awsAccountId","externalId","id","json","roleName","unityCatalogIamArn"],"type":"object"}},"databricks:index/getAwsUnityCatalogPolicy:getAwsUnityCatalogPolicy":{"description":"This data source constructs the necessary AWS Unity Catalog policy for you.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n\u003e This data source has an evolving API, which may change in future versions of the provider. 
Please always consult [latest documentation](https://docs.databricks.com/data-governance/unity-catalog/get-started.html#configure-a-storage-bucket-and-iam-role-in-aws) in case of any questions.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getAwsUnityCatalogPolicy({\n    awsAccountId: awsAccountId,\n    bucketName: \"databricks-bucket\",\n    roleName: `${prefix}-uc-access`,\n    kmsName: \"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\",\n});\nconst thisGetAwsUnityCatalogAssumeRolePolicy = databricks.getAwsUnityCatalogAssumeRolePolicy({\n    awsAccountId: awsAccountId,\n    roleName: `${prefix}-uc-access`,\n    externalId: \"12345\",\n});\nconst unityMetastore = new aws.index.IamPolicy(\"unity_metastore\", {\n    name: `${prefix}-unity-catalog-metastore-access-iam-policy`,\n    policy: _this.json,\n});\nconst metastoreDataAccess = new aws.index.IamRole(\"metastore_data_access\", {\n    name: `${prefix}-uc-access`,\n    assumeRolePolicy: thisGetAwsUnityCatalogAssumeRolePolicy.json,\n});\nconst metastoreDataAccessIamRolePolicyAttachment = new aws.index.IamRolePolicyAttachment(\"metastore_data_access\", {\n    role: metastoreDataAccess.name,\n    policyArn: unityMetastore.arn,\n});\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nthis = databricks.get_aws_unity_catalog_policy(aws_account_id=aws_account_id,\n    bucket_name=\"databricks-bucket\",\n    role_name=f\"{prefix}-uc-access\",\n    kms_name=\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\")\nthis_get_aws_unity_catalog_assume_role_policy = databricks.get_aws_unity_catalog_assume_role_policy(aws_account_id=aws_account_id,\n    role_name=f\"{prefix}-uc-access\",\n    external_id=\"12345\")\nunity_metastore = aws.index.IamPolicy(\"unity_metastore\",\n    name=f{prefix}-unity-catalog-metastore-access-iam-policy,\n    policy=this.json)\nmetastore_data_access = aws.index.IamRole(\"metastore_data_access\",\n    name=f{prefix}-uc-access,\n    assume_role_policy=this_get_aws_unity_catalog_assume_role_policy.json)\nmetastore_data_access_iam_role_policy_attachment = aws.index.IamRolePolicyAttachment(\"metastore_data_access\",\n    role=metastore_data_access.name,\n    policy_arn=unity_metastore.arn)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetAwsUnityCatalogPolicy.Invoke(new()\n    {\n        AwsAccountId = awsAccountId,\n        BucketName = \"databricks-bucket\",\n        RoleName = $\"{prefix}-uc-access\",\n        KmsName = \"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\",\n    });\n\n    var thisGetAwsUnityCatalogAssumeRolePolicy = Databricks.GetAwsUnityCatalogAssumeRolePolicy.Invoke(new()\n    {\n        AwsAccountId = awsAccountId,\n        RoleName = $\"{prefix}-uc-access\",\n        ExternalId = \"12345\",\n    });\n\n    var unityMetastore = new Aws.Index.IamPolicy(\"unity_metastore\", new()\n    {\n        Name = $\"{prefix}-unity-catalog-metastore-access-iam-policy\",\n        Policy = @this.Apply(getAwsUnityCatalogPolicyResult =\u003e getAwsUnityCatalogPolicyResult.Json),\n    });\n\n    var metastoreDataAccess = new Aws.Index.IamRole(\"metastore_data_access\", new()\n    {\n    
    Name = $\"{prefix}-uc-access\",\n        AssumeRolePolicy = thisGetAwsUnityCatalogAssumeRolePolicy.Apply(getAwsUnityCatalogAssumeRolePolicyResult =\u003e getAwsUnityCatalogAssumeRolePolicyResult.Json),\n    });\n\n    var metastoreDataAccessIamRolePolicyAttachment = new Aws.Index.IamRolePolicyAttachment(\"metastore_data_access\", new()\n    {\n        Role = metastoreDataAccess.Name,\n        PolicyArn = unityMetastore.Arn,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetAwsUnityCatalogPolicy(ctx, \u0026databricks.GetAwsUnityCatalogPolicyArgs{\n\t\t\tAwsAccountId: awsAccountId,\n\t\t\tBucketName:   \"databricks-bucket\",\n\t\t\tRoleName:     fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tKmsName:      pulumi.StringRef(\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthisGetAwsUnityCatalogAssumeRolePolicy, err := databricks.GetAwsUnityCatalogAssumeRolePolicy(ctx, \u0026databricks.GetAwsUnityCatalogAssumeRolePolicyArgs{\n\t\t\tAwsAccountId: awsAccountId,\n\t\t\tRoleName:     fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tExternalId:   \"12345\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tunityMetastore, err := aws.NewIamPolicy(ctx, \"unity_metastore\", \u0026aws.IamPolicyArgs{\n\t\t\tName:   fmt.Sprintf(\"%v-unity-catalog-metastore-access-iam-policy\", prefix),\n\t\t\tPolicy: this.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tmetastoreDataAccess, err := aws.NewIamRole(ctx, \"metastore_data_access\", \u0026aws.IamRoleArgs{\n\t\t\tName:             fmt.Sprintf(\"%v-uc-access\", prefix),\n\t\t\tAssumeRolePolicy: thisGetAwsUnityCatalogAssumeRolePolicy.Json,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = aws.NewIamRolePolicyAttachment(ctx, \"metastore_data_access\", \u0026aws.IamRolePolicyAttachmentArgs{\n\t\t\tRole:      metastoreDataAccess.Name,\n\t\t\tPolicyArn: unityMetastore.Arn,\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetAwsUnityCatalogPolicyArgs;\nimport com.pulumi.databricks.inputs.GetAwsUnityCatalogAssumeRolePolicyArgs;\nimport com.pulumi.aws.IamPolicy;\nimport com.pulumi.aws.IamPolicyArgs;\nimport com.pulumi.aws.IamRole;\nimport com.pulumi.aws.IamRoleArgs;\nimport com.pulumi.aws.IamRolePolicyAttachment;\nimport com.pulumi.aws.IamRolePolicyAttachmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getAwsUnityCatalogPolicy(GetAwsUnityCatalogPolicyArgs.builder()\n            .awsAccountId(awsAccountId)\n            .bucketName(\"databricks-bucket\")\n            .roleName(String.format(\"%s-uc-access\", prefix))\n            .kmsName(\"arn:aws:kms:us-west-2:111122223333:key/databricks-kms\")\n            .build());\n\n  
      final var thisGetAwsUnityCatalogAssumeRolePolicy = DatabricksFunctions.getAwsUnityCatalogAssumeRolePolicy(GetAwsUnityCatalogAssumeRolePolicyArgs.builder()\n            .awsAccountId(awsAccountId)\n            .roleName(String.format(\"%s-uc-access\", prefix))\n            .externalId(\"12345\")\n            .build());\n\n        var unityMetastore = new IamPolicy(\"unityMetastore\", IamPolicyArgs.builder()\n            .name(String.format(\"%s-unity-catalog-metastore-access-iam-policy\", prefix))\n            .policy(this_.json())\n            .build());\n\n        var metastoreDataAccess = new IamRole(\"metastoreDataAccess\", IamRoleArgs.builder()\n            .name(String.format(\"%s-uc-access\", prefix))\n            .assumeRolePolicy(thisGetAwsUnityCatalogAssumeRolePolicy.json())\n            .build());\n\n        var metastoreDataAccessIamRolePolicyAttachment = new IamRolePolicyAttachment(\"metastoreDataAccessIamRolePolicyAttachment\", IamRolePolicyAttachmentArgs.builder()\n            .role(metastoreDataAccess.name())\n            .policyArn(unityMetastore.arn())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  unityMetastore:\n    type: aws:IamPolicy\n    name: unity_metastore\n    properties:\n      name: ${prefix}-unity-catalog-metastore-access-iam-policy\n      policy: ${this.json}\n  metastoreDataAccess:\n    type: aws:IamRole\n    name: metastore_data_access\n    properties:\n      name: ${prefix}-uc-access\n      assumeRolePolicy: ${thisGetAwsUnityCatalogAssumeRolePolicy.json}\n  metastoreDataAccessIamRolePolicyAttachment:\n    type: aws:IamRolePolicyAttachment\n    name: metastore_data_access\n    properties:\n      role: ${metastoreDataAccess.name}\n      policyArn: ${unityMetastore.arn}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getAwsUnityCatalogPolicy\n      arguments:\n        awsAccountId: ${awsAccountId}\n        bucketName: databricks-bucket\n        roleName: ${prefix}-uc-access\n        kmsName: arn:aws:kms:us-west-2:111122223333:key/databricks-kms\n  thisGetAwsUnityCatalogAssumeRolePolicy:\n    fn::invoke:\n      function: databricks:getAwsUnityCatalogAssumeRolePolicy\n      arguments:\n        awsAccountId: ${awsAccountId}\n        roleName: ${prefix}-uc-access\n        externalId: '12345'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getAwsUnityCatalogPolicy.\n","properties":{"awsAccountId":{"type":"string","description":"The Account ID of the current AWS account (not your Databricks account).\n","willReplaceOnChanges":true},"awsPartition":{"type":"string","description":"AWS partition. The options are \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e, `aws-us-gov`, or `aws-us-gov-dod`. Defaults to \u003cspan pulumi-lang-nodejs=\"`aws`\" pulumi-lang-dotnet=\"`Aws`\" pulumi-lang-go=\"`aws`\" pulumi-lang-python=\"`aws`\" pulumi-lang-yaml=\"`aws`\" pulumi-lang-java=\"`aws`\"\u003e`aws`\u003c/span\u003e\n","willReplaceOnChanges":true},"bucketName":{"type":"string","description":"The name of the S3 bucket used as root storage location for [managed tables](https://docs.databricks.com/data-governance/unity-catalog/index.html#managed-table) in Unity Catalog.  
The name must follow the [S3 bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html).\n","willReplaceOnChanges":true},"kmsName":{"type":"string","description":"If encryption is enabled, provide the ARN of the KMS key that encrypts the S3 bucket contents. If encryption is disabled, do not provide this argument.\n","willReplaceOnChanges":true},"roleName":{"type":"string","description":"The name of the AWS IAM role that you created in the previous step in the [official documentation](https://docs.databricks.com/data-governance/unity-catalog/get-started.html#configure-a-storage-bucket-and-iam-role-in-aws).\n","willReplaceOnChanges":true}},"type":"object","required":["awsAccountId","bucketName","roleName"]},"outputs":{"description":"A collection of values returned by getAwsUnityCatalogPolicy.\n","properties":{"awsAccountId":{"type":"string"},"awsPartition":{"type":"string"},"bucketName":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"json":{"description":"AWS IAM Policy JSON document\n","type":"string"},"kmsName":{"type":"string"},"roleName":{"type":"string"}},"required":["awsAccountId","bucketName","json","roleName","id"],"type":"object"}},"databricks:index/getBudgetPolicies:getBudgetPolicies":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of budget policies.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nGetting a list of all budget policies:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getBudgetPolicies({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_budget_policies()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetBudgetPolicies.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetBudgetPolicies(ctx, \u0026databricks.GetBudgetPoliciesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetBudgetPoliciesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getBudgetPolicies(GetBudgetPoliciesArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getBudgetPolicies\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for 
invoking getBudgetPolicies.\n","properties":{"filterBy":{"$ref":"#/types/databricks:index/getBudgetPoliciesFilterBy:getBudgetPoliciesFilterBy","description":"A filter to apply to the list of policies\n"},"pageSize":{"type":"integer","description":"The maximum number of budget policies to return.\nIf unspecified, at most 100 budget policies will be returned.\nThe maximum value is 1000; values above 1000 will be coerced to 1000\n"},"sortSpec":{"$ref":"#/types/databricks:index/getBudgetPoliciesSortSpec:getBudgetPoliciesSortSpec","description":"The sort specification\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getBudgetPolicies.\n","properties":{"filterBy":{"$ref":"#/types/databricks:index/getBudgetPoliciesFilterBy:getBudgetPoliciesFilterBy"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"policies":{"items":{"$ref":"#/types/databricks:index/getBudgetPoliciesPolicy:getBudgetPoliciesPolicy"},"type":"array"},"sortSpec":{"$ref":"#/types/databricks:index/getBudgetPoliciesSortSpec:getBudgetPoliciesSortSpec"}},"required":["policies","id"],"type":"object"}},"databricks:index/getBudgetPolicy:getBudgetPolicy":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single budget policy.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nReferring to a budget policy by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getBudgetPolicy({\n    policyId: \"test\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_budget_policy(policy_id=\"test\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetBudgetPolicy.Invoke(new()\n    {\n        PolicyId = \"test\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupBudgetPolicy(ctx, \u0026databricks.LookupBudgetPolicyArgs{\n\t\t\tPolicyId: \"test\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetBudgetPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getBudgetPolicy(GetBudgetPolicyArgs.builder()\n            .policyId(\"test\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getBudgetPolicy\n      arguments:\n        policyId: test\n```\n\u003c!--End 
PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getBudgetPolicy.\n","properties":{"policyId":{"type":"string","description":"The Id of the policy. This field is generated by Databricks and globally unique\n"}},"type":"object","required":["policyId"]},"outputs":{"description":"A collection of values returned by getBudgetPolicy.\n","properties":{"bindingWorkspaceIds":{"description":"(list of integer) - List of workspaces that this budget policy will be exclusively bound to.\nAn empty binding implies that this budget policy is open to any workspace in the account\n","items":{"type":"integer"},"type":"array"},"customTags":{"description":"(list of CustomPolicyTag) - A list of tags defined by the customer. At most 20 entries are allowed per policy\n","items":{"$ref":"#/types/databricks:index/getBudgetPolicyCustomTag:getBudgetPolicyCustomTag"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"policyId":{"description":"(string) - The Id of the policy. This field is generated by Databricks and globally unique\n","type":"string"},"policyName":{"description":"(string) - The name of the policy.\n- Must be unique among active policies.\n- Can contain only characters from the ISO 8859-1 (latin1) set.\n- Can't start with reserved keywords such as `databricks:default-policy`\n","type":"string"}},"required":["bindingWorkspaceIds","customTags","policyId","policyName","id"],"type":"object"}},"databricks:index/getCatalog:getCatalog":{"description":"Retrieves details of a specific catalog in Unity Catalog, that were created by Pulumi or manually. Use\u003cspan pulumi-lang-nodejs=\" databricks.getCatalogs \" pulumi-lang-dotnet=\" databricks.getCatalogs \" pulumi-lang-go=\" getCatalogs \" pulumi-lang-python=\" get_catalogs \" pulumi-lang-yaml=\" databricks.getCatalogs \" pulumi-lang-java=\" databricks.getCatalogs \"\u003e databricks.getCatalogs \u003c/span\u003eto retrieve IDs of multiple catalogs from Unity Catalog\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nRead  on a specific catalog \u003cspan pulumi-lang-nodejs=\"`test`\" pulumi-lang-dotnet=\"`Test`\" pulumi-lang-go=\"`test`\" pulumi-lang-python=\"`test`\" pulumi-lang-yaml=\"`test`\" pulumi-lang-java=\"`test`\"\u003e`test`\u003c/span\u003e:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst test = databricks.getCatalog({\n    name: \"test\",\n});\nconst things = new databricks.Grants(\"things\", {\n    catalog: test.then(test =\u003e test.name),\n    grants: [{\n        principal: \"sensitive\",\n        privileges: [\"USE_CATALOG\"],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ntest = databricks.get_catalog(name=\"test\")\nthings = databricks.Grants(\"things\",\n    catalog=test.name,\n    grants=[{\n        \"principal\": \"sensitive\",\n        \"privileges\": [\"USE_CATALOG\"],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var test = Databricks.GetCatalog.Invoke(new()\n    {\n        Name = \"test\",\n    });\n\n    var things = new Databricks.Grants(\"things\", new()\n    {\n        Catalog = test.Apply(getCatalogResult =\u003e getCatalogResult.Name),\n        GrantDetails = new[]\n        
{\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"sensitive\",\n                Privileges = new[]\n                {\n                    \"USE_CATALOG\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\ttest, err := databricks.LookupCatalog(ctx, \u0026databricks.LookupCatalogArgs{\n\t\t\tName: \"test\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"things\", \u0026databricks.GrantsArgs{\n\t\t\tCatalog: pulumi.String(test.Name),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"USE_CATALOG\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCatalogArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var test = DatabricksFunctions.getCatalog(GetCatalogArgs.builder()\n            .name(\"test\")\n            .build());\n\n        var things = new Grants(\"things\", GrantsArgs.builder()\n            .catalog(test.name())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"sensitive\")\n                .privileges(\"USE_CATALOG\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  things:\n    type: databricks:Grants\n    properties:\n      catalog: ${test.name}\n      grants:\n        - principal: sensitive\n          privileges:\n            - USE_CATALOG\nvariables:\n  test:\n    fn::invoke:\n      function: databricks:getCatalog\n      arguments:\n        name: test\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grant \" pulumi-lang-dotnet=\" databricks.Grant \" pulumi-lang-go=\" Grant \" pulumi-lang-python=\" Grant \" pulumi-lang-yaml=\" databricks.Grant \" pulumi-lang-java=\" databricks.Grant \"\u003e databricks.Grant \u003c/span\u003eto manage grants within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getCatalogs \" pulumi-lang-dotnet=\" databricks.getCatalogs \" pulumi-lang-go=\" getCatalogs \" pulumi-lang-python=\" get_catalogs \" pulumi-lang-yaml=\" databricks.getCatalogs \" pulumi-lang-java=\" databricks.getCatalogs \"\u003e databricks.getCatalogs \u003c/span\u003eto list all catalogs within Unity Catalog metastore.\n","inputs":{"description":"A collection of arguments for invoking 
getCatalog.\n","properties":{"catalogInfo":{"$ref":"#/types/databricks:index/getCatalogCatalogInfo:getCatalogCatalogInfo","description":"the [CatalogInfo](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogInfo) object for a Unity Catalog catalog. This contains the following attributes (see ):\n"},"id":{"type":"string","description":"same as the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e\n"},"name":{"type":"string","description":"name of the catalog\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getCatalogProviderConfig:getCatalogProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getCatalog.\n","properties":{"catalogInfo":{"$ref":"#/types/databricks:index/getCatalogCatalogInfo:getCatalogCatalogInfo","description":"the [CatalogInfo](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogInfo) object for a Unity Catalog catalog. This contains the following attributes (see ):\n"},"id":{"description":"same as the \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e\n","type":"string"},"name":{"description":"Name of the catalog\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getCatalogProviderConfig:getCatalogProviderConfig"}},"required":["catalogInfo","id","name"],"type":"object"}},"databricks:index/getCatalogs:getCatalogs":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eids, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nListing all catalogs:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getCatalogs({});\nexport const allCatalogs = all;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_catalogs()\npulumi.export(\"allCatalogs\", all)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetCatalogs.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allCatalogs\"] = all,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetCatalogs(ctx, \u0026databricks.GetCatalogsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allCatalogs\", all)\n\t\treturn 
nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCatalogsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getCatalogs(GetCatalogsArgs.builder()\n            .build());\n\n        ctx.export(\"allCatalogs\", all);\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getCatalogs\n      arguments: {}\noutputs:\n  allCatalogs: ${all}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getCatalogs.\n","properties":{"ids":{"type":"array","items":{"type":"string"},"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003enames\n"},"providerConfig":{"$ref":"#/types/databricks:index/getCatalogsProviderConfig:getCatalogsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getCatalogs.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003enames\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getCatalogsProviderConfig:getCatalogsProviderConfig"}},"required":["ids","id"],"type":"object"}},"databricks:index/getCluster:getCluster":{"description":"Retrieves information about a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eusing its id. 
This could be retrieved programmatically using\u003cspan pulumi-lang-nodejs=\" databricks.getClusters \" pulumi-lang-dotnet=\" databricks.getClusters \" pulumi-lang-go=\" getClusters \" pulumi-lang-python=\" get_clusters \" pulumi-lang-yaml=\" databricks.getClusters \" pulumi-lang-java=\" databricks.getClusters \"\u003e databricks.getClusters \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nRetrieve attributes of each cluster in a workspace\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getClusters({});\nconst allGetCluster = all.then(all =\u003e Object.entries(all.ids).reduce((__obj, [__key, __value]) =\u003e ({ ...__obj, [__key]: databricks.getCluster({\n    clusterId: __value,\n}) }), {}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_clusters()\nall_get_cluster = {__key: databricks.get_cluster(cluster_id=__value) for __key, __value in enumerate(all.ids)}\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetClusters.Invoke();\n\n    var allGetCluster = all.Apply(getClustersResult =\u003e getClustersResult.Ids\n        .Select((id, index) =\u003e new { index, id })\n        .ToDictionary(entry =\u003e entry.index, entry =\u003e Databricks.GetCluster.Invoke(new()\n        {\n            ClusterId = entry.id,\n        })));\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Multiple clusters with the same name\n\nWhen fetching a cluster whose name is not unique (including terminated but not permanently deleted clusters), you must use the \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e argument to uniquely identify the cluster. 
Combine this data source with \u003cspan pulumi-lang-nodejs=\"`databricks.getClusters`\" pulumi-lang-dotnet=\"`databricks.getClusters`\" pulumi-lang-go=\"`getClusters`\" pulumi-lang-python=\"`get_clusters`\" pulumi-lang-yaml=\"`databricks.getClusters`\" pulumi-lang-java=\"`databricks.getClusters`\"\u003e`databricks.getClusters`\u003c/span\u003e to get the \u003cspan pulumi-lang-nodejs=\"`clusterId`\" pulumi-lang-dotnet=\"`ClusterId`\" pulumi-lang-go=\"`clusterId`\" pulumi-lang-python=\"`cluster_id`\" pulumi-lang-yaml=\"`clusterId`\" pulumi-lang-java=\"`clusterId`\"\u003e`cluster_id`\u003c/span\u003e of the cluster you want to fetch.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst myCluster = databricks.getClusters({\n    clusterNameContains: \"my-cluster\",\n    filterBy: {\n        clusterStates: [\"RUNNING\"],\n    },\n});\nconst myClusterGetCluster = myCluster.then(myCluster =\u003e databricks.getCluster({\n    clusterId: myCluster.ids?.[0],\n}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nmy_cluster = databricks.get_clusters(cluster_name_contains=\"my-cluster\",\n    filter_by={\n        \"cluster_states\": [\"RUNNING\"],\n    })\nmy_cluster_get_cluster = databricks.get_cluster(cluster_id=my_cluster.ids[0])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var myCluster = Databricks.GetClusters.Invoke(new()\n    {\n        ClusterNameContains = \"my-cluster\",\n        FilterBy = new Databricks.Inputs.GetClustersFilterByInputArgs\n        {\n            ClusterStates = new[]\n            {\n                \"RUNNING\",\n            },\n        },\n    });\n\n    var myClusterGetCluster = Databricks.GetCluster.Invoke(new()\n    {\n        ClusterId = myCluster.Apply(getClustersResult =\u003e getClustersResult.Ids[0]),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmyCluster, err := databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tClusterNameContains: pulumi.StringRef(\"my-cluster\"),\n\t\t\tFilterBy: databricks.GetClustersFilterBy{\n\t\t\t\tClusterStates: []string{\n\t\t\t\t\t\"RUNNING\",\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupCluster(ctx, \u0026databricks.LookupClusterArgs{\n\t\t\tClusterId: pulumi.StringRef(myCluster.Ids[0]),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClustersArgs;\nimport com.pulumi.databricks.inputs.GetClustersFilterByArgs;\nimport com.pulumi.databricks.inputs.GetClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var myCluster = 
DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .clusterNameContains(\"my-cluster\")\n            .filterBy(GetClustersFilterByArgs.builder()\n                .clusterStates(\"RUNNING\")\n                .build())\n            .build());\n\n        final var myClusterGetCluster = DatabricksFunctions.getCluster(GetClusterArgs.builder()\n            .clusterId(myCluster.ids()[0])\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  myCluster:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        clusterNameContains: my-cluster\n        filterBy:\n          clusterStates:\n            - RUNNING\n  myClusterGetCluster:\n    fn::invoke:\n      function: databricks:getCluster\n      arguments:\n        clusterId: ${myCluster.ids[0]}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" 
pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n","inputs":{"description":"A collection of arguments for invoking getCluster.\n","properties":{"clusterId":{"type":"string","description":"The id of the cluster.\n"},"clusterInfo":{"$ref":"#/types/databricks:index/getClusterClusterInfo:getClusterClusterInfo","description":"block, consisting of following fields:\n"},"clusterName":{"type":"string","description":"The exact name of the cluster to search. Can only be specified if there is exactly one cluster with the provided name.\n"},"id":{"type":"string","description":"cluster ID\n"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterProviderConfig:getClusterProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getCluster.\n","properties":{"clusterId":{"type":"string"},"clusterInfo":{"$ref":"#/types/databricks:index/getClusterClusterInfo:getClusterClusterInfo","description":"block, consisting of following fields:\n"},"clusterName":{"description":"Cluster name, which doesn’t have to be unique.\n","type":"string"},"id":{"description":"cluster ID\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterProviderConfig:getClusterProviderConfig"}},"required":["clusterId","clusterInfo","clusterName","id"],"type":"object"}},"databricks:index/getClusterPolicy:getClusterPolicy":{"description":"Retrieves information about databricks_cluster_policy.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nReferring to a cluster policy by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst personal = databricks.getClusterPolicy({\n    name: \"Personal Compute\",\n});\nconst myCluster = new databricks.Cluster(\"my_cluster\", {policyId: personal.then(personal =\u003e personal.id)});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npersonal = databricks.get_cluster_policy(name=\"Personal Compute\")\nmy_cluster = databricks.Cluster(\"my_cluster\", policy_id=personal.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var personal = Databricks.GetClusterPolicy.Invoke(new()\n    {\n        Name = \"Personal Compute\",\n    });\n\n    var myCluster = new Databricks.Cluster(\"my_cluster\", new()\n    {\n        PolicyId = personal.Apply(getClusterPolicyResult =\u003e getClusterPolicyResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tpersonal, err := databricks.LookupClusterPolicy(ctx, \u0026databricks.LookupClusterPolicyArgs{\n\t\t\tName: pulumi.StringRef(\"Personal Compute\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"my_cluster\", \u0026databricks.ClusterArgs{\n\t\t\tPolicyId: pulumi.String(personal.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn 
nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClusterPolicyArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var personal = DatabricksFunctions.getClusterPolicy(GetClusterPolicyArgs.builder()\n            .name(\"Personal Compute\")\n            .build());\n\n        var myCluster = new Cluster(\"myCluster\", ClusterArgs.builder()\n            .policyId(personal.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myCluster:\n    type: databricks:Cluster\n    name: my_cluster\n    properties:\n      policyId: ${personal.id}\nvariables:\n  personal:\n    fn::invoke:\n      function: databricks:getClusterPolicy\n      arguments:\n        name: Personal Compute\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getClusterPolicy.\n","properties":{"definition":{"type":"string","description":"Policy definition: JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).\n"},"description":{"type":"string","description":"Additional human-readable description of the cluster policy.\n"},"id":{"type":"string","description":"The id of the cluster policy.\n"},"isDefault":{"type":"boolean","description":"If true, policy is a default policy created and managed by Databricks.\n"},"maxClustersPerUser":{"type":"integer","description":"Max number of clusters per user that can be active using this policy.\n"},"name":{"type":"string","description":"Name of the cluster policy. The cluster policy must exist before this resource can be planned.\n"},"policyFamilyDefinitionOverrides":{"type":"string","description":"Policy definition JSON document expressed in Databricks [Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definitions).\n"},"policyFamilyId":{"type":"string","description":"ID of the policy family.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterPolicyProviderConfig:getClusterPolicyProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getClusterPolicy.\n","properties":{"definition":{"description":"Policy definition: JSON document expressed in [Databricks Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definition).\n","type":"string"},"description":{"description":"Additional human-readable description of the cluster policy.\n","type":"string"},"id":{"description":"The id of the cluster policy.\n","type":"string"},"isDefault":{"description":"If true, policy is a default policy created and managed by Databricks.\n","type":"boolean"},"maxClustersPerUser":{"description":"Max number of clusters per user that can be active using this policy.\n","type":"integer"},"name":{"type":"string"},"policyFamilyDefinitionOverrides":{"description":"Policy definition JSON document expressed in Databricks [Policy Definition Language](https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-policy-definitions).\n","type":"string"},"policyFamilyId":{"description":"ID of the policy family.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getClusterPolicyProviderConfig:getClusterPolicyProviderConfig"}},"required":["definition","description","id","isDefault","maxClustersPerUser","name","policyFamilyDefinitionOverrides","policyFamilyId"],"type":"object"}},"databricks:index/getClusters:getClusters":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids, that were created by Pulumi or manually, with or without databricks_cluster_policy.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nRetrieve cluster IDs for all clusters:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getClusters({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_clusters()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetClusters.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClustersArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var 
all = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nRetrieve cluster IDs for all clusters having \"Shared\" in the cluster name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst allShared = databricks.getClusters({\n    clusterNameContains: \"shared\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall_shared = databricks.get_clusters(cluster_name_contains=\"shared\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var allShared = Databricks.GetClusters.Invoke(new()\n    {\n        ClusterNameContains = \"shared\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tClusterNameContains: pulumi.StringRef(\"shared\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClustersArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var allShared = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .clusterNameContains(\"shared\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  allShared:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        clusterNameContains: shared\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n### Filtering clusters\n\nListing clusters can be slow for workspaces containing many clusters. Use filters to limit the number of clusters returned for better performance. 
You can filter clusters by state, source, policy, or pinned status:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst allRunningClusters = databricks.getClusters({\n    filterBy: {\n        clusterStates: [\"RUNNING\"],\n    },\n});\nconst allClustersWithPolicy = databricks.getClusters({\n    filterBy: {\n        policyId: \"1234-5678-9012\",\n    },\n});\nconst allApiClusters = databricks.getClusters({\n    filterBy: {\n        clusterSources: [\"API\"],\n    },\n});\nconst allPinnedClusters = databricks.getClusters({\n    filterBy: {\n        isPinned: true,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall_running_clusters = databricks.get_clusters(filter_by={\n    \"cluster_states\": [\"RUNNING\"],\n})\nall_clusters_with_policy = databricks.get_clusters(filter_by={\n    \"policy_id\": \"1234-5678-9012\",\n})\nall_api_clusters = databricks.get_clusters(filter_by={\n    \"cluster_sources\": [\"API\"],\n})\nall_pinned_clusters = databricks.get_clusters(filter_by={\n    \"is_pinned\": True,\n})\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var allRunningClusters = Databricks.GetClusters.Invoke(new()\n    {\n        FilterBy = new Databricks.Inputs.GetClustersFilterByInputArgs\n        {\n            ClusterStates = new[]\n            {\n                \"RUNNING\",\n            },\n        },\n    });\n\n    var allClustersWithPolicy = Databricks.GetClusters.Invoke(new()\n    {\n        FilterBy = new Databricks.Inputs.GetClustersFilterByInputArgs\n        {\n            PolicyId = \"1234-5678-9012\",\n        },\n    });\n\n    var allApiClusters = Databricks.GetClusters.Invoke(new()\n    {\n        FilterBy = new Databricks.Inputs.GetClustersFilterByInputArgs\n        {\n            ClusterSources = new[]\n            {\n                \"API\",\n            },\n        },\n    });\n\n    var allPinnedClusters = Databricks.GetClusters.Invoke(new()\n    {\n        FilterBy = new Databricks.Inputs.GetClustersFilterByInputArgs\n        {\n            IsPinned = true,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tFilterBy: databricks.GetClustersFilterBy{\n\t\t\t\tClusterStates: []string{\n\t\t\t\t\t\"RUNNING\",\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tFilterBy: databricks.GetClustersFilterBy{\n\t\t\t\tPolicyId: pulumi.StringRef(\"1234-5678-9012\"),\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tFilterBy: databricks.GetClustersFilterBy{\n\t\t\t\tClusterSources: []string{\n\t\t\t\t\t\"API\",\n\t\t\t\t},\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetClusters(ctx, \u0026databricks.GetClustersArgs{\n\t\t\tFilterBy: databricks.GetClustersFilterBy{\n\t\t\t\tIsPinned: pulumi.BoolRef(true),\n\t\t\t},\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetClustersArgs;\nimport com.pulumi.databricks.inputs.GetClustersFilterByArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var allRunningClusters = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .filterBy(GetClustersFilterByArgs.builder()\n                .clusterStates(\"RUNNING\")\n                .build())\n            .build());\n\n        final var allClustersWithPolicy = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .filterBy(GetClustersFilterByArgs.builder()\n                .policyId(\"1234-5678-9012\")\n                .build())\n            .build());\n\n        final var allApiClusters = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .filterBy(GetClustersFilterByArgs.builder()\n                .clusterSources(\"API\")\n                .build())\n            .build());\n\n        final var allPinnedClusters = DatabricksFunctions.getClusters(GetClustersArgs.builder()\n            .filterBy(GetClustersFilterByArgs.builder()\n                .isPinned(true)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  allRunningClusters:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        filterBy:\n          clusterStates:\n            - RUNNING\n  allClustersWithPolicy:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        filterBy:\n          policyId: 1234-5678-9012\n  allApiClusters:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        filterBy:\n          clusterSources:\n            - API\n  allPinnedClusters:\n    fn::invoke:\n      function: databricks:getClusters\n      arguments:\n        filterBy:\n          isPinned: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of 
rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n","inputs":{"description":"A collection of arguments for invoking getClusters.\n","properties":{"clusterNameContains":{"type":"string","description":"Only return\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids that match the given name string.\n","willReplaceOnChanges":true},"filterBy":{"$ref":"#/types/databricks:index/getClustersFilterBy:getClustersFilterBy","description":"Filters to apply to the listed clusters. See\u003cspan pulumi-lang-nodejs=\" filterBy \" pulumi-lang-dotnet=\" FilterBy \" pulumi-lang-go=\" filterBy \" pulumi-lang-python=\" filter_by \" pulumi-lang-yaml=\" filterBy \" pulumi-lang-java=\" filterBy \"\u003e filter_by \u003c/span\u003eConfiguration Block below for details.\n","willReplaceOnChanges":true},"id":{"type":"string"},"ids":{"type":"array","items":{"type":"string"},"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids\n"},"providerConfig":{"$ref":"#/types/databricks:index/getClustersProviderConfig:getClustersProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getClusters.\n","properties":{"clusterNameContains":{"type":"string"},"filterBy":{"$ref":"#/types/databricks:index/getClustersFilterBy:getClustersFilterBy"},"id":{"type":"string"},"ids":{"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eids\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getClustersProviderConfig:getClustersProviderConfig"}},"required":["id","ids"],"type":"object"}},"databricks:index/getCurrentConfig:getCurrentConfig":{"description":"Retrieves information about the currently configured provider to make a decision, for example, add a dynamic block based on the specific cloud.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nCreate cloud-specific databricks_storage_credential:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nfunction singleOrNone\u003cT\u003e(elements: pulumi.Input\u003cT\u003e[]): pulumi.Input\u003cT\u003e {\n    if (elements.length != 1) {\n        throw new Error(\"singleOrNone expected input list to have a single element\");\n    }\n    return elements[0];\n}\n\nconst _this = databricks.getCurrentConfig({});\nconst external = new databricks.StorageCredential(\"external\", {\n    awsIamRole: singleOrNone(.map(entry =\u003e ({\n        roleArn: cloudCredentialId,\n    }))),\n    azureManagedIdentity: singleOrNone(.map(entry2 =\u003e ({\n        accessConnectorId: cloudCredentialId,\n    }))),\n    databricksGcpServiceAccount: singleOrNone(.map(entry3 =\u003e ({}))),\n    name: \"storage_cred\",\n    comment: \"Managed by TF\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ndef single_or_none(elements):\n    if len(elements) != 1:\n        raise Exception(\"single_or_none expected input list to have a single element\")\n    return elements[0]\n\n\nthis = databricks.get_current_config()\nexternal = databricks.StorageCredential(\"external\",\n    aws_iam_role=single_or_none([{\"key\": k, \"value\": v} for k, v in {} if this.cloud_type == \"aws\" else {\n        \"aws\": True,\n    }].apply(lambda entries: [{\n        \"roleArn\": cloud_credential_id,\n    } for entry in entries])),\n    azure_managed_identity=single_or_none([{\"key\": k, \"value\": v} for k, v in {} if this.cloud_type == \"azure\" else {\n        \"azure\": True,\n    }].apply(lambda entries: [{\n        \"accessConnectorId\": cloud_credential_id,\n    } for entry2 in entries])),\n    databricks_gcp_service_account=single_or_none([{\"key\": k, \"value\": v} for k, v in {} if this.cloud_type == \"gcp\" else {\n        \"gcp\": True,\n    }].apply(lambda entries: [{} for entry3 in entries])),\n    name=\"storage_cred\",\n    comment=\"Managed by TF\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetCurrentConfig.Invoke();\n\n    var external = new 
Databricks.StorageCredential(\"external\", new()\n    {\n        AwsIamRole = Enumerable.Single(),\n        AzureManagedIdentity = Enumerable.Single(),\n        DatabricksGcpServiceAccount = Enumerable.Single(),\n        Name = \"storage_cred\",\n        Comment = \"Managed by TF\",\n    });\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Exported attributes\n\nData source exposes the following attributes:\n\n* \u003cspan pulumi-lang-nodejs=\"`isAccount`\" pulumi-lang-dotnet=\"`IsAccount`\" pulumi-lang-go=\"`isAccount`\" pulumi-lang-python=\"`is_account`\" pulumi-lang-yaml=\"`isAccount`\" pulumi-lang-java=\"`isAccount`\"\u003e`is_account`\u003c/span\u003e - Whether the provider is configured at account-level\n* \u003cspan pulumi-lang-nodejs=\"`accountId`\" pulumi-lang-dotnet=\"`AccountId`\" pulumi-lang-go=\"`accountId`\" pulumi-lang-python=\"`account_id`\" pulumi-lang-yaml=\"`accountId`\" pulumi-lang-java=\"`accountId`\"\u003e`account_id`\u003c/span\u003e - Account Id if provider is configured at account-level\n* \u003cspan pulumi-lang-nodejs=\"`host`\" pulumi-lang-dotnet=\"`Host`\" pulumi-lang-go=\"`host`\" pulumi-lang-python=\"`host`\" pulumi-lang-yaml=\"`host`\" pulumi-lang-java=\"`host`\"\u003e`host`\u003c/span\u003e - Host of the Databricks workspace or account console\n* \u003cspan pulumi-lang-nodejs=\"`cloudType`\" pulumi-lang-dotnet=\"`CloudType`\" pulumi-lang-go=\"`cloudType`\" pulumi-lang-python=\"`cloud_type`\" pulumi-lang-yaml=\"`cloudType`\" pulumi-lang-java=\"`cloudType`\"\u003e`cloud_type`\u003c/span\u003e - Cloud type specified in the provider\n* \u003cspan pulumi-lang-nodejs=\"`authType`\" pulumi-lang-dotnet=\"`AuthType`\" pulumi-lang-go=\"`authType`\" pulumi-lang-python=\"`auth_type`\" pulumi-lang-yaml=\"`authType`\" pulumi-lang-java=\"`authType`\"\u003e`auth_type`\u003c/span\u003e - Auth type used by the provider\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workpace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Repo \" pulumi-lang-dotnet=\" databricks.Repo \" pulumi-lang-go=\" Repo \" pulumi-lang-python=\" Repo \" pulumi-lang-yaml=\" databricks.Repo \" pulumi-lang-java=\" databricks.Repo \"\u003e databricks.Repo \u003c/span\u003eto manage [Databricks Repos](https://docs.databricks.com/repos.html).\n","inputs":{"description":"A collection of arguments for invoking getCurrentConfig.\n","properties":{"accountId":{"type":"string"},"authType":{"type":"string"},"cloudType":{"type":"string"},"host":{"type":"string"},"isAccount":{"type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/getCurrentConfigProviderConfig:getCurrentConfigProviderConfig","description":"Configure the provider for management through 
account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getCurrentConfig.\n","properties":{"accountId":{"type":"string"},"authType":{"type":"string"},"cloudType":{"type":"string"},"host":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"isAccount":{"type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/getCurrentConfigProviderConfig:getCurrentConfigProviderConfig"}},"required":["accountId","authType","cloudType","host","isAccount","id"],"type":"object"}},"databricks:index/getCurrentMetastore:getCurrentMetastore":{"description":"Retrieves information about metastore attached to a given workspace.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nMetastoreSummary response for a metastore attached to the current workspace.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getCurrentMetastore({});\nexport const someMetastore = _this.then(_this =\u003e _this.metastoreInfo);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_current_metastore()\npulumi.export(\"someMetastore\", this.metastore_info)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetCurrentMetastore.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"someMetastore\"] = @this.Apply(@this =\u003e @this.Apply(getCurrentMetastoreResult =\u003e getCurrentMetastoreResult.MetastoreInfo)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetCurrentMetastore(ctx, \u0026databricks.GetCurrentMetastoreArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"someMetastore\", this.MetastoreInfo)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetCurrentMetastoreArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getCurrentMetastore(GetCurrentMetastoreArgs.builder()\n            .build());\n\n        ctx.export(\"someMetastore\", this_.metastoreInfo());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getCurrentMetastore\n      arguments: {}\noutputs:\n  someMetastore: ${this.metastoreInfo}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" 
Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto get information for a metastore with a given ID.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getMetastores \" pulumi-lang-dotnet=\" databricks.getMetastores \" pulumi-lang-go=\" getMetastores \" pulumi-lang-python=\" get_metastores \" pulumi-lang-yaml=\" databricks.getMetastores \" pulumi-lang-java=\" databricks.getMetastores \"\u003e databricks.getMetastores \u003c/span\u003eto get a mapping of name to id of all metastores.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto manage Metastores within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getCurrentMetastore.\n","properties":{"id":{"type":"string","description":"metastore ID. Will be \u003cspan pulumi-lang-nodejs=\"`noMetastore`\" pulumi-lang-dotnet=\"`NoMetastore`\" pulumi-lang-go=\"`noMetastore`\" pulumi-lang-python=\"`no_metastore`\" pulumi-lang-yaml=\"`noMetastore`\" pulumi-lang-java=\"`noMetastore`\"\u003e`no_metastore`\u003c/span\u003e if there is no metastore assigned for the current workspace\n"},"metastoreInfo":{"$ref":"#/types/databricks:index/getCurrentMetastoreMetastoreInfo:getCurrentMetastoreMetastoreInfo","description":"summary about a metastore attached to the current workspace returned by [Get a metastore summary API](https://docs.databricks.com/api/workspace/metastores/summary). This contains the following attributes (check the API page for up-to-date details):\n"},"providerConfig":{"$ref":"#/types/databricks:index/getCurrentMetastoreProviderConfig:getCurrentMetastoreProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getCurrentMetastore.\n","properties":{"id":{"description":"metastore ID. Will be \u003cspan pulumi-lang-nodejs=\"`noMetastore`\" pulumi-lang-dotnet=\"`NoMetastore`\" pulumi-lang-go=\"`noMetastore`\" pulumi-lang-python=\"`no_metastore`\" pulumi-lang-yaml=\"`noMetastore`\" pulumi-lang-java=\"`noMetastore`\"\u003e`no_metastore`\u003c/span\u003e if there is no metastore assigned for the current workspace\n","type":"string"},"metastoreInfo":{"$ref":"#/types/databricks:index/getCurrentMetastoreMetastoreInfo:getCurrentMetastoreMetastoreInfo","description":"summary about a metastore attached to the current workspace returned by [Get a metastore summary API](https://docs.databricks.com/api/workspace/metastores/summary). 
This contains the following attributes (check the API page for up-to-date details):\n"},"providerConfig":{"$ref":"#/types/databricks:index/getCurrentMetastoreProviderConfig:getCurrentMetastoreProviderConfig"}},"required":["id","metastoreInfo"],"type":"object"}},"databricks:index/getCurrentUser:getCurrentUser":{"description":"Retrieves information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API. Might be useful in applying the same Pulumi by different users in the shared workspace for testing purposes.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n","inputs":{"description":"A collection of arguments for invoking getCurrentUser.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getCurrentUserProviderConfig:getCurrentUserProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getCurrentUser.\n","properties":{"aclPrincipalId":{"type":"string"},"alphanumeric":{"type":"string"},"externalId":{"type":"string"},"home":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getCurrentUserProviderConfig:getCurrentUserProviderConfig"},"repos":{"type":"string"},"userName":{"type":"string"},"workspaceUrl":{"type":"string"}},"required":["aclPrincipalId","alphanumeric","externalId","home","repos","userName","workspaceUrl","id"],"type":"object"}},"databricks:index/getDashboards:getDashboards":{"description":"This data source allows you to retrieve information about Databricks [Dashboards](https://docs.databricks.com/en/dashboards/index.html).\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const all = await databricks.getDashboards({});\n    const dashboardsPermissions: databricks.Permissions[] = [];\n    for (const range of all.dashboards.map(__item =\u003e __item.dashboardId).map((v, k) =\u003e ({key: k, value: v}))) {\n        dashboardsPermissions.push(new databricks.Permissions(`dashboards_permissions-${range.key}`, {\n            depends: [all],\n            dashboardId: range.value,\n            accessControls: [{\n                groupName: \"Example Group\",\n                permissionLevel: \"CAN_MANAGE\",\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_dashboards()\ndashboards_permissions = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate([__item.dashboard_id for __item in all.dashboards])]:\n    dashboards_permissions.append(databricks.Permissions(f\"dashboards_permissions-{range['key']}\",\n        depends=[all],\n        dashboard_id=range[\"value\"],\n        access_controls=[{\n            \"group_name\": \"Example Group\",\n            \"permission_level\": \"CAN_MANAGE\",\n        
}]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var all = await Databricks.GetDashboards.InvokeAsync();\n\n    var dashboardsPermissions = new List\u003cDatabricks.Permissions\u003e();\n    foreach (var range in all.Dashboards.Select(__item =\u003e __item.DashboardId).ToList().Select((v, k) =\u003e new { Key = k, Value = v }))\n    {\n        dashboardsPermissions.Add(new Databricks.Permissions($\"dashboards_permissions-{range.Key}\", new()\n        {\n            Depends = new[]\n            {\n                all,\n            },\n            DashboardId = range.Value,\n            AccessControls = new[]\n            {\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = \"Example Group\",\n                    PermissionLevel = \"CAN_MANAGE\",\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetDashboards(ctx, \u0026databricks.GetDashboardsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar splat0 []*string\n\t\tfor _, val0 := range all.Dashboards {\n\t\t\tsplat0 = append(splat0, val0.DashboardId)\n\t\t}\n\t\tvar dashboardsPermissions []*databricks.Permissions\n\t\tfor key0, val0 := range splat0 {\n\t\t\t__res, err := databricks.NewPermissions(ctx, fmt.Sprintf(\"dashboards_permissions-%v\", key0), \u0026databricks.PermissionsArgs{\n\t\t\t\tDepends: []interface{}{\n\t\t\t\t\tall,\n\t\t\t\t},\n\t\t\t\tDashboardId: pulumi.String(val0),\n\t\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       pulumi.String(\"Example Group\"),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdashboardsPermissions = append(dashboardsPermissions, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDashboardsArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getDashboards(GetDashboardsArgs.builder()\n            .build());\n\n        for (var range : KeyedValue.of(all.dashboards().stream().map(element -\u003e element.dashboardId()).collect(toList()))) {\n            new Permissions(\"dashboardsPermissions-\" + range.key(), PermissionsArgs.builder()\n                .depends(List.of(all))\n                .dashboardId(range.value())\n                
.accessControls(PermissionsAccessControlArgs.builder()\n                    .groupName(\"Example Group\")\n                    .permissionLevel(\"CAN_MANAGE\")\n                    .build())\n                .build());\n        }\n\n    }\n}\n```\n```yaml\nresources:\n  dashboardsPermissions:\n    type: databricks:Permissions\n    name: dashboards_permissions\n    properties:\n      depends:\n        - ${all}\n      dashboardId: ${range.value}\n      accessControls:\n        - groupName: Example Group\n          permissionLevel: CAN_MANAGE\n    options: {}\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getDashboards\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDashboards.\n","properties":{"dashboardNameContains":{"type":"string","description":"A **case-insensitive** substring to filter Dashboards by their name.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDashboardsProviderConfig:getDashboardsProviderConfig"}},"type":"object"},"outputs":{"description":"A collection of values returned by getDashboards.\n","properties":{"dashboardNameContains":{"type":"string"},"dashboards":{"description":"A list of dashboards matching the specified criteria. Each element contains the following attributes:\n","items":{"$ref":"#/types/databricks:index/getDashboardsDashboard:getDashboardsDashboard"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDashboardsProviderConfig:getDashboardsProviderConfig"}},"required":["dashboards","id"],"type":"object"}},"databricks:index/getDataQualityMonitor:getDataQualityMonitor":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch a data quality monitor.\n\nFor the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003cspan pulumi-lang-nodejs=\"`objectType`\" pulumi-lang-dotnet=\"`ObjectType`\" pulumi-lang-go=\"`objectType`\" pulumi-lang-python=\"`object_type`\" pulumi-lang-yaml=\"`objectType`\" pulumi-lang-java=\"`objectType`\"\u003e`object_type`\u003c/span\u003e, the caller must either:\n1. be an owner of the table's parent catalog\n2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema.\n3. 
have the following permissions:\n   - **USE_CATALOG** on the table's parent catalog\n   - **USE_SCHEMA** on the table's parent schema\n   - **SELECT** privilege on the table.\n\n\u003e **Note** This data source can only be used with a workspace-level provider!\n\n\n## Example Usage\n\nGetting a data quality monitor by Unity Catalog object type (currently supports \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e) and object id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getSchema({\n    name: \"my_catalog.my_schema\",\n});\nconst thisGetDataQualityMonitor = _this.then(_this =\u003e databricks.getDataQualityMonitor({\n    objectType: \"schema\",\n    objectId: _this.schemaInfo?.schemaId,\n}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_schema(name=\"my_catalog.my_schema\")\nthis_get_data_quality_monitor = databricks.get_data_quality_monitor(object_type=\"schema\",\n    object_id=this.schema_info.schema_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetSchema.Invoke(new()\n    {\n        Name = \"my_catalog.my_schema\",\n    });\n\n    var thisGetDataQualityMonitor = Databricks.GetDataQualityMonitor.Invoke(new()\n    {\n        ObjectType = \"schema\",\n        ObjectId = @this.Apply(getSchemaResult =\u003e getSchemaResult.SchemaInfo?.SchemaId),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupSchema(ctx, \u0026databricks.LookupSchemaArgs{\n\t\t\tName: \"my_catalog.my_schema\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupDataQualityMonitor(ctx, \u0026databricks.LookupDataQualityMonitorArgs{\n\t\t\tObjectType: \"schema\",\n\t\t\tObjectId:   this.SchemaInfo.SchemaId,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSchemaArgs;\nimport com.pulumi.databricks.inputs.GetDataQualityMonitorArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getSchema(GetSchemaArgs.builder()\n            .name(\"my_catalog.my_schema\")\n            .build());\n\n        final var thisGetDataQualityMonitor = 
DatabricksFunctions.getDataQualityMonitor(GetDataQualityMonitorArgs.builder()\n            .objectType(\"schema\")\n            .objectId(this_.schemaInfo().schemaId())\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getSchema\n      arguments:\n        name: my_catalog.my_schema\n  thisGetDataQualityMonitor:\n    fn::invoke:\n      function: databricks:getDataQualityMonitor\n      arguments:\n        objectType: schema\n        objectId: ${this.schemaInfo.schemaId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDataQualityMonitor.\n","properties":{"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. 
In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorProviderConfig:getDataQualityMonitorProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["objectId","objectType"]},"outputs":{"description":"A collection of values returned by getDataQualityMonitor.\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorAnomalyDetectionConfig:getDataQualityMonitorAnomalyDetectionConfig","description":"(AnomalyDetectionConfig) - Anomaly Detection Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e object types\n"},"dataProfilingConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorDataProfilingConfig:getDataQualityMonitorDataProfilingConfig","description":"(DataProfilingConfig) - Data Profiling Configuration, applicable to \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e object types. Exactly one `Analysis Configuration`\nmust be present\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"objectId":{"description":"(string) - The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n","type":"string"},"objectType":{"description":"(string) - The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorProviderConfig:getDataQualityMonitorProviderConfig"}},"required":["anomalyDetectionConfig","dataProfilingConfig","objectId","objectType","id"],"type":"object"}},"databricks:index/getDataQualityMonitors:getDataQualityMonitors":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of data quality monitors.\n\nFor the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003cspan pulumi-lang-nodejs=\"`objectType`\" pulumi-lang-dotnet=\"`ObjectType`\" pulumi-lang-go=\"`objectType`\" pulumi-lang-python=\"`object_type`\" pulumi-lang-yaml=\"`objectType`\" pulumi-lang-java=\"`objectType`\"\u003e`object_type`\u003c/span\u003e, the caller must either:\n1. be an owner of the table's parent catalog\n2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema.\n3. have the following permissions:\n    - **USE_CATALOG** on the table's parent catalog\n    - **USE_SCHEMA** on the table's parent schema\n    - **SELECT** privilege on the table.\n\n\u003e **Note** This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting a list of all data quality monitors:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getDataQualityMonitors({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_data_quality_monitors()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetDataQualityMonitors.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetDataQualityMonitors(ctx, \u0026databricks.GetDataQualityMonitorsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDataQualityMonitorsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context 
ctx) {\n        final var all = DatabricksFunctions.getDataQualityMonitors(GetDataQualityMonitorsArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getDataQualityMonitors\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDataQualityMonitors.\n","properties":{"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorsProviderConfig:getDataQualityMonitorsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getDataQualityMonitors.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"monitors":{"items":{"$ref":"#/types/databricks:index/getDataQualityMonitorsMonitor:getDataQualityMonitorsMonitor"},"type":"array"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityMonitorsProviderConfig:getDataQualityMonitorsProviderConfig"}},"required":["monitors","id"],"type":"object"}},"databricks:index/getDataQualityRefresh:getDataQualityRefresh":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch a data quality refresh on a Unity Catalog table.\n\nThe caller must either:\n1. be an owner of the table's parent catalog\n2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema.\n3. have the following permissions:\n   - **USE_CATALOG** on the table's parent catalog\n   - **USE_SCHEMA** on the table's parent schema\n   - **SELECT** privilege on the table.\n\n\u003e **Note** This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting a data quality refresh by Unity Catalog object type (currently supports \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e) and object id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getTable({\n    name: \"my_catalog.my_schema.my_table\",\n});\nconst thisGetDataQualityRefresh = _this.then(_this =\u003e databricks.getDataQualityRefresh({\n    objectType: \"table\",\n    objectId: _this.id,\n}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_table(name=\"my_catalog.my_schema.my_table\")\nthis_get_data_quality_refresh = databricks.get_data_quality_refresh(object_type=\"table\",\n    object_id=this.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetTable.Invoke(new()\n    {\n        Name = \"my_catalog.my_schema.my_table\",\n    });\n\n    var thisGetDataQualityRefresh = Databricks.GetDataQualityRefresh.Invoke(new()\n    {\n        ObjectType = \"table\",\n        ObjectId = @this.Apply(getTableResult =\u003e getTableResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupTable(ctx, \u0026databricks.LookupTableArgs{\n\t\t\tName: \"my_catalog.my_schema.my_table\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupDataQualityRefresh(ctx, \u0026databricks.LookupDataQualityRefreshArgs{\n\t\t\tObjectType: \"table\",\n\t\t\tObjectId:   this.Id,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTableArgs;\nimport com.pulumi.databricks.inputs.GetDataQualityRefreshArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getTable(GetTableArgs.builder()\n            .name(\"my_catalog.my_schema.my_table\")\n            .build());\n\n        final var thisGetDataQualityRefresh = DatabricksFunctions.getDataQualityRefresh(GetDataQualityRefreshArgs.builder()\n            .objectType(\"table\")\n            .objectId(this_.id())\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getTable\n      arguments:\n        name: my_catalog.my_schema.my_table\n  thisGetDataQualityRefresh:\n    fn::invoke:\n      function: databricks:getDataQualityRefresh\n      arguments:\n        objectType: table\n        objectId: ${this.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDataQualityRefresh.\n","properties":{"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. 
The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003eor \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityRefreshProviderConfig:getDataQualityRefreshProviderConfig","description":"Configure the provider for management through account provider.\n"},"refreshId":{"type":"integer","description":"Unique id of the refresh operation\n"}},"type":"object","required":["objectId","objectType","refreshId"]},"outputs":{"description":"A collection of values returned by getDataQualityRefresh.\n","properties":{"endTimeMs":{"description":"(integer) - Time when the refresh ended (milliseconds since 1/1/1970 UTC)\n","type":"integer"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"message":{"description":"(string) - An optional message to give insight into the current state of the refresh (e.g. FAILURE messages)\n","type":"string"},"objectId":{"description":"(string) - The UUID of the request object. 
It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n","type":"string"},"objectType":{"description":"(string) - The type of the monitored object. Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003eor \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityRefreshProviderConfig:getDataQualityRefreshProviderConfig"},"refreshId":{"description":"(integer) - Unique id of the refresh operation\n","type":"integer"},"startTimeMs":{"description":"(integer) - Time when the refresh started (milliseconds since 1/1/1970 UTC)\n","type":"integer"},"state":{"description":"(string) - The current state of the refresh. Possible values are: `MONITOR_REFRESH_STATE_CANCELED`, `MONITOR_REFRESH_STATE_FAILED`, `MONITOR_REFRESH_STATE_PENDING`, `MONITOR_REFRESH_STATE_RUNNING`, `MONITOR_REFRESH_STATE_SUCCESS`, `MONITOR_REFRESH_STATE_UNKNOWN`\n","type":"string"},"trigger":{"description":"(string) - What triggered the refresh. Possible values are: `MONITOR_REFRESH_TRIGGER_DATA_CHANGE`, `MONITOR_REFRESH_TRIGGER_MANUAL`, `MONITOR_REFRESH_TRIGGER_SCHEDULE`, `MONITOR_REFRESH_TRIGGER_UNKNOWN`\n","type":"string"}},"required":["endTimeMs","message","objectId","objectType","refreshId","startTimeMs","state","trigger","id"],"type":"object"}},"databricks:index/getDataQualityRefreshes:getDataQualityRefreshes":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of data quality refreshes on a Unity Catalog table.\n\nThe caller must either:\n1. be an owner of the table's parent catalog\n2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema.\n3. 
have the following permissions:\n   - **USE_CATALOG** on the table's parent catalog\n   - **USE_SCHEMA** on the table's parent schema\n   - **SELECT** privilege on the table.\n\n\u003e **Note** This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting a list of all data quality refresh for a given table:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getTable({\n    name: \"my_catalog.my_schema.my_table\",\n});\nconst all = _this.then(_this =\u003e databricks.getDataQualityRefreshes({\n    objectType: \"table\",\n    objectId: _this.id,\n}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_table(name=\"my_catalog.my_schema.my_table\")\nall = databricks.get_data_quality_refreshes(object_type=\"table\",\n    object_id=this.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetTable.Invoke(new()\n    {\n        Name = \"my_catalog.my_schema.my_table\",\n    });\n\n    var all = Databricks.GetDataQualityRefreshes.Invoke(new()\n    {\n        ObjectType = \"table\",\n        ObjectId = @this.Apply(getTableResult =\u003e getTableResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupTable(ctx, \u0026databricks.LookupTableArgs{\n\t\t\tName: \"my_catalog.my_schema.my_table\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetDataQualityRefreshes(ctx, \u0026databricks.GetDataQualityRefreshesArgs{\n\t\t\tObjectType: \"table\",\n\t\t\tObjectId:   this.Id,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTableArgs;\nimport com.pulumi.databricks.inputs.GetDataQualityRefreshesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getTable(GetTableArgs.builder()\n            .name(\"my_catalog.my_schema.my_table\")\n            .build());\n\n        final var all = DatabricksFunctions.getDataQualityRefreshes(GetDataQualityRefreshesArgs.builder()\n            .objectType(\"table\")\n            .objectId(this_.id())\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getTable\n      arguments:\n        name: my_catalog.my_schema.my_table\n  all:\n    fn::invoke:\n      function: databricks:getDataQualityRefreshes\n      arguments:\n        objectType: table\n        objectId: ${this.id}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking 
getDataQualityRefreshes.\n","properties":{"objectId":{"type":"string","description":"The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[schemaId]\" pulumi-lang-dotnet=\"[SchemaId]\" pulumi-lang-go=\"[schemaId]\" pulumi-lang-python=\"[schema_id]\" pulumi-lang-yaml=\"[schemaId]\" pulumi-lang-java=\"[schemaId]\"\u003e[schema_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/schemas/get#schema_id) of the `Schemas` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Schema ID` field.\n\nFind the \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e from either:\n1. The \u003cspan pulumi-lang-nodejs=\"[tableId]\" pulumi-lang-dotnet=\"[TableId]\" pulumi-lang-go=\"[tableId]\" pulumi-lang-python=\"[table_id]\" pulumi-lang-yaml=\"[tableId]\" pulumi-lang-java=\"[tableId]\"\u003e[table_id]\u003c/span\u003e(https://docs.databricks.com/api/workspace/tables/get#table_id) of the `Tables` resource.\n2. In [Catalog Explorer](https://docs.databricks.com/aws/en/catalog-explorer/) \u003e select the \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e \u003e go to the `Details` tab \u003e the `Table ID` field\n"},"objectType":{"type":"string","description":"The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityRefreshesProviderConfig:getDataQualityRefreshesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["objectId","objectType"]},"outputs":{"description":"A collection of values returned by getDataQualityRefreshes.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"objectId":{"description":"(string) - The UUID of the request object. It is \u003cspan pulumi-lang-nodejs=\"`schemaId`\" pulumi-lang-dotnet=\"`SchemaId`\" pulumi-lang-go=\"`schemaId`\" pulumi-lang-python=\"`schema_id`\" pulumi-lang-yaml=\"`schemaId`\" pulumi-lang-java=\"`schemaId`\"\u003e`schema_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`tableId`\" pulumi-lang-dotnet=\"`TableId`\" pulumi-lang-go=\"`tableId`\" pulumi-lang-python=\"`table_id`\" pulumi-lang-yaml=\"`tableId`\" pulumi-lang-java=\"`tableId`\"\u003e`table_id`\u003c/span\u003e for \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e.\n","type":"string"},"objectType":{"description":"(string) - The type of the monitored object. 
Can be one of the following: \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003eor \u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e\n","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDataQualityRefreshesProviderConfig:getDataQualityRefreshesProviderConfig"},"refreshes":{"items":{"$ref":"#/types/databricks:index/getDataQualityRefreshesRefresh:getDataQualityRefreshesRefresh"},"type":"array"}},"required":["objectId","objectType","refreshes","id"],"type":"object"}},"databricks:index/getDatabaseDatabaseCatalog:getDatabaseDatabaseCatalog":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single Database Catalog.\n\n\n## Example Usage\n\nReferring to a Database Catalog by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getDatabaseDatabaseCatalog({\n    name: \"my-database-catalog\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_database_database_catalog(name=\"my-database-catalog\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetDatabaseDatabaseCatalog.Invoke(new()\n    {\n        Name = \"my-database-catalog\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupDatabaseDatabaseCatalog(ctx, \u0026databricks.LookupDatabaseDatabaseCatalogArgs{\n\t\t\tName: \"my-database-catalog\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDatabaseDatabaseCatalogArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getDatabaseDatabaseCatalog(GetDatabaseDatabaseCatalogArgs.builder()\n            .name(\"my-database-catalog\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getDatabaseDatabaseCatalog\n      arguments:\n        name: my-database-catalog\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDatabaseDatabaseCatalog.\n","properties":{"name":{"type":"string","description":"The name of the catalog in 
UC\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogProviderConfig:getDatabaseDatabaseCatalogProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getDatabaseDatabaseCatalog.\n","properties":{"createDatabaseIfNotExists":{"description":"(boolean)\n","type":"boolean"},"databaseInstanceName":{"description":"(string) - The name of the DatabaseInstance housing the database\n","type":"string"},"databaseName":{"description":"(string) - The name of the database (in a instance) associated with the catalog\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - The name of the catalog in UC\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogProviderConfig:getDatabaseDatabaseCatalogProviderConfig"},"uid":{"description":"(string)\n","type":"string"}},"required":["createDatabaseIfNotExists","databaseInstanceName","databaseName","name","uid","id"],"type":"object"}},"databricks:index/getDatabaseDatabaseCatalogs:getDatabaseDatabaseCatalogs":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getDatabaseDatabaseCatalogs.\n","properties":{"instanceName":{"type":"string","description":"Name of the instance to get database catalogs for\n"},"pageSize":{"type":"integer","description":"Upper bound for items returned\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogsProviderConfig:getDatabaseDatabaseCatalogsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["instanceName"]},"outputs":{"description":"A collection of values returned by getDatabaseDatabaseCatalogs.\n","properties":{"databaseCatalogs":{"items":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogsDatabaseCatalog:getDatabaseDatabaseCatalogsDatabaseCatalog"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"instanceName":{"type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseDatabaseCatalogsProviderConfig:getDatabaseDatabaseCatalogsProviderConfig"}},"required":["databaseCatalogs","instanceName","id"],"type":"object"}},"databricks:index/getDatabaseInstance:getDatabaseInstance":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single Database Instance.\n\n\n## Example Usage\n\nReferring to a Database Instance by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getDatabaseInstance({\n    name: \"my-database-instance\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_database_instance(name=\"my-database-instance\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() 
=\u003e \n{\n    var @this = Databricks.GetDatabaseInstance.Invoke(new()\n    {\n        Name = \"my-database-instance\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupDatabaseInstance(ctx, \u0026databricks.LookupDatabaseInstanceArgs{\n\t\t\tName: \"my-database-instance\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDatabaseInstanceArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getDatabaseInstance(GetDatabaseInstanceArgs.builder()\n            .name(\"my-database-instance\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getDatabaseInstance\n      arguments:\n        name: my-database-instance\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDatabaseInstance.\n","properties":{"name":{"type":"string","description":"The name of the instance. This is the unique identifier for the instance\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseInstanceProviderConfig:getDatabaseInstanceProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getDatabaseInstance.\n","properties":{"capacity":{"description":"(string) - The sku of the instance. Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n","type":"string"},"childInstanceRefs":{"description":"(list of DatabaseInstanceRef) - The refs of the child instances. This is only available if the instance is\nparent instance\n","items":{"$ref":"#/types/databricks:index/getDatabaseInstanceChildInstanceRef:getDatabaseInstanceChildInstanceRef"},"type":"array"},"creationTime":{"description":"(string) - The timestamp when the instance was created\n","type":"string"},"creator":{"description":"(string) - The email of the creator of the instance\n","type":"string"},"customTags":{"description":"(list of CustomTag) - Custom tags associated with the instance. This field is only included on create and update responses\n","items":{"$ref":"#/types/databricks:index/getDatabaseInstanceCustomTag:getDatabaseInstanceCustomTag"},"type":"array"},"effectiveCapacity":{"description":"(string, deprecated) - Deprecated. The sku of the instance; this field will always match the value of capacity.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n","type":"string"},"effectiveCustomTags":{"description":"(list of CustomTag) - The recorded custom tags associated with the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","items":{"$ref":"#/types/databricks:index/getDatabaseInstanceEffectiveCustomTag:getDatabaseInstanceEffectiveCustomTag"},"type":"array"},"effectiveEnablePgNativeLogin":{"description":"(boolean) - Whether the instance has PG native password login enabled.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"boolean"},"effectiveEnableReadableSecondaries":{"description":"(boolean) - Whether secondaries serving read-only traffic are enabled. Defaults to false.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"boolean"},"effectiveNodeCount":{"description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"integer"},"effectiveRetentionWindowInDays":{"description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"integer"},"effectiveStopped":{"description":"(boolean) - Whether the instance is stopped.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"boolean"},"effectiveUsagePolicyId":{"description":"(string) - The policy that is applied to the instance.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. Use the field without the effective_ prefix to set the value\n","type":"string"},"enablePgNativeLogin":{"description":"(boolean) - Whether to enable PG native password login on the instance. Defaults to false\n","type":"boolean"},"enableReadableSecondaries":{"description":"(boolean) - Whether to enable secondaries to serve read-only traffic. Defaults to false\n","type":"boolean"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Name of the ref database instance\n","type":"string"},"nodeCount":{"description":"(integer) - The number of nodes in the instance, composed of 1 primary and 0 or more secondaries. Defaults to\n1 primary and 0 secondaries. 
This field is input only, see\u003cspan pulumi-lang-nodejs=\" effectiveNodeCount \" pulumi-lang-dotnet=\" EffectiveNodeCount \" pulumi-lang-go=\" effectiveNodeCount \" pulumi-lang-python=\" effective_node_count \" pulumi-lang-yaml=\" effectiveNodeCount \" pulumi-lang-java=\" effectiveNodeCount \"\u003e effective_node_count \u003c/span\u003efor the output\n","type":"integer"},"parentInstanceRef":{"$ref":"#/types/databricks:index/getDatabaseInstanceParentInstanceRef:getDatabaseInstanceParentInstanceRef","description":"(DatabaseInstanceRef) - The ref of the parent instance. This is only available if the instance is\nchild instance.\nInput: For specifying the parent instance to create a child instance. Optional.\nOutput: Only populated if provided as input to create a child instance\n"},"pgVersion":{"description":"(string) - The version of Postgres running on the instance\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseInstanceProviderConfig:getDatabaseInstanceProviderConfig"},"readOnlyDns":{"description":"(string) - The DNS endpoint to connect to the instance for read only access. This is only available if\u003cspan pulumi-lang-nodejs=\"\nenableReadableSecondaries \" pulumi-lang-dotnet=\"\nEnableReadableSecondaries \" pulumi-lang-go=\"\nenableReadableSecondaries \" pulumi-lang-python=\"\nenable_readable_secondaries \" pulumi-lang-yaml=\"\nenableReadableSecondaries \" pulumi-lang-java=\"\nenableReadableSecondaries \"\u003e\nenable_readable_secondaries \u003c/span\u003eis true\n","type":"string"},"readWriteDns":{"description":"(string) - The DNS endpoint to connect to the instance for read+write access\n","type":"string"},"retentionWindowInDays":{"description":"(integer) - The retention window for the instance. This is the time window in days\nfor which the historical data is retained. The default value is 7 days.\nValid values are 2 to 35 days\n","type":"integer"},"state":{"description":"(string) - The current state of the instance. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n","type":"string"},"stopped":{"description":"(boolean) - Whether to stop the instance. 
An input only param, see\u003cspan pulumi-lang-nodejs=\" effectiveStopped \" pulumi-lang-dotnet=\" EffectiveStopped \" pulumi-lang-go=\" effectiveStopped \" pulumi-lang-python=\" effective_stopped \" pulumi-lang-yaml=\" effectiveStopped \" pulumi-lang-java=\" effectiveStopped \"\u003e effective_stopped \u003c/span\u003efor the output\n","type":"boolean"},"uid":{"description":"(string) - Id of the ref database instance\n","type":"string"},"usagePolicyId":{"description":"(string) - The desired usage policy to associate with the instance\n","type":"string"}},"required":["capacity","childInstanceRefs","creationTime","creator","customTags","effectiveCapacity","effectiveCustomTags","effectiveEnablePgNativeLogin","effectiveEnableReadableSecondaries","effectiveNodeCount","effectiveRetentionWindowInDays","effectiveStopped","effectiveUsagePolicyId","enablePgNativeLogin","enableReadableSecondaries","name","nodeCount","parentInstanceRef","pgVersion","readOnlyDns","readWriteDns","retentionWindowInDays","state","stopped","uid","usagePolicyId","id"],"type":"object"}},"databricks:index/getDatabaseInstances:getDatabaseInstances":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of Database Instances within the workspace.\nThe list can then be accessed via the data object's \u003cspan pulumi-lang-nodejs=\"`databaseInstances`\" pulumi-lang-dotnet=\"`DatabaseInstances`\" pulumi-lang-go=\"`databaseInstances`\" pulumi-lang-python=\"`database_instances`\" pulumi-lang-yaml=\"`databaseInstances`\" pulumi-lang-java=\"`databaseInstances`\"\u003e`database_instances`\u003c/span\u003e field.\n\n\n## Example Usage\n\nGetting a list of all Database Instances:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getDatabaseInstances({});\nexport const allDatabaseInstances = all.then(all =\u003e all.databaseInstances);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_database_instances()\npulumi.export(\"allDatabaseInstances\", all.database_instances)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetDatabaseInstances.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allDatabaseInstances\"] = all.Apply(getDatabaseInstancesResult =\u003e getDatabaseInstancesResult.DatabaseInstances),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetDatabaseInstances(ctx, \u0026databricks.GetDatabaseInstancesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allDatabaseInstances\", all.DatabaseInstances)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDatabaseInstancesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport 
java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getDatabaseInstances(GetDatabaseInstancesArgs.builder()\n            .build());\n\n        ctx.export(\"allDatabaseInstances\", all.databaseInstances());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getDatabaseInstances\n      arguments: {}\noutputs:\n  allDatabaseInstances: ${all.databaseInstances}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDatabaseInstances.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned. The maximum value is 100\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseInstancesProviderConfig:getDatabaseInstancesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getDatabaseInstances.\n","properties":{"databaseInstances":{"items":{"$ref":"#/types/databricks:index/getDatabaseInstancesDatabaseInstance:getDatabaseInstancesDatabaseInstance"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseInstancesProviderConfig:getDatabaseInstancesProviderConfig"}},"required":["databaseInstances","id"],"type":"object"}},"databricks:index/getDatabaseSyncedDatabaseTable:getDatabaseSyncedDatabaseTable":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single Synced Database Table.\n\n\n## Example Usage\n\nReferring to a Database Instance by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getDatabaseSyncedDatabaseTable({\n    name: \"my_database_catalog.public.synced_table\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_database_synced_database_table(name=\"my_database_catalog.public.synced_table\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetDatabaseSyncedDatabaseTable.Invoke(new()\n    {\n        Name = \"my_database_catalog.public.synced_table\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupDatabaseSyncedDatabaseTable(ctx, \u0026databricks.LookupDatabaseSyncedDatabaseTableArgs{\n\t\t\tName: \"my_database_catalog.public.synced_table\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport 
com.pulumi.databricks.inputs.GetDatabaseSyncedDatabaseTableArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getDatabaseSyncedDatabaseTable(GetDatabaseSyncedDatabaseTableArgs.builder()\n            .name(\"my_database_catalog.public.synced_table\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getDatabaseSyncedDatabaseTable\n      arguments:\n        name: my_database_catalog.public.synced_table\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getDatabaseSyncedDatabaseTable.\n","properties":{"name":{"type":"string","description":"Full three-part (catalog, schema, table) name of the table\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableProviderConfig:getDatabaseSyncedDatabaseTableProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getDatabaseSyncedDatabaseTable.\n","properties":{"dataSynchronizationStatus":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableDataSynchronizationStatus:getDatabaseSyncedDatabaseTableDataSynchronizationStatus","description":"(SyncedTableStatus) - Synced Table data synchronization status\n"},"databaseInstanceName":{"description":"(string) - Name of the target database instance. This is required when creating synced database tables in standard catalogs.\nThis is optional when creating synced database tables in registered catalogs. If this field is specified\nwhen creating synced database tables in registered catalogs, the database instance name MUST\nmatch that of the registered catalog (or the request will be rejected)\n","type":"string"},"effectiveDatabaseInstanceName":{"description":"(string) - The name of the database instance that this table is registered to. This field is always returned, and for\ntables inside database catalogs is inferred database instance associated with the catalog.\nThis is an output only field that contains the value computed from the input field combined with\nserver side defaults. 
Use the field without the effective_ prefix to set the value\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"logicalDatabaseName":{"description":"(string) - Target Postgres database object (logical database) name for this table.\n","type":"string"},"name":{"description":"(string) - Full three-part (catalog, schema, table) name of the table\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableProviderConfig:getDatabaseSyncedDatabaseTableProviderConfig"},"spec":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTableSpec:getDatabaseSyncedDatabaseTableSpec","description":"(SyncedTableSpec)\n"},"unityCatalogProvisioningState":{"description":"(string) - The provisioning state of the synced table entity in Unity Catalog. This is distinct from the\nstate of the data synchronization pipeline (i.e. the table may be in \"ACTIVE\" but the pipeline\nmay be in \"PROVISIONING\" as it runs asynchronously). Possible values are: `ACTIVE`, `DEGRADED`, `DELETING`, `FAILED`, `PROVISIONING`, `UPDATING`\n","type":"string"}},"required":["dataSynchronizationStatus","databaseInstanceName","effectiveDatabaseInstanceName","effectiveLogicalDatabaseName","logicalDatabaseName","name","spec","unityCatalogProvisioningState","id"],"type":"object"}},"databricks:index/getDatabaseSyncedDatabaseTables:getDatabaseSyncedDatabaseTables":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getDatabaseSyncedDatabaseTables.\n","properties":{"instanceName":{"type":"string","description":"Name of the instance to get synced tables for\n"},"pageSize":{"type":"integer","description":"Upper bound for items returned\n"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesProviderConfig:getDatabaseSyncedDatabaseTablesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["instanceName"]},"outputs":{"description":"A collection of values returned by getDatabaseSyncedDatabaseTables.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"instanceName":{"type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesProviderConfig:getDatabaseSyncedDatabaseTablesProviderConfig"},"syncedTables":{"items":{"$ref":"#/types/databricks:index/getDatabaseSyncedDatabaseTablesSyncedTable:getDatabaseSyncedDatabaseTablesSyncedTable"},"type":"array"}},"required":["instanceName","syncedTables","id"],"type":"object"}},"databricks:index/getDbfsFile:getDbfsFile":{"description":"This data source allows to get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst report = databricks.getDbfsFile({\n    path: \"dbfs:/reports/some.csv\",\n    limitFileSize: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nreport = databricks.get_dbfs_file(path=\"dbfs:/reports/some.csv\",\n    
limit_file_size=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var report = Databricks.GetDbfsFile.Invoke(new()\n    {\n        Path = \"dbfs:/reports/some.csv\",\n        LimitFileSize = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupDbfsFile(ctx, \u0026databricks.LookupDbfsFileArgs{\n\t\t\tPath:          \"dbfs:/reports/some.csv\",\n\t\t\tLimitFileSize: true,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDbfsFileArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var report = DatabricksFunctions.getDbfsFile(GetDbfsFileArgs.builder()\n            .path(\"dbfs:/reports/some.csv\")\n            .limitFileSize(true)\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  report:\n    fn::invoke:\n      function: databricks:getDbfsFile\n      arguments:\n        path: dbfs:/reports/some.csv\n        limitFileSize: 'true'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getDbfsFilePaths \" pulumi-lang-dotnet=\" databricks.getDbfsFilePaths \" pulumi-lang-go=\" getDbfsFilePaths \" pulumi-lang-python=\" get_dbfs_file_paths \" pulumi-lang-yaml=\" databricks.getDbfsFilePaths \" pulumi-lang-java=\" databricks.getDbfsFilePaths \"\u003e databricks.getDbfsFilePaths \u003c/span\u003edata to get list of file names from get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003eto manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Mount \" pulumi-lang-dotnet=\" databricks.Mount \" pulumi-lang-go=\" Mount \" pulumi-lang-python=\" Mount \" pulumi-lang-yaml=\" databricks.Mount \" pulumi-lang-java=\" databricks.Mount \"\u003e databricks.Mount \u003c/span\u003eto [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`.\n","inputs":{"description":"A collection of arguments for invoking getDbfsFile.\n","properties":{"limitFileSize":{"type":"boolean","description":"Do not load content for files larger than 4MB.\n","willReplaceOnChanges":true},"path":{"type":"string","description":"Path on DBFS for the file 
from which to get content.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getDbfsFileProviderConfig:getDbfsFileProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["limitFileSize","path"]},"outputs":{"description":"A collection of values returned by getDbfsFile.\n","properties":{"content":{"description":"base64-encoded file contents\n","type":"string"},"fileSize":{"description":"size of the file in bytes\n","type":"integer"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"limitFileSize":{"type":"boolean"},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDbfsFileProviderConfig:getDbfsFileProviderConfig"}},"required":["content","fileSize","limitFileSize","path","id"],"type":"object"}},"databricks:index/getDbfsFilePaths:getDbfsFilePaths":{"description":"This data source allows you to get a list of file names from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst partitions = databricks.getDbfsFilePaths({\n    path: \"dbfs:/user/hive/default.db/table\",\n    recursive: false,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npartitions = databricks.get_dbfs_file_paths(path=\"dbfs:/user/hive/default.db/table\",\n    recursive=False)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var partitions = Databricks.GetDbfsFilePaths.Invoke(new()\n    {\n        Path = \"dbfs:/user/hive/default.db/table\",\n        Recursive = false,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetDbfsFilePaths(ctx, \u0026databricks.GetDbfsFilePathsArgs{\n\t\t\tPath:      \"dbfs:/user/hive/default.db/table\",\n\t\t\tRecursive: false,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDbfsFilePathsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var partitions = DatabricksFunctions.getDbfsFilePaths(GetDbfsFilePathsArgs.builder()\n            .path(\"dbfs:/user/hive/default.db/table\")\n            .recursive(false)\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  partitions:\n    fn::invoke:\n      function: databricks:getDbfsFilePaths\n      arguments:\n        path: dbfs:/user/hive/default.db/table\n        recursive: 
false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003edata to get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.getDbfsFilePaths \" pulumi-lang-dotnet=\" databricks.getDbfsFilePaths \" pulumi-lang-go=\" getDbfsFilePaths \" pulumi-lang-python=\" get_dbfs_file_paths \" pulumi-lang-yaml=\" databricks.getDbfsFilePaths \" pulumi-lang-java=\" databricks.getDbfsFilePaths \"\u003e databricks.getDbfsFilePaths \u003c/span\u003edata to get list of file names from get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.DbfsFile \" pulumi-lang-dotnet=\" databricks.DbfsFile \" pulumi-lang-go=\" DbfsFile \" pulumi-lang-python=\" DbfsFile \" pulumi-lang-yaml=\" databricks.DbfsFile \" pulumi-lang-java=\" databricks.DbfsFile \"\u003e databricks.DbfsFile \u003c/span\u003eto manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Library \" pulumi-lang-dotnet=\" databricks.Library \" pulumi-lang-go=\" Library \" pulumi-lang-python=\" Library \" pulumi-lang-yaml=\" databricks.Library \" pulumi-lang-java=\" databricks.Library \"\u003e databricks.Library \u003c/span\u003eto install a [library](https://docs.databricks.com/libraries/index.html) on databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Mount \" pulumi-lang-dotnet=\" databricks.Mount \" pulumi-lang-go=\" Mount \" pulumi-lang-python=\" Mount \" pulumi-lang-yaml=\" databricks.Mount \" pulumi-lang-java=\" databricks.Mount \"\u003e databricks.Mount \u003c/span\u003eto [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`.\n","inputs":{"description":"A collection of arguments for invoking getDbfsFilePaths.\n","properties":{"path":{"type":"string","description":"Path on DBFS for the file to perform listing\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getDbfsFilePathsProviderConfig:getDbfsFilePathsProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"recursive":{"type":"boolean","description":"Whether or not to recursively list all files\n","willReplaceOnChanges":true}},"type":"object","required":["path","recursive"]},"outputs":{"description":"A collection of values returned by getDbfsFilePaths.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"path":{"type":"string"},"pathLists":{"description":"returns a list of objects with \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`fileSize`\" pulumi-lang-dotnet=\"`FileSize`\" pulumi-lang-go=\"`fileSize`\" pulumi-lang-python=\"`file_size`\" pulumi-lang-yaml=\"`fileSize`\" pulumi-lang-java=\"`fileSize`\"\u003e`file_size`\u003c/span\u003e attributes in each\n","items":{"$ref":"#/types/databricks:index/getDbfsFilePathsPathList:getDbfsFilePathsPathList"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getDbfsFilePathsProviderConfig:getDbfsFilePathsProviderConfig"},"recursive":{"type":"boolean"}},"required":["path","pathLists","recursive","id"],"type":"object"}},"databricks:index/getDirectory:getDirectory":{"description":"This data source allows you to get information about a directory in a Databricks Workspace.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst prod = databricks.getDirectory({\n    path: \"/Production\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nprod = databricks.get_directory(path=\"/Production\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var prod = Databricks.GetDirectory.Invoke(new()\n    {\n        Path = \"/Production\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupDirectory(ctx, \u0026databricks.LookupDirectoryArgs{\n\t\t\tPath: \"/Production\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetDirectoryArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var prod = DatabricksFunctions.getDirectory(GetDirectoryArgs.builder()\n            .path(\"/Production\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  prod:\n    fn::invoke:\n      function: databricks:getDirectory\n      arguments:\n        path: /Production\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A 
collection of arguments for invoking getDirectory.\n","properties":{"id":{"type":"string"},"objectId":{"type":"integer","description":"directory object ID\n"},"path":{"type":"string","description":"Path to a directory in the workspace\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getDirectoryProviderConfig:getDirectoryProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"workspacePath":{"type":"string","description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n"}},"type":"object","required":["path"]},"outputs":{"description":"A collection of values returned by getDirectory.\n","properties":{"id":{"type":"string"},"objectId":{"description":"directory object ID\n","type":"integer"},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getDirectoryProviderConfig:getDirectoryProviderConfig"},"workspacePath":{"description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n","type":"string"}},"required":["id","objectId","path","workspacePath"],"type":"object"}},"databricks:index/getEndpoint:getEndpoint":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nEndpoint datasource retrieves information about a single network connectivity endpoint for private access to Databricks workspaces.\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n\n## Example Usage\n\n### Example for Azure cloud\nThis is an example for getting an endpoint in Azure cloud:\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getEndpoint\n      arguments:\n        accountId: eae3abf6-1496-494e-9983-4660a5ad5aab\n        endpointId: endpoint-123\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getEndpoint.\n","properties":{"name":{"type":"string","description":"The resource name of the endpoint, which uniquely identifies the endpoint\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getEndpoint.\n","properties":{"accountId":{"description":"(string) - The Databricks Account in which the endpoint object exists\n","type":"string"},"azurePrivateEndpointInfo":{"$ref":"#/types/databricks:index/getEndpointAzurePrivateEndpointInfo:getEndpointAzurePrivateEndpointInfo","description":"(AzurePrivateEndpointInfo) - Info for an Azure private endpoint\n"},"createTime":{"description":"(string) - The timestamp when the endpoint was created. 
The timestamp is in RFC 3339 format in UTC timezone\n","type":"string"},"displayName":{"description":"(string) - The human-readable display name of this endpoint.\nThe input should conform to RFC-1034, which restricts to letters, numbers, and hyphens,\nwith the first character a letter, the last a letter or a number, and a 63 character maximum\n","type":"string"},"endpointId":{"description":"(string) - The unique identifier for this endpoint under the account. This field is a UUID generated by Databricks\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - The resource name of the endpoint, which uniquely identifies the endpoint\n","type":"string"},"region":{"description":"(string) - The cloud provider region where this endpoint is located\n","type":"string"},"state":{"description":"(string) - The state of the endpoint. The endpoint can only be used if the state is `APPROVED`. Possible values are: `APPROVED`, `DISCONNECTED`, `FAILED`, `PENDING`\n","type":"string"},"useCase":{"description":"(string) - The use case that determines the type of network connectivity this endpoint provides.\nThis field is automatically determined based on the endpoint configuration and cloud-specific settings. Possible values are: `SERVICE_DIRECT`\n","type":"string"}},"required":["accountId","azurePrivateEndpointInfo","createTime","displayName","endpointId","name","region","state","useCase","id"],"type":"object"}},"databricks:index/getEndpoints:getEndpoints":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nEndpoints datasource retrieves a list of all network connectivity endpoints for private access to Databricks workspaces.\n\n\u003e **Note** This resource can only be used with an account-level provider!\n\n\n## Example Usage\n\n### Example for Azure cloud\nThis is an example for listing endpoints in Azure cloud:\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getEndpoints\n      arguments:\n        accountId: eae3abf6-1496-494e-9983-4660a5ad5aab\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getEndpoints.\n","properties":{"pageSize":{"type":"integer"},"parent":{"type":"string"}},"type":"object","required":["parent"]},"outputs":{"description":"A collection of values returned by getEndpoints.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"items":{"items":{"$ref":"#/types/databricks:index/getEndpointsItem:getEndpointsItem"},"type":"array"},"pageSize":{"type":"integer"},"parent":{"type":"string"}},"required":["items","parent","id"],"type":"object"}},"databricks:index/getEntityTagAssignment:getEntityTagAssignment":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source allows you to get information about a tag assignment for a specific entity using the entity type, entity name, and tag key.\n\n## Example Usage\n\n### Get environment tag from a catalog\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst catalogTag = 
databricks.getEntityTagAssignment({\n    entityType: \"catalogs\",\n    entityName: \"production_catalog\",\n    tagKey: \"environment\",\n});\nconst schemaTag = databricks.getEntityTagAssignment({\n    entityType: \"schemas\",\n    entityName: \"production_catalog.analytics_data\",\n    tagKey: \"cost_center\",\n});\nconst tableTag = databricks.getEntityTagAssignment({\n    entityType: \"tables\",\n    entityName: \"production_catalog.sales_data.customer_orders\",\n    tagKey: \"owner\",\n});\nconst columnTag = databricks.getEntityTagAssignment({\n    entityType: \"columns\",\n    entityName: \"production_catalog.customer_data.users.email_address\",\n    tagKey: \"pii_classification\",\n});\nconst volumeTag = databricks.getEntityTagAssignment({\n    entityType: \"volumes\",\n    entityName: \"production_catalog.raw_data.landing_zone\",\n    tagKey: \"purpose\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncatalog_tag = databricks.get_entity_tag_assignment(entity_type=\"catalogs\",\n    entity_name=\"production_catalog\",\n    tag_key=\"environment\")\nschema_tag = databricks.get_entity_tag_assignment(entity_type=\"schemas\",\n    entity_name=\"production_catalog.analytics_data\",\n    tag_key=\"cost_center\")\ntable_tag = databricks.get_entity_tag_assignment(entity_type=\"tables\",\n    entity_name=\"production_catalog.sales_data.customer_orders\",\n    tag_key=\"owner\")\ncolumn_tag = databricks.get_entity_tag_assignment(entity_type=\"columns\",\n    entity_name=\"production_catalog.customer_data.users.email_address\",\n    tag_key=\"pii_classification\")\nvolume_tag = databricks.get_entity_tag_assignment(entity_type=\"volumes\",\n    entity_name=\"production_catalog.raw_data.landing_zone\",\n    tag_key=\"purpose\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var catalogTag = Databricks.GetEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"catalogs\",\n        EntityName = \"production_catalog\",\n        TagKey = \"environment\",\n    });\n\n    var schemaTag = Databricks.GetEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"schemas\",\n        EntityName = \"production_catalog.analytics_data\",\n        TagKey = \"cost_center\",\n    });\n\n    var tableTag = Databricks.GetEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"tables\",\n        EntityName = \"production_catalog.sales_data.customer_orders\",\n        TagKey = \"owner\",\n    });\n\n    var columnTag = Databricks.GetEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"columns\",\n        EntityName = \"production_catalog.customer_data.users.email_address\",\n        TagKey = \"pii_classification\",\n    });\n\n    var volumeTag = Databricks.GetEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"volumes\",\n        EntityName = \"production_catalog.raw_data.landing_zone\",\n        TagKey = \"purpose\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupEntityTagAssignment(ctx, \u0026databricks.LookupEntityTagAssignmentArgs{\n\t\t\tEntityType: \"catalogs\",\n\t\t\tEntityName: \"production_catalog\",\n\t\t\tTagKey:     \"environment\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\t_, err = databricks.LookupEntityTagAssignment(ctx, \u0026databricks.LookupEntityTagAssignmentArgs{\n\t\t\tEntityType: \"schemas\",\n\t\t\tEntityName: \"production_catalog.analytics_data\",\n\t\t\tTagKey:     \"cost_center\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupEntityTagAssignment(ctx, \u0026databricks.LookupEntityTagAssignmentArgs{\n\t\t\tEntityType: \"tables\",\n\t\t\tEntityName: \"production_catalog.sales_data.customer_orders\",\n\t\t\tTagKey:     \"owner\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupEntityTagAssignment(ctx, \u0026databricks.LookupEntityTagAssignmentArgs{\n\t\t\tEntityType: \"columns\",\n\t\t\tEntityName: \"production_catalog.customer_data.users.email_address\",\n\t\t\tTagKey:     \"pii_classification\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupEntityTagAssignment(ctx, \u0026databricks.LookupEntityTagAssignmentArgs{\n\t\t\tEntityType: \"volumes\",\n\t\t\tEntityName: \"production_catalog.raw_data.landing_zone\",\n\t\t\tTagKey:     \"purpose\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetEntityTagAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var catalogTag = DatabricksFunctions.getEntityTagAssignment(GetEntityTagAssignmentArgs.builder()\n            .entityType(\"catalogs\")\n            .entityName(\"production_catalog\")\n            .tagKey(\"environment\")\n            .build());\n\n        final var schemaTag = DatabricksFunctions.getEntityTagAssignment(GetEntityTagAssignmentArgs.builder()\n            .entityType(\"schemas\")\n            .entityName(\"production_catalog.analytics_data\")\n            .tagKey(\"cost_center\")\n            .build());\n\n        final var tableTag = DatabricksFunctions.getEntityTagAssignment(GetEntityTagAssignmentArgs.builder()\n            .entityType(\"tables\")\n            .entityName(\"production_catalog.sales_data.customer_orders\")\n            .tagKey(\"owner\")\n            .build());\n\n        final var columnTag = DatabricksFunctions.getEntityTagAssignment(GetEntityTagAssignmentArgs.builder()\n            .entityType(\"columns\")\n            .entityName(\"production_catalog.customer_data.users.email_address\")\n            .tagKey(\"pii_classification\")\n            .build());\n\n        final var volumeTag = DatabricksFunctions.getEntityTagAssignment(GetEntityTagAssignmentArgs.builder()\n            .entityType(\"volumes\")\n            .entityName(\"production_catalog.raw_data.landing_zone\")\n            .tagKey(\"purpose\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  catalogTag:\n    fn::invoke:\n      function: databricks:getEntityTagAssignment\n      arguments:\n        entityType: catalogs\n        entityName: production_catalog\n        tagKey: environment\n  schemaTag:\n    fn::invoke:\n      function: databricks:getEntityTagAssignment\n      arguments:\n        entityType: 
schemas\n        entityName: production_catalog.analytics_data\n        tagKey: cost_center\n  tableTag:\n    fn::invoke:\n      function: databricks:getEntityTagAssignment\n      arguments:\n        entityType: tables\n        entityName: production_catalog.sales_data.customer_orders\n        tagKey: owner\n  columnTag:\n    fn::invoke:\n      function: databricks:getEntityTagAssignment\n      arguments:\n        entityType: columns\n        entityName: production_catalog.customer_data.users.email_address\n        tagKey: pii_classification\n  volumeTag:\n    fn::invoke:\n      function: databricks:getEntityTagAssignment\n      arguments:\n        entityType: volumes\n        entityName: production_catalog.raw_data.landing_zone\n        tagKey: purpose\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getEntityTagAssignment.\n","properties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"providerConfig":{"$ref":"#/types/databricks:index/getEntityTagAssignmentProviderConfig:getEntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag\n"}},"type":"object","required":["entityName","entityType","tagKey"]},"outputs":{"description":"A collection of values returned by getEntityTagAssignment.\n","properties":{"entityName":{"description":"(string) - The fully qualified name of the entity to which the tag is assigned\n","type":"string"},"entityType":{"description":"(string) - The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getEntityTagAssignmentProviderConfig:getEntityTagAssignmentProviderConfig"},"sourceType":{"description":"(string) - The source type of the tag assignment, e.g., user-assigned or system-assigned. 
Possible values are: `TAG_ASSIGNMENT_SOURCE_TYPE_SYSTEM_DATA_CLASSIFICATION`\n","type":"string"},"tagKey":{"description":"(string) - The key of the tag\n","type":"string"},"tagValue":{"description":"(string) - The value of the tag\n","type":"string"},"updateTime":{"description":"(string) - The timestamp when the tag assignment was last updated\n","type":"string"},"updatedBy":{"description":"(string) - The user or principal who updated the tag assignment\n","type":"string"}},"required":["entityName","entityType","sourceType","tagKey","tagValue","updateTime","updatedBy","id"],"type":"object"}},"databricks:index/getEntityTagAssignments:getEntityTagAssignments":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source allows you to retrieve tag assignments that have been applied to a particular entity in Unity Catalog.\n\n## Example Usage\n\n### Get all tag assignments for a catalog\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst catalogTags = databricks.getEntityTagAssignments({\n    entityType: \"catalogs\",\n    entityName: \"production_catalog\",\n});\nconst schemaTags = databricks.getEntityTagAssignments({\n    entityType: \"schemas\",\n    entityName: \"production_catalog.sales_data\",\n});\nconst tableTags = databricks.getEntityTagAssignments({\n    entityType: \"tables\",\n    entityName: \"production_catalog.sales_data.customer_orders\",\n});\nconst columnTags = databricks.getEntityTagAssignments({\n    entityType: \"columns\",\n    entityName: \"production_catalog.customer_data.users.email_address\",\n});\nconst volumeTags = databricks.getEntityTagAssignments({\n    entityType: \"volumes\",\n    entityName: \"production_catalog.raw_data.landing_zone\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncatalog_tags = databricks.get_entity_tag_assignments(entity_type=\"catalogs\",\n    entity_name=\"production_catalog\")\nschema_tags = databricks.get_entity_tag_assignments(entity_type=\"schemas\",\n    entity_name=\"production_catalog.sales_data\")\ntable_tags = databricks.get_entity_tag_assignments(entity_type=\"tables\",\n    entity_name=\"production_catalog.sales_data.customer_orders\")\ncolumn_tags = databricks.get_entity_tag_assignments(entity_type=\"columns\",\n    entity_name=\"production_catalog.customer_data.users.email_address\")\nvolume_tags = databricks.get_entity_tag_assignments(entity_type=\"volumes\",\n    entity_name=\"production_catalog.raw_data.landing_zone\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var catalogTags = Databricks.GetEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"catalogs\",\n        EntityName = \"production_catalog\",\n    });\n\n    var schemaTags = Databricks.GetEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"schemas\",\n        EntityName = \"production_catalog.sales_data\",\n    });\n\n    var tableTags = Databricks.GetEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"tables\",\n        EntityName = \"production_catalog.sales_data.customer_orders\",\n    });\n\n    var columnTags = Databricks.GetEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"columns\",\n        
EntityName = \"production_catalog.customer_data.users.email_address\",\n    });\n\n    var volumeTags = Databricks.GetEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"volumes\",\n        EntityName = \"production_catalog.raw_data.landing_zone\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetEntityTagAssignments(ctx, \u0026databricks.GetEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"catalogs\",\n\t\t\tEntityName: \"production_catalog\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetEntityTagAssignments(ctx, \u0026databricks.GetEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"schemas\",\n\t\t\tEntityName: \"production_catalog.sales_data\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetEntityTagAssignments(ctx, \u0026databricks.GetEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"tables\",\n\t\t\tEntityName: \"production_catalog.sales_data.customer_orders\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetEntityTagAssignments(ctx, \u0026databricks.GetEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"columns\",\n\t\t\tEntityName: \"production_catalog.customer_data.users.email_address\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetEntityTagAssignments(ctx, \u0026databricks.GetEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"volumes\",\n\t\t\tEntityName: \"production_catalog.raw_data.landing_zone\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetEntityTagAssignmentsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var catalogTags = DatabricksFunctions.getEntityTagAssignments(GetEntityTagAssignmentsArgs.builder()\n            .entityType(\"catalogs\")\n            .entityName(\"production_catalog\")\n            .build());\n\n        final var schemaTags = DatabricksFunctions.getEntityTagAssignments(GetEntityTagAssignmentsArgs.builder()\n            .entityType(\"schemas\")\n            .entityName(\"production_catalog.sales_data\")\n            .build());\n\n        final var tableTags = DatabricksFunctions.getEntityTagAssignments(GetEntityTagAssignmentsArgs.builder()\n            .entityType(\"tables\")\n            .entityName(\"production_catalog.sales_data.customer_orders\")\n            .build());\n\n        final var columnTags = DatabricksFunctions.getEntityTagAssignments(GetEntityTagAssignmentsArgs.builder()\n            .entityType(\"columns\")\n            .entityName(\"production_catalog.customer_data.users.email_address\")\n            .build());\n\n        final var volumeTags = DatabricksFunctions.getEntityTagAssignments(GetEntityTagAssignmentsArgs.builder()\n            .entityType(\"volumes\")\n            .entityName(\"production_catalog.raw_data.landing_zone\")\n        
    .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  catalogTags:\n    fn::invoke:\n      function: databricks:getEntityTagAssignments\n      arguments:\n        entityType: catalogs\n        entityName: production_catalog\n  schemaTags:\n    fn::invoke:\n      function: databricks:getEntityTagAssignments\n      arguments:\n        entityType: schemas\n        entityName: production_catalog.sales_data\n  tableTags:\n    fn::invoke:\n      function: databricks:getEntityTagAssignments\n      arguments:\n        entityType: tables\n        entityName: production_catalog.sales_data.customer_orders\n  columnTags:\n    fn::invoke:\n      function: databricks:getEntityTagAssignments\n      arguments:\n        entityType: columns\n        entityName: production_catalog.customer_data.users.email_address\n  volumeTags:\n    fn::invoke:\n      function: databricks:getEntityTagAssignments\n      arguments:\n        entityType: volumes\n        entityName: production_catalog.raw_data.landing_zone\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getEntityTagAssignments.\n","properties":{"entityName":{"type":"string","description":"The fully qualified name of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of the entity to which the tag is assigned. Allowed values are: catalogs, schemas, tables, columns, volumes\n"},"maxResults":{"type":"integer","description":"Optional. Maximum number of tag assignments to return in a single page\n"},"providerConfig":{"$ref":"#/types/databricks:index/getEntityTagAssignmentsProviderConfig:getEntityTagAssignmentsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["entityName","entityType"]},"outputs":{"description":"A collection of values returned by getEntityTagAssignments.\n","properties":{"entityName":{"description":"(string) - The fully qualified name of the entity to which the tag is assigned\n","type":"string"},"entityType":{"description":"(string) - The type of the entity to which the tag is assigned. 
Allowed values are: catalogs, schemas, tables, columns, volumes\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"maxResults":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getEntityTagAssignmentsProviderConfig:getEntityTagAssignmentsProviderConfig"},"tagAssignments":{"items":{"$ref":"#/types/databricks:index/getEntityTagAssignmentsTagAssignment:getEntityTagAssignmentsTagAssignment"},"type":"array"}},"required":["entityName","entityType","tagAssignments","id"],"type":"object"}},"databricks:index/getExternalLocation:getExternalLocation":{"description":"Retrieves details about a\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003ethat was created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting details of an existing external location in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getExternalLocation({\n    name: \"this\",\n});\nexport const createdBy = _this.then(_this =\u003e _this.externalLocationInfo?.createdBy);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_external_location(name=\"this\")\npulumi.export(\"createdBy\", this.external_location_info.created_by)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetExternalLocation.Invoke(new()\n    {\n        Name = \"this\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"createdBy\"] = @this.Apply(@this =\u003e @this.Apply(getExternalLocationResult =\u003e getExternalLocationResult.ExternalLocationInfo?.CreatedBy)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupExternalLocation(ctx, \u0026databricks.LookupExternalLocationArgs{\n\t\t\tName: \"this\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"createdBy\", this.ExternalLocationInfo.CreatedBy)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetExternalLocationArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getExternalLocation(GetExternalLocationArgs.builder()\n            .name(\"this\")\n            .build());\n\n        ctx.export(\"createdBy\", this_.externalLocationInfo().createdBy());\n  
  }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getExternalLocation\n      arguments:\n        name: this\noutputs:\n  createdBy: ${this.externalLocationInfo.createdBy}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getExternalLocations \" pulumi-lang-dotnet=\" databricks.getExternalLocations \" pulumi-lang-go=\" getExternalLocations \" pulumi-lang-python=\" get_external_locations \" pulumi-lang-yaml=\" databricks.getExternalLocations \" pulumi-lang-java=\" databricks.getExternalLocations \"\u003e databricks.getExternalLocations \u003c/span\u003eto get names of all external locations\n*\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eto manage external locations within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getExternalLocation.\n","properties":{"externalLocationInfo":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfo:getExternalLocationExternalLocationInfo","description":"array of objects with information about external location:\n"},"id":{"type":"string","description":"external location ID - same as name.\n"},"name":{"type":"string","description":"The name of the external location\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getExternalLocationProviderConfig:getExternalLocationProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getExternalLocation.\n","properties":{"externalLocationInfo":{"$ref":"#/types/databricks:index/getExternalLocationExternalLocationInfo:getExternalLocationExternalLocationInfo","description":"array of objects with information about external location:\n"},"id":{"description":"external location ID - same as name.\n","type":"string"},"name":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalLocationProviderConfig:getExternalLocationProviderConfig"}},"required":["externalLocationInfo","id","name"],"type":"object"}},"databricks:index/getExternalLocations:getExternalLocations":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eobjects, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nList all external locations in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getExternalLocations({});\nexport const allExternalLocations = all.then(all =\u003e all.names);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_external_locations()\npulumi.export(\"allExternalLocations\", all.names)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetExternalLocations.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allExternalLocations\"] = all.Apply(getExternalLocationsResult =\u003e getExternalLocationsResult.Names),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetExternalLocations(ctx, \u0026databricks.GetExternalLocationsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allExternalLocations\", all.Names)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetExternalLocationsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getExternalLocations(GetExternalLocationsArgs.builder()\n            .build());\n\n        ctx.export(\"allExternalLocations\", all.names());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: 
databricks:getExternalLocations\n      arguments: {}\noutputs:\n  allExternalLocations: ${all.names}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eto get information about a single external location\n*\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003eto manage external locations within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getExternalLocations.\n","properties":{"names":{"type":"array","items":{"type":"string"},"description":"List of names of\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003ein the metastore\n"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalLocationsProviderConfig:getExternalLocationsProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getExternalLocations.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"names":{"description":"List of names of\u003cspan pulumi-lang-nodejs=\" databricks.ExternalLocation \" pulumi-lang-dotnet=\" databricks.ExternalLocation \" pulumi-lang-go=\" ExternalLocation \" pulumi-lang-python=\" ExternalLocation \" pulumi-lang-yaml=\" databricks.ExternalLocation \" pulumi-lang-java=\" databricks.ExternalLocation \"\u003e databricks.ExternalLocation \u003c/span\u003ein the metastore\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalLocationsProviderConfig:getExternalLocationsProviderConfig"}},"required":["names","id"],"type":"object"}},"databricks:index/getExternalMetadata:getExternalMetadata":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single external metadata object.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n## Example Usage\n\nReferring to an external metadata object by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getExternalMetadata({\n    name: \"security_events_stream\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_external_metadata(name=\"security_events_stream\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetExternalMetadata.Invoke(new()\n    {\n        Name = \"security_events_stream\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupExternalMetadata(ctx, \u0026databricks.LookupExternalMetadataArgs{\n\t\t\tName: \"security_events_stream\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetExternalMetadataArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getExternalMetadata(GetExternalMetadataArgs.builder()\n            .name(\"security_events_stream\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getExternalMetadata\n      arguments:\n        name: security_events_stream\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking 
getExternalMetadata.\n","properties":{"name":{"type":"string","description":"Name of the external metadata object\n"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalMetadataProviderConfig:getExternalMetadataProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getExternalMetadata.\n","properties":{"columns":{"description":"(list of string) - List of columns associated with the external metadata object\n","items":{"type":"string"},"type":"array"},"createTime":{"description":"(string) - Time at which this external metadata object was created\n","type":"string"},"createdBy":{"description":"(string) - Username of external metadata object creator\n","type":"string"},"description":{"description":"(string) - User-provided free-form text description\n","type":"string"},"entityType":{"description":"(string) - Type of entity within the external system\n","type":"string"},"id":{"description":"(string) - Unique identifier of the external metadata object\n","type":"string"},"metastoreId":{"description":"(string) - Unique identifier of parent metastore\n","type":"string"},"name":{"description":"(string) - Name of the external metadata object\n","type":"string"},"owner":{"description":"(string) - Owner of the external metadata object\n","type":"string"},"properties":{"additionalProperties":{"type":"string"},"description":"(object) - A map of key-value properties attached to the external metadata object\n","type":"object"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalMetadataProviderConfig:getExternalMetadataProviderConfig"},"systemType":{"description":"(string) - Type of external system. Possible values are: `AMAZON_REDSHIFT`, `AZURE_SYNAPSE`, `CONFLUENT`, `DATABRICKS`, `GOOGLE_BIGQUERY`, `KAFKA`, `LOOKER`, `MICROSOFT_FABRIC`, `MICROSOFT_SQL_SERVER`, `MONGODB`, `MYSQL`, `ORACLE`, `OTHER`, `POSTGRESQL`, `POWER_BI`, `SALESFORCE`, `SAP`, `SERVICENOW`, `SNOWFLAKE`, `STREAM_NATIVE`, `TABLEAU`, `TERADATA`, `WORKDAY`\n","type":"string"},"updateTime":{"description":"(string) - Time at which this external metadata object was last modified\n","type":"string"},"updatedBy":{"description":"(string) - Username of user who last modified external metadata object\n","type":"string"},"url":{"description":"(string) - URL associated with the external metadata object\n","type":"string"}},"required":["columns","createTime","createdBy","description","entityType","id","metastoreId","name","owner","properties","systemType","updateTime","updatedBy","url"],"type":"object"}},"databricks:index/getExternalMetadatas:getExternalMetadatas":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of external metadata objects.\n\n\u003e **Note** This resource can only be used with an workspace-level provider!\n\n## Example Usage\n\nGetting a list of all external metadata objects:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getExternalMetadatas({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_external_metadatas()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = 
Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetExternalMetadatas.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetExternalMetadatas(ctx, \u0026databricks.GetExternalMetadatasArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetExternalMetadatasArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getExternalMetadatas(GetExternalMetadatasArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getExternalMetadatas\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getExternalMetadatas.\n","properties":{"pageSize":{"type":"integer","description":"Specifies the maximum number of external metadata objects to return in a single response.\nThe value must be less than or equal to 1000\n"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalMetadatasProviderConfig:getExternalMetadatasProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getExternalMetadatas.\n","properties":{"externalMetadatas":{"items":{"$ref":"#/types/databricks:index/getExternalMetadatasExternalMetadata:getExternalMetadatasExternalMetadata"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getExternalMetadatasProviderConfig:getExternalMetadatasProviderConfig"}},"required":["externalMetadatas","id"],"type":"object"}},"databricks:index/getFeatureEngineeringFeature:getFeatureEngineeringFeature":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringFeature.\n","properties":{"fullName":{"type":"string","description":"The full three-part name (catalog, schema, name) of the feature\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureProviderConfig:getFeatureEngineeringFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["fullName"]},"outputs":{"description":"A collection of values returned by getFeatureEngineeringFeature.\n","properties":{"description":{"description":"(string) - The description of the feature\n","type":"string"},"filterCondition":{"description":"(string) - The filter condition applied to the source data before 
aggregation\n","type":"string"},"fullName":{"description":"(string) - The full three-part (catalog, schema, table) name of the Delta table\n","type":"string"},"function":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureFunction:getFeatureEngineeringFeatureFunction","description":"(Function) - The function by which the feature is computed\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"inputs":{"description":"(list of string) - The input columns from which the feature is computed\n","items":{"type":"string"},"type":"array"},"lineageContext":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureLineageContext:getFeatureEngineeringFeatureLineageContext","description":"(LineageContext) - WARNING: This field is primarily intended for internal use by Databricks systems and\nis automatically populated when features are created through Databricks notebooks or jobs.\nUsers should not manually set this field as incorrect values may lead to inaccurate lineage tracking or unexpected behavior.\nThis field will be set by feature-engineering client and should be left unset by SDK and terraform users\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureProviderConfig:getFeatureEngineeringFeatureProviderConfig"},"source":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureSource:getFeatureEngineeringFeatureSource","description":"(DataSource) - The data source of the feature\n"},"timeWindow":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeatureTimeWindow:getFeatureEngineeringFeatureTimeWindow","description":"(TimeWindow) - The time window in which the feature is computed\n"}},"required":["description","filterCondition","fullName","function","inputs","lineageContext","source","timeWindow","id"],"type":"object"}},"databricks:index/getFeatureEngineeringFeatures:getFeatureEngineeringFeatures":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringFeatures.\n","properties":{"pageSize":{"type":"integer","description":"The maximum number of results to return\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesProviderConfig:getFeatureEngineeringFeaturesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getFeatureEngineeringFeatures.\n","properties":{"features":{"items":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesFeature:getFeatureEngineeringFeaturesFeature"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringFeaturesProviderConfig:getFeatureEngineeringFeaturesProviderConfig"}},"required":["features","id"],"type":"object"}},"databricks:index/getFeatureEngineeringKafkaConfig:getFeatureEngineeringKafkaConfig":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringKafkaConfig.\n","properties":{"name":{"type":"string","description":"Name that uniquely 
identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature.\nCan be distinct from topic name\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigProviderConfig:getFeatureEngineeringKafkaConfigProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getFeatureEngineeringKafkaConfig.\n","properties":{"authConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigAuthConfig:getFeatureEngineeringKafkaConfigAuthConfig","description":"(AuthConfig) - Authentication configuration for connection to topics\n"},"backfillSource":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigBackfillSource:getFeatureEngineeringKafkaConfigBackfillSource","description":"(BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config.\nIn the future, a separate table will be maintained by Databricks for forward filling data.\nThe schema for this source must match exactly that of the key and value schemas specified for this Kafka config\n"},"bootstrapServers":{"description":"(string) - A comma-separated list of host/port pairs pointing to Kafka cluster\n","type":"string"},"extraOptions":{"additionalProperties":{"type":"string"},"description":"(object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)\n","type":"object"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"keySchema":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigKeySchema:getFeatureEngineeringKafkaConfigKeySchema","description":"(SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"},"name":{"description":"(string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature.\nCan be distinct from topic name\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigProviderConfig:getFeatureEngineeringKafkaConfigProviderConfig"},"subscriptionMode":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigSubscriptionMode:getFeatureEngineeringKafkaConfigSubscriptionMode","description":"(SubscriptionMode) - Options to configure which Kafka topics to pull data from\n"},"valueSchema":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigValueSchema:getFeatureEngineeringKafkaConfigValueSchema","description":"(SchemaConfig) - Schema configuration for extracting message values from topics. 
At least one of\u003cspan pulumi-lang-nodejs=\" keySchema \" pulumi-lang-dotnet=\" KeySchema \" pulumi-lang-go=\" keySchema \" pulumi-lang-python=\" key_schema \" pulumi-lang-yaml=\" keySchema \" pulumi-lang-java=\" keySchema \"\u003e key_schema \u003c/span\u003eand\u003cspan pulumi-lang-nodejs=\" valueSchema \" pulumi-lang-dotnet=\" ValueSchema \" pulumi-lang-go=\" valueSchema \" pulumi-lang-python=\" value_schema \" pulumi-lang-yaml=\" valueSchema \" pulumi-lang-java=\" valueSchema \"\u003e value_schema \u003c/span\u003emust be provided\n"}},"required":["authConfig","backfillSource","bootstrapServers","extraOptions","keySchema","name","subscriptionMode","valueSchema","id"],"type":"object"}},"databricks:index/getFeatureEngineeringKafkaConfigs:getFeatureEngineeringKafkaConfigs":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringKafkaConfigs.\n","properties":{"pageSize":{"type":"integer","description":"The maximum number of results to return\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsProviderConfig:getFeatureEngineeringKafkaConfigsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getFeatureEngineeringKafkaConfigs.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"kafkaConfigs":{"items":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsKafkaConfig:getFeatureEngineeringKafkaConfigsKafkaConfig"},"type":"array"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringKafkaConfigsProviderConfig:getFeatureEngineeringKafkaConfigsProviderConfig"}},"required":["kafkaConfigs","id"],"type":"object"}},"databricks:index/getFeatureEngineeringMaterializedFeature:getFeatureEngineeringMaterializedFeature":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringMaterializedFeature.\n","properties":{"materializedFeatureId":{"type":"string","description":"Unique identifier for the materialized feature\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeatureProviderConfig:getFeatureEngineeringMaterializedFeatureProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["materializedFeatureId"]},"outputs":{"description":"A collection of values returned by getFeatureEngineeringMaterializedFeature.\n","properties":{"cronSchedule":{"description":"(string) - The quartz cron expression that defines the schedule of the materialization pipeline. 
The schedule is evaluated in the UTC timezone\n","type":"string"},"featureName":{"description":"(string) - The full name of the feature in Unity Catalog\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"lastMaterializationTime":{"description":"(string) - The timestamp when the pipeline last ran and updated the materialized feature values.\nIf the pipeline has not run yet, this field will be null\n","type":"string"},"materializedFeatureId":{"description":"(string) - Unique identifier for the materialized feature\n","type":"string"},"offlineStoreConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeatureOfflineStoreConfig:getFeatureEngineeringMaterializedFeatureOfflineStoreConfig","description":"(OfflineStoreConfig)\n"},"onlineStoreConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeatureOnlineStoreConfig:getFeatureEngineeringMaterializedFeatureOnlineStoreConfig","description":"(OnlineStoreConfig)\n"},"pipelineScheduleState":{"description":"(string) - The schedule state of the materialization pipeline. Possible values are: `ACTIVE`, `PAUSED`, `SNAPSHOT`\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeatureProviderConfig:getFeatureEngineeringMaterializedFeatureProviderConfig"},"tableName":{"description":"(string) - The fully qualified Unity Catalog path to the table containing the materialized feature (Delta table or Lakebase table). Output only\n","type":"string"}},"required":["cronSchedule","featureName","lastMaterializationTime","materializedFeatureId","offlineStoreConfig","onlineStoreConfig","pipelineScheduleState","tableName","id"],"type":"object"}},"databricks:index/getFeatureEngineeringMaterializedFeatures:getFeatureEngineeringMaterializedFeatures":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getFeatureEngineeringMaterializedFeatures.\n","properties":{"featureName":{"type":"string","description":"Filter by feature name. If specified, only materialized features materialized from this feature will be returned\n"},"pageSize":{"type":"integer","description":"The maximum number of results to return. Defaults to 100 if not specified. 
Cannot be greater than 1000\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesProviderConfig:getFeatureEngineeringMaterializedFeaturesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getFeatureEngineeringMaterializedFeatures.\n","properties":{"featureName":{"description":"(string) - The full name of the feature in Unity Catalog\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"materializedFeatures":{"items":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesMaterializedFeature:getFeatureEngineeringMaterializedFeaturesMaterializedFeature"},"type":"array"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getFeatureEngineeringMaterializedFeaturesProviderConfig:getFeatureEngineeringMaterializedFeaturesProviderConfig"}},"required":["materializedFeatures","id"],"type":"object"}},"databricks:index/getFunctions:getFunctions":{"description":"Retrieves a list of [User-Defined Functions (UDFs) registered in the Unity Catalog](https://docs.databricks.com/en/udf/unity-catalog.html).\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nList all functions defined in a specific schema (`main.default` in this example):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getFunctions({\n    catalogName: \"main\",\n    schemaName: \"default\",\n});\nexport const allExternalLocations = all.then(all =\u003e all.functions);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_functions(catalog_name=\"main\",\n    schema_name=\"default\")\npulumi.export(\"allExternalLocations\", all.functions)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetFunctions.Invoke(new()\n    {\n        CatalogName = \"main\",\n        SchemaName = \"default\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allExternalLocations\"] = all.Apply(getFunctionsResult =\u003e getFunctionsResult.Functions),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetFunctions(ctx, \u0026databricks.GetFunctionsArgs{\n\t\t\tCatalogName: \"main\",\n\t\t\tSchemaName:  \"default\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allExternalLocations\", all.Functions)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetFunctionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context 
ctx) {\n        final var all = DatabricksFunctions.getFunctions(GetFunctionsArgs.builder()\n            .catalogName(\"main\")\n            .schemaName(\"default\")\n            .build());\n\n        ctx.export(\"allExternalLocations\", all.functions());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getFunctions\n      arguments:\n        catalogName: main\n        schemaName: default\noutputs:\n  allExternalLocations: ${all.functions}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto get information about a single schema\n","inputs":{"description":"A collection of arguments for invoking getFunctions.\n","properties":{"catalogName":{"type":"string","description":"Name of databricks_catalog.\n"},"functions":{"type":"array","items":{"$ref":"#/types/databricks:index/getFunctionsFunction:getFunctionsFunction"},"description":"list of objects describing individual UDF. Each object consists of the following attributes (refer to [REST API documentation](https://docs.databricks.com/api/workspace/functions/list#functions) for up-to-date list of attributes. Default type is String):\n"},"includeBrowse":{"type":"boolean","description":"flag to specify if include UDFs in the response for which the principal can only access selective metadata for.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getFunctionsProviderConfig:getFunctionsProviderConfig"},"schemaName":{"type":"string","description":"Name of databricks_schema.\n"}},"type":"object","required":["catalogName","schemaName"]},"outputs":{"description":"A collection of values returned by getFunctions.\n","properties":{"catalogName":{"description":"Name of parent catalog.\n","type":"string"},"functions":{"description":"list of objects describing individual UDF. Each object consists of the following attributes (refer to [REST API documentation](https://docs.databricks.com/api/workspace/functions/list#functions) for up-to-date list of attributes. 
Default type is String):\n","items":{"$ref":"#/types/databricks:index/getFunctionsFunction:getFunctionsFunction"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"includeBrowse":{"type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/getFunctionsProviderConfig:getFunctionsProviderConfig"},"schemaName":{"description":"Name of parent schema relative to its parent catalog.\n","type":"string"}},"required":["catalogName","functions","schemaName","id"],"type":"object"}},"databricks:index/getGroup:getGroup":{"description":"Retrieves information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAdding user to administrative group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst me = new databricks.User(\"me\", {userName: \"me@example.com\"});\nconst myMemberA = new databricks.GroupMember(\"my_member_a\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: me.id,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadmins = databricks.get_group(display_name=\"admins\")\nme = databricks.User(\"me\", user_name=\"me@example.com\")\nmy_member_a = databricks.GroupMember(\"my_member_a\",\n    group_id=admins.id,\n    member_id=me.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var me = new Databricks.User(\"me\", new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var myMemberA = new Databricks.GroupMember(\"my_member_a\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = me.Id,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tme, err := databricks.NewUser(ctx, \"me\", \u0026databricks.UserArgs{\n\t\t\tUserName: pulumi.String(\"me@example.com\"),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"my_member_a\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: me.ID(),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.User;\nimport com.pulumi.databricks.UserArgs;\nimport 
com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        var me = new User(\"me\", UserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var myMemberA = new GroupMember(\"myMemberA\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(me.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  me:\n    type: databricks:User\n    properties:\n      userName: me@example.com\n  myMemberA:\n    type: databricks:GroupMember\n    name: my_member_a\n    properties:\n      groupId: ${admins.id}\n      memberId: ${me.id}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Directory \" pulumi-lang-dotnet=\" databricks.Directory \" pulumi-lang-go=\" Directory \" pulumi-lang-python=\" Directory \" pulumi-lang-yaml=\" databricks.Directory \" pulumi-lang-java=\" databricks.Directory \"\u003e databricks.Directory \u003c/span\u003eto manage directories in [Databricks Workpace](https://docs.databricks.com/workspace/workspace-objects.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" 
pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n","inputs":{"description":"A collection of arguments for invoking getGroup.\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `groups/Some Group`.\n"},"allowClusterCreate":{"type":"boolean","description":"True if group members can create clusters\n","willReplaceOnChanges":true},"allowInstancePoolCreate":{"type":"boolean","description":"True if group members can create instance pools\n","willReplaceOnChanges":true},"childGroups":{"type":"array","items":{"type":"string"},"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n"},"databricksSqlAccess":{"type":"boolean","willReplaceOnChanges":true},"displayName":{"type":"string","description":"Display name of the group. The group must exist before this resource can be planned.\n","willReplaceOnChanges":true},"externalId":{"type":"string","description":"ID of the group in an external identity provider.\n"},"groups":{"type":"array","items":{"type":"string"},"description":"Set of group identifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n"},"instanceProfiles":{"type":"array","items":{"type":"string"},"description":"Set of instance profile ARNs, that can be modified by\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eresource.\n"},"members":{"type":"array","items":{"type":"string"},"deprecationMessage":"Please use \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`servicePrincipals`\" pulumi-lang-dotnet=\"`ServicePrincipals`\" pulumi-lang-go=\"`servicePrincipals`\" pulumi-lang-python=\"`service_principals`\" pulumi-lang-yaml=\"`servicePrincipals`\" pulumi-lang-java=\"`servicePrincipals`\"\u003e`service_principals`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`childGroups`\" pulumi-lang-dotnet=\"`ChildGroups`\" pulumi-lang-go=\"`childGroups`\" pulumi-lang-python=\"`child_groups`\" pulumi-lang-yaml=\"`childGroups`\" pulumi-lang-java=\"`childGroups`\"\u003e`child_groups`\u003c/span\u003e 
instead"},"providerConfig":{"$ref":"#/types/databricks:index/getGroupProviderConfig:getGroupProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"recursive":{"type":"boolean","description":"Collect information for all nested groups. *Defaults to true.*\n","willReplaceOnChanges":true},"servicePrincipals":{"type":"array","items":{"type":"string"},"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n"},"users":{"type":"array","items":{"type":"string"},"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n"},"workspaceAccess":{"type":"boolean","willReplaceOnChanges":true},"workspaceConsume":{"type":"boolean","willReplaceOnChanges":true}},"type":"object","required":["displayName"]},"outputs":{"description":"A collection of values returned by getGroup.\n","properties":{"aclPrincipalId":{"description":"identifier for use in databricks_access_control_rule_set, e.g. 
`groups/Some Group`.\n","type":"string"},"allowClusterCreate":{"description":"True if group members can create clusters\n","type":"boolean"},"allowInstancePoolCreate":{"description":"True if group members can create instance pools\n","type":"boolean"},"childGroups":{"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n","items":{"type":"string"},"type":"array"},"databricksSqlAccess":{"type":"boolean"},"displayName":{"type":"string"},"externalId":{"description":"ID of the group in an external identity provider.\n","type":"string"},"groups":{"description":"Set of group identifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n","items":{"type":"string"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"instanceProfiles":{"description":"Set of instance profile ARNs, that can be modified by\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eresource.\n","items":{"type":"string"},"type":"array"},"members":{"deprecationMessage":"Please use \u003cspan pulumi-lang-nodejs=\"`users`\" pulumi-lang-dotnet=\"`Users`\" pulumi-lang-go=\"`users`\" pulumi-lang-python=\"`users`\" pulumi-lang-yaml=\"`users`\" pulumi-lang-java=\"`users`\"\u003e`users`\u003c/span\u003e, \u003cspan pulumi-lang-nodejs=\"`servicePrincipals`\" pulumi-lang-dotnet=\"`ServicePrincipals`\" pulumi-lang-go=\"`servicePrincipals`\" pulumi-lang-python=\"`service_principals`\" pulumi-lang-yaml=\"`servicePrincipals`\" pulumi-lang-java=\"`servicePrincipals`\"\u003e`service_principals`\u003c/span\u003e, and \u003cspan pulumi-lang-nodejs=\"`childGroups`\" pulumi-lang-dotnet=\"`ChildGroups`\" pulumi-lang-go=\"`childGroups`\" pulumi-lang-python=\"`child_groups`\" pulumi-lang-yaml=\"`childGroups`\" pulumi-lang-java=\"`childGroups`\"\u003e`child_groups`\u003c/span\u003e instead","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getGroupProviderConfig:getGroupProviderConfig"},"recursive":{"type":"boolean"},"servicePrincipals":{"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" 
databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n","items":{"type":"string"},"type":"array"},"users":{"description":"Set of\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eidentifiers, that can be modified with\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eresource.\n","items":{"type":"string"},"type":"array"},"workspaceAccess":{"type":"boolean"},"workspaceConsume":{"type":"boolean"}},"required":["aclPrincipalId","childGroups","displayName","externalId","groups","instanceProfiles","members","servicePrincipals","users","id"],"type":"object"}},"databricks:index/getInstancePool:getInstancePool":{"description":"Retrieves information about databricks_instance_pool.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nReferring to an instance pool by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst pool = databricks.getInstancePool({\n    name: \"All spot\",\n});\nconst myCluster = new databricks.Cluster(\"my_cluster\", {instancePoolId: pool.then(pool =\u003e pool.id)});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npool = databricks.get_instance_pool(name=\"All spot\")\nmy_cluster = databricks.Cluster(\"my_cluster\", instance_pool_id=pool.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var pool = Databricks.GetInstancePool.Invoke(new()\n    {\n        Name = \"All spot\",\n    });\n\n    var myCluster = new Databricks.Cluster(\"my_cluster\", new()\n    {\n        InstancePoolId = pool.Apply(getInstancePoolResult =\u003e getInstancePoolResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tpool, err := databricks.LookupInstancePool(ctx, \u0026databricks.LookupInstancePoolArgs{\n\t\t\tName: \"All spot\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"my_cluster\", \u0026databricks.ClusterArgs{\n\t\t\tInstancePoolId: pulumi.String(pool.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport 
com.pulumi.databricks.inputs.GetInstancePoolArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var pool = DatabricksFunctions.getInstancePool(GetInstancePoolArgs.builder()\n            .name(\"All spot\")\n            .build());\n\n        var myCluster = new Cluster(\"myCluster\", ClusterArgs.builder()\n            .instancePoolId(pool.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myCluster:\n    type: databricks:Cluster\n    name: my_cluster\n    properties:\n      instancePoolId: ${pool.id}\nvariables:\n  pool:\n    fn::invoke:\n      function: databricks:getInstancePool\n      arguments:\n        name: All spot\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getInstancePool.\n","properties":{"name":{"type":"string","description":"Name of the instance pool. The instance pool must exist before this resource can be planned.\n","willReplaceOnChanges":true},"poolInfo":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfo:getInstancePoolPoolInfo","description":"block describing instance pool and its state. Check documentation for\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003efor a list of exposed attributes.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getInstancePoolProviderConfig:getInstancePoolProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getInstancePool.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"type":"string"},"poolInfo":{"$ref":"#/types/databricks:index/getInstancePoolPoolInfo:getInstancePoolPoolInfo","description":"block describing instance pool and its state. 
Check documentation for\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003efor a list of exposed attributes.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getInstancePoolProviderConfig:getInstancePoolProviderConfig"}},"required":["name","poolInfo","id"],"type":"object"}},"databricks:index/getInstanceProfiles:getInstanceProfiles":{"description":"Lists all available databricks_instance_profiles.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGet all instance profiles:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getInstanceProfiles({});\nexport const allInstanceProfiles = all.then(all =\u003e all.instanceProfiles);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_instance_profiles()\npulumi.export(\"allInstanceProfiles\", all.instance_profiles)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetInstanceProfiles.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allInstanceProfiles\"] = all.Apply(getInstanceProfilesResult =\u003e getInstanceProfilesResult.InstanceProfiles),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetInstanceProfiles(ctx, \u0026databricks.GetInstanceProfilesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allInstanceProfiles\", all.InstanceProfiles)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetInstanceProfilesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getInstanceProfiles(GetInstanceProfilesArgs.builder()\n            .build());\n\n        ctx.export(\"allInstanceProfiles\", all.instanceProfiles());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getInstanceProfiles\n      arguments: {}\noutputs:\n  allInstanceProfiles: ${all.instanceProfiles}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getInstanceProfiles.\n","properties":{"instanceProfiles":{"type":"array","items":{"$ref":"#/types/databricks:index/getInstanceProfilesInstanceProfile:getInstanceProfilesInstanceProfile"},"description":"Set of objects for a databricks_instance_profile. 
This contains the following attributes:\n"},"providerConfig":{"$ref":"#/types/databricks:index/getInstanceProfilesProviderConfig:getInstanceProfilesProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getInstanceProfiles.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"instanceProfiles":{"description":"Set of objects for a databricks_instance_profile. This contains the following attributes:\n","items":{"$ref":"#/types/databricks:index/getInstanceProfilesInstanceProfile:getInstanceProfilesInstanceProfile"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getInstanceProfilesProviderConfig:getInstanceProfilesProviderConfig"}},"required":["instanceProfiles","id"],"type":"object"}},"databricks:index/getJob:getJob":{"description":"Retrieves the settings of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eby name or by id. Complements the feature of the\u003cspan pulumi-lang-nodejs=\" databricks.getJobs \" pulumi-lang-dotnet=\" databricks.getJobs \" pulumi-lang-go=\" getJobs \" pulumi-lang-python=\" get_jobs \" pulumi-lang-yaml=\" databricks.getJobs \" pulumi-lang-java=\" databricks.getJobs \"\u003e databricks.getJobs \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting the existing cluster id of specific\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eby name or by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getJob({\n    jobName: \"My job\",\n});\nexport const jobNumWorkers = _this.then(_this =\u003e _this.jobSettings?.settings?.newCluster?.numWorkers);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_job(job_name=\"My job\")\npulumi.export(\"jobNumWorkers\", this.job_settings.settings.new_cluster.num_workers)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetJob.Invoke(new()\n    {\n        JobName = \"My job\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"jobNumWorkers\"] = @this.Apply(@this =\u003e @this.Apply(getJobResult =\u003e getJobResult.JobSettings?.Settings?.NewCluster?.NumWorkers)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupJob(ctx, \u0026databricks.LookupJobArgs{\n\t\t\tJobName: pulumi.StringRef(\"My job\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"jobNumWorkers\", 
this.JobSettings.Settings.NewCluster.NumWorkers)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetJobArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getJob(GetJobArgs.builder()\n            .jobName(\"My job\")\n            .build());\n\n        ctx.export(\"jobNumWorkers\", this_.jobSettings().settings().newCluster().numWorkers());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getJob\n      arguments:\n        jobName: My job\noutputs:\n  jobNumWorkers: ${this.jobSettings.settings.newCluster.numWorkers}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getJobs \" pulumi-lang-dotnet=\" databricks.getJobs \" pulumi-lang-go=\" getJobs \" pulumi-lang-python=\" get_jobs \" pulumi-lang-yaml=\" databricks.getJobs \" pulumi-lang-java=\" databricks.getJobs \"\u003e databricks.getJobs \u003c/span\u003edata to get all jobs and their names from a workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n","inputs":{"description":"A collection of arguments for invoking getJob.\n","properties":{"id":{"type":"string","description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n"},"jobId":{"type":"string"},"jobName":{"type":"string"},"jobSettings":{"$ref":"#/types/databricks:index/getJobJobSettings:getJobJobSettings","description":"the same fields as in databricks_job.\n"},"name":{"type":"string","description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by id.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getJobProviderConfig:getJobProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getJob.\n","properties":{"id":{"description":"the id of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by name.\n","type":"string"},"jobId":{"type":"string"},"jobName":{"type":"string"},"jobSettings":{"$ref":"#/types/databricks:index/getJobJobSettings:getJobJobSettings","description":"the same fields as in databricks_job.\n"},"name":{"description":"the job name of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eif the resource was matched by id.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getJobProviderConfig:getJobProviderConfig"}},"required":["id","jobId","jobName","jobSettings","name"],"type":"object"}},"databricks:index/getJobs:getJobs":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eids, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n\u003e By default, this data resource will error in case of jobs with duplicate names. 
To support duplicate names, set `key = \"id\"` to map jobs by ID.\n\n## Example Usage\n\nGranting view\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto all\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003ewithin the workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const _this = await databricks.getJobs({});\n    const everyoneCanViewAllJobs: databricks.Permissions[] = [];\n    for (const range of Object.entries(_this.ids).map(([k, v]) =\u003e ({key: k, value: v}))) {\n        everyoneCanViewAllJobs.push(new databricks.Permissions(`everyone_can_view_all_jobs-${range.key}`, {\n            jobId: range.value,\n            accessControls: [{\n                groupName: \"users\",\n                permissionLevel: \"CAN_VIEW\",\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_jobs()\neveryone_can_view_all_jobs = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(this.ids)]:\n    everyone_can_view_all_jobs.append(databricks.Permissions(f\"everyone_can_view_all_jobs-{range['key']}\",\n        job_id=range[\"value\"],\n        access_controls=[{\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_VIEW\",\n        }]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var @this = await Databricks.GetJobs.InvokeAsync();\n\n    var everyoneCanViewAllJobs = new List\u003cDatabricks.Permissions\u003e();\n    foreach (var range in )\n    {\n        everyoneCanViewAllJobs.Add(new Databricks.Permissions($\"everyone_can_view_all_jobs-{range.Key}\", new()\n        {\n            JobId = range.Value,\n            AccessControls = new[]\n            {\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = \"users\",\n                    PermissionLevel = \"CAN_VIEW\",\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetJobs(ctx, \u0026databricks.GetJobsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar everyoneCanViewAllJobs []*databricks.Permissions\n\t\tfor key0, val0 := range this.Ids {\n\t\t\t__res, err := databricks.NewPermissions(ctx, fmt.Sprintf(\"everyone_can_view_all_jobs-%v\", key0), \u0026databricks.PermissionsArgs{\n\t\t\t\tJobId: pulumi.String(val0),\n\t\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       
pulumi.String(\"users\"),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\teveryoneCanViewAllJobs = append(everyoneCanViewAllJobs, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetJobsArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getJobs(GetJobsArgs.builder()\n            .build());\n\n        final var everyoneCanViewAllJobs = this.applyValue(getJobsResult -\u003e {\n            final var resources = new ArrayList\u003cPermissions\u003e();\n            for (var range : KeyedValue.of(getJobsResult.ids())) {\n                var resource = new Permissions(\"everyoneCanViewAllJobs-\" + range.key(), PermissionsArgs.builder()\n                    .jobId(range.value())\n                    .accessControls(PermissionsAccessControlArgs.builder()\n                        .groupName(\"users\")\n                        .permissionLevel(\"CAN_VIEW\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  everyoneCanViewAllJobs:\n    type: databricks:Permissions\n    name: everyone_can_view_all_jobs\n    properties:\n      jobId: ${range.value}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n    options: {}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getJobs\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nGetting ID of specific\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eby name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getJobs({\n    jobNameContains: \"test\",\n});\nexport const x = _this.then(_this =\u003e `ID of `x` job is ${_this.ids?.x}`);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_jobs(job_name_contains=\"test\")\npulumi.export(\"x\", f\"ID of `x` job is {this.ids['x']}\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetJobs.Invoke(new()\n    {\n        JobNameContains = \"test\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"x\"] = @this.Apply(@this =\u003e $\"ID of `x` job is {@this.Apply(getJobsResult =\u003e 
getJobsResult.Ids?.X)}\"),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetJobs(ctx, \u0026databricks.GetJobsArgs{\n\t\t\tJobNameContains: pulumi.StringRef(\"test\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"x\", pulumi.Sprintf(\"ID of `x` job is %v\", this.Ids.X))\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetJobsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getJobs(GetJobsArgs.builder()\n            .jobNameContains(\"test\")\n            .build());\n\n        ctx.export(\"x\", String.format(\"ID of `x` job is %s\", this_.ids().x()));\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getJobs\n      arguments:\n        jobNameContains: test\noutputs:\n  x: ID of `x` job is ${this.ids.x}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nGetting IDs of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003emapped by ID, allowing duplicate job names:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const _this = await databricks.getJobs({\n        key: \"id\",\n    });\n    const everyoneCanViewAllJobs: databricks.Permissions[] = [];\n    for (const range of Object.entries(_this.ids).map(([k, v]) =\u003e ({key: k, value: v}))) {\n        everyoneCanViewAllJobs.push(new databricks.Permissions(`everyone_can_view_all_jobs-${range.key}`, {\n            jobId: range.value,\n            accessControls: [{\n                groupName: \"users\",\n                permissionLevel: \"CAN_VIEW\",\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_jobs(key=\"id\")\neveryone_can_view_all_jobs = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(this.ids)]:\n    everyone_can_view_all_jobs.append(databricks.Permissions(f\"everyone_can_view_all_jobs-{range['key']}\",\n        job_id=range[\"value\"],\n        access_controls=[{\n            \"group_name\": \"users\",\n            \"permission_level\": \"CAN_VIEW\",\n        }]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var @this = await Databricks.GetJobs.InvokeAsync(new()\n    {\n        Key = \"id\",\n    });\n\n    var everyoneCanViewAllJobs = new List\u003cDatabricks.Permissions\u003e();\n    foreach (var range in )\n    {\n        
everyoneCanViewAllJobs.Add(new Databricks.Permissions($\"everyone_can_view_all_jobs-{range.Key}\", new()\n        {\n            JobId = range.Value,\n            AccessControls = new[]\n            {\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = \"users\",\n                    PermissionLevel = \"CAN_VIEW\",\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetJobs(ctx, \u0026databricks.GetJobsArgs{\n\t\t\tKey: pulumi.StringRef(\"id\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar everyoneCanViewAllJobs []*databricks.Permissions\n\t\tfor key0, val0 := range this.Ids {\n\t\t\t__res, err := databricks.NewPermissions(ctx, fmt.Sprintf(\"everyone_can_view_all_jobs-%v\", key0), \u0026databricks.PermissionsArgs{\n\t\t\t\tJobId: pulumi.String(val0),\n\t\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\teveryoneCanViewAllJobs = append(everyoneCanViewAllJobs, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetJobsArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getJobs(GetJobsArgs.builder()\n            .key(\"id\")\n            .build());\n\n        final var everyoneCanViewAllJobs = this.applyValue(getJobsResult -\u003e {\n            final var resources = new ArrayList\u003cPermissions\u003e();\n            for (var range : KeyedValue.of(getJobsResult.ids())) {\n                var resource = new Permissions(\"everyoneCanViewAllJobs-\" + range.key(), PermissionsArgs.builder()\n                    .jobId(range.value())\n                    .accessControls(PermissionsAccessControlArgs.builder()\n                        .groupName(\"users\")\n                        .permissionLevel(\"CAN_VIEW\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  everyoneCanViewAllJobs:\n    type: databricks:Permissions\n    name: everyone_can_view_all_jobs\n    properties:\n      jobId: ${range.value}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n    options: {}\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getJobs\n      arguments:\n        
key: id\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n","inputs":{"description":"A collection of arguments for invoking getJobs.\n","properties":{"ids":{"type":"object","additionalProperties":{"type":"string"},"description":"map of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003enames to ids\n"},"jobNameContains":{"type":"string","description":"Only return\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eids that match the given name string (case-insensitive).\n","willReplaceOnChanges":true},"key":{"type":"string","description":"Attribute to use for keys in the returned map of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eids by. Possible values are \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e (default) or \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e. Setting to \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e uses the job ID as the map key, allowing duplicate job names.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getJobsProviderConfig:getJobsProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getJobs.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"additionalProperties":{"type":"string"},"description":"map of\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003enames to ids\n","type":"object"},"jobNameContains":{"type":"string"},"key":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getJobsProviderConfig:getJobsProviderConfig"}},"required":["ids","id"],"type":"object"}},"databricks:index/getMaterializedFeaturesFeatureTag:getMaterializedFeaturesFeatureTag":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getMaterializedFeaturesFeatureTag.\n","properties":{"key":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagProviderConfig:getMaterializedFeaturesFeatureTagProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["key"]},"outputs":{"description":"A collection of values returned by getMaterializedFeaturesFeatureTag.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"key":{"description":"(string)\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagProviderConfig:getMaterializedFeaturesFeatureTagProviderConfig"},"value":{"description":"(string)\n","type":"string"}},"required":["key","value","id"],"type":"object"}},"databricks:index/getMaterializedFeaturesFeatureTags:getMaterializedFeaturesFeatureTags":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getMaterializedFeaturesFeatureTags.\n","properties":{"featureName":{"type":"string"},"pageSize":{"type":"integer","description":"The maximum number of results to return\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagsProviderConfig:getMaterializedFeaturesFeatureTagsProviderConfig","description":"Configure the provider for management through account provider.\n"},"tableName":{"type":"string"}},"type":"object","required":["featureName","tableName"]},"outputs":{"description":"A collection of values returned by getMaterializedFeaturesFeatureTags.\n","properties":{"featureName":{"type":"string"},"featureTags":{"items":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagsFeatureTag:getMaterializedFeaturesFeatureTagsFeatureTag"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed 
resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getMaterializedFeaturesFeatureTagsProviderConfig:getMaterializedFeaturesFeatureTagsProviderConfig"},"tableName":{"type":"string"}},"required":["featureName","featureTags","tableName","id"],"type":"object"}},"databricks:index/getMetastore:getMetastore":{"description":"Retrieves information about metastore for a given id of\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eobject, that was created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with an account-level provider!\n\n## Example Usage\n\nMetastoreInfo response for a given metastore id\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as aws from \"@pulumi/aws\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst metastore = new aws.index.S3Bucket(\"metastore\", {\n    bucket: `${prefix}-metastore`,\n    forceDestroy: true,\n});\nconst thisMetastore = new databricks.Metastore(\"this\", {\n    name: \"primary\",\n    storageRoot: `s3://${metastore.id}/metastore`,\n    owner: unityAdminGroup,\n    forceDestroy: true,\n});\nconst _this = databricks.getMetastoreOutput({\n    metastoreId: thisMetastore.id,\n});\nexport const someMetastore = _this.apply(_this =\u003e _this.metastoreInfo);\n```\n```python\nimport pulumi\nimport pulumi_aws as aws\nimport pulumi_databricks as databricks\n\nmetastore = aws.index.S3Bucket(\"metastore\",\n    bucket=f{prefix}-metastore,\n    force_destroy=True)\nthis_metastore = databricks.Metastore(\"this\",\n    name=\"primary\",\n    storage_root=f\"s3://{metastore['id']}/metastore\",\n    owner=unity_admin_group,\n    force_destroy=True)\nthis = databricks.get_metastore_output(metastore_id=this_metastore.id)\npulumi.export(\"someMetastore\", this.metastore_info)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Aws = Pulumi.Aws;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var metastore = new Aws.Index.S3Bucket(\"metastore\", new()\n    {\n        Bucket = $\"{prefix}-metastore\",\n        ForceDestroy = true,\n    });\n\n    var thisMetastore = new Databricks.Metastore(\"this\", new()\n    {\n        Name = \"primary\",\n        StorageRoot = $\"s3://{metastore.Id}/metastore\",\n        Owner = unityAdminGroup,\n        ForceDestroy = true,\n    });\n\n    var @this = Databricks.GetMetastore.Invoke(new()\n    {\n        MetastoreId = thisMetastore.Id,\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"someMetastore\"] = @this.Apply(@this =\u003e @this.Apply(getMetastoreResult =\u003e getMetastoreResult.MetastoreInfo)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-aws/sdk/v7/go/aws\"\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tmetastore, err := aws.NewS3Bucket(ctx, \"metastore\", \u0026aws.S3BucketArgs{\n\t\t\tBucket:       fmt.Sprintf(\"%v-metastore\", prefix),\n\t\t\tForceDestroy: true,\n\t\t})\n\t\tif err != 
nil {\n\t\t\treturn err\n\t\t}\n\t\tthisMetastore, err := databricks.NewMetastore(ctx, \"this\", \u0026databricks.MetastoreArgs{\n\t\t\tName:         pulumi.String(\"primary\"),\n\t\t\tStorageRoot:  pulumi.Sprintf(\"s3://%v/metastore\", metastore.Id),\n\t\t\tOwner:        pulumi.Any(unityAdminGroup),\n\t\t\tForceDestroy: pulumi.Bool(true),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis := databricks.LookupMetastoreOutput(ctx, databricks.GetMetastoreOutputArgs{\n\t\t\tMetastoreId: thisMetastore.ID(),\n\t\t}, nil)\n\t\tctx.Export(\"someMetastore\", this.ApplyT(func(this databricks.GetMetastoreResult) (databricks.GetMetastoreMetastoreInfo, error) {\n\t\t\treturn this.MetastoreInfo, nil\n\t\t}).(databricks.GetMetastoreMetastoreInfoOutput))\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.aws.S3Bucket;\nimport com.pulumi.aws.S3BucketArgs;\nimport com.pulumi.databricks.Metastore;\nimport com.pulumi.databricks.MetastoreArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMetastoreArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var metastore = new S3Bucket(\"metastore\", S3BucketArgs.builder()\n            .bucket(String.format(\"%s-metastore\", prefix))\n            .forceDestroy(true)\n            .build());\n\n        var thisMetastore = new Metastore(\"thisMetastore\", MetastoreArgs.builder()\n            .name(\"primary\")\n            .storageRoot(String.format(\"s3://%s/metastore\", metastore.id()))\n            .owner(unityAdminGroup)\n            .forceDestroy(true)\n            .build());\n\n        final var this = DatabricksFunctions.getMetastore(GetMetastoreArgs.builder()\n            .metastoreId(thisMetastore.id())\n            .build());\n\n        ctx.export(\"someMetastore\", this_.applyValue(_this_ -\u003e _this_.metastoreInfo()));\n    }\n}\n```\n```yaml\nresources:\n  metastore:\n    type: aws:S3Bucket\n    properties:\n      bucket: ${prefix}-metastore\n      forceDestroy: true\n  thisMetastore:\n    type: databricks:Metastore\n    name: this\n    properties:\n      name: primary\n      storageRoot: s3://${metastore.id}/metastore\n      owner: ${unityAdminGroup}\n      forceDestroy: true\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMetastore\n      arguments:\n        metastoreId: ${thisMetastore.id}\noutputs:\n  someMetastore: ${this.metastoreInfo}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getMetastores \" pulumi-lang-dotnet=\" databricks.getMetastores \" pulumi-lang-go=\" getMetastores \" pulumi-lang-python=\" get_metastores \" pulumi-lang-yaml=\" databricks.getMetastores \" pulumi-lang-java=\" databricks.getMetastores \"\u003e databricks.getMetastores \u003c/span\u003eto get mapping of name to id of all metastores.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" 
databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto manage Metastores within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getMetastore.\n","properties":{"id":{"type":"string","description":"ID of the metastore\n"},"metastoreId":{"type":"string","description":"ID of the metastore\n"},"metastoreInfo":{"$ref":"#/types/databricks:index/getMetastoreMetastoreInfo:getMetastoreMetastoreInfo","description":"MetastoreInfo object for a databricks_metastore. This contains the following attributes:\n"},"name":{"type":"string","description":"Name of the metastore\n"},"region":{"type":"string","description":"Region of the metastore\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getMetastore.\n","properties":{"id":{"description":"ID of the metastore\n","type":"string"},"metastoreId":{"description":"Unique identifier of the metastore.\n","type":"string"},"metastoreInfo":{"$ref":"#/types/databricks:index/getMetastoreMetastoreInfo:getMetastoreMetastoreInfo","description":"MetastoreInfo object for a databricks_metastore. This contains the following attributes:\n"},"name":{"description":"Name of metastore.\n","type":"string"},"region":{"description":"Cloud region which the metastore serves (e.g., `us-west-2`, \u003cspan pulumi-lang-nodejs=\"`westus`\" pulumi-lang-dotnet=\"`Westus`\" pulumi-lang-go=\"`westus`\" pulumi-lang-python=\"`westus`\" pulumi-lang-yaml=\"`westus`\" pulumi-lang-java=\"`westus`\"\u003e`westus`\u003c/span\u003e).\n","type":"string"}},"required":["id","metastoreId","metastoreInfo","name","region"],"type":"object"}},"databricks:index/getMetastores:getMetastores":{"description":"Retrieves a mapping of name to id of\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eobjects, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with an account-level provider!\n\n\u003e Data resource will error in case of metastores with duplicate names.\n\n## Example Usage\n\nMapping of name to id of all metastores:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getMetastores({});\nexport const allMetastores = all.then(all =\u003e all.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_metastores()\npulumi.export(\"allMetastores\", all.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetMetastores.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allMetastores\"] = all.Apply(getMetastoresResult =\u003e getMetastoresResult.Ids),\n    };\n});\n```\n```go\npackage main\n\nimport 
(\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetMetastores(ctx, \u0026databricks.GetMetastoresArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allMetastores\", all.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMetastoresArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getMetastores(GetMetastoresArgs.builder()\n            .build());\n\n        ctx.export(\"allMetastores\", all.ids());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getMetastores\n      arguments: {}\noutputs:\n  allMetastores: ${all.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto get information about a single metastore.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto manage Metastores within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getMetastores.\n","properties":{"ids":{"type":"object","additionalProperties":{"type":"string"},"description":"Mapping of name to id of databricks_metastore\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getMetastores.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"additionalProperties":{"type":"string"},"description":"Mapping of name to id of databricks_metastore\n","type":"object"}},"required":["ids","id"],"type":"object"}},"databricks:index/getMlflowExperiment:getMlflowExperiment":{"description":"Retrieves the settings of\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eby id or name.\n\n\u003e This data source can only be used with a workspace-level 
provider!\n\n","inputs":{"description":"A collection of arguments for invoking getMlflowExperiment.\n","properties":{"artifactLocation":{"type":"string","description":"Location where artifacts for the experiment are stored.\n"},"creationTime":{"type":"integer","description":"Creation time in unix time stamp.\n"},"experimentId":{"type":"string","description":"Unique identifier for the experiment.\n"},"id":{"type":"string","description":"Unique identifier for the experiment. (same as \u003cspan pulumi-lang-nodejs=\"`experimentId`\" pulumi-lang-dotnet=\"`ExperimentId`\" pulumi-lang-go=\"`experimentId`\" pulumi-lang-python=\"`experiment_id`\" pulumi-lang-yaml=\"`experimentId`\" pulumi-lang-java=\"`experimentId`\"\u003e`experiment_id`\u003c/span\u003e)\n"},"lastUpdateTime":{"type":"integer","description":"Last update time in unix time stamp.\n"},"lifecycleStage":{"type":"string","description":"Current life cycle stage of the experiment: \u003cspan pulumi-lang-nodejs=\"`active`\" pulumi-lang-dotnet=\"`Active`\" pulumi-lang-go=\"`active`\" pulumi-lang-python=\"`active`\" pulumi-lang-yaml=\"`active`\" pulumi-lang-java=\"`active`\"\u003e`active`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`deleted`\" pulumi-lang-dotnet=\"`Deleted`\" pulumi-lang-go=\"`deleted`\" pulumi-lang-python=\"`deleted`\" pulumi-lang-yaml=\"`deleted`\" pulumi-lang-java=\"`deleted`\"\u003e`deleted`\u003c/span\u003e.\n"},"name":{"type":"string","description":"Path to experiment.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowExperimentProviderConfig:getMlflowExperimentProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/getMlflowExperimentTag:getMlflowExperimentTag"},"description":"Additional metadata key-value pairs.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getMlflowExperiment.\n","properties":{"artifactLocation":{"description":"Location where artifacts for the experiment are stored.\n","type":"string"},"creationTime":{"description":"Creation time in unix time stamp.\n","type":"integer"},"experimentId":{"description":"Unique identifier for the experiment. (same as \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e)\n","type":"string"},"id":{"description":"Unique identifier for the experiment. 
(same as \u003cspan pulumi-lang-nodejs=\"`experimentId`\" pulumi-lang-dotnet=\"`ExperimentId`\" pulumi-lang-go=\"`experimentId`\" pulumi-lang-python=\"`experiment_id`\" pulumi-lang-yaml=\"`experimentId`\" pulumi-lang-java=\"`experimentId`\"\u003e`experiment_id`\u003c/span\u003e)\n","type":"string"},"lastUpdateTime":{"description":"Last update time in unix time stamp.\n","type":"integer"},"lifecycleStage":{"description":"Current life cycle stage of the experiment: \u003cspan pulumi-lang-nodejs=\"`active`\" pulumi-lang-dotnet=\"`Active`\" pulumi-lang-go=\"`active`\" pulumi-lang-python=\"`active`\" pulumi-lang-yaml=\"`active`\" pulumi-lang-java=\"`active`\"\u003e`active`\u003c/span\u003e or \u003cspan pulumi-lang-nodejs=\"`deleted`\" pulumi-lang-dotnet=\"`Deleted`\" pulumi-lang-go=\"`deleted`\" pulumi-lang-python=\"`deleted`\" pulumi-lang-yaml=\"`deleted`\" pulumi-lang-java=\"`deleted`\"\u003e`deleted`\u003c/span\u003e.\n","type":"string"},"name":{"description":"Path to experiment.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowExperimentProviderConfig:getMlflowExperimentProviderConfig"},"tags":{"description":"Additional metadata key-value pairs.\n","items":{"$ref":"#/types/databricks:index/getMlflowExperimentTag:getMlflowExperimentTag"},"type":"array"}},"required":["artifactLocation","creationTime","experimentId","id","lastUpdateTime","lifecycleStage","name","tags"],"type":"object"}},"databricks:index/getMlflowModel:getMlflowModel":{"description":"Retrieves the settings of\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eby name.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst thisMlflowModel = new databricks.MlflowModel(\"this\", {\n    name: \"My MLflow Model\",\n    description: \"My MLflow model description\",\n    tags: [\n        {\n            key: \"key1\",\n            value: \"value1\",\n        },\n        {\n            key: \"key2\",\n            value: \"value2\",\n        },\n    ],\n});\nconst _this = databricks.getMlflowModel({\n    name: \"My MLflow Model\",\n});\nexport const model = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis_mlflow_model = databricks.MlflowModel(\"this\",\n    name=\"My MLflow Model\",\n    description=\"My MLflow model description\",\n    tags=[\n        {\n            \"key\": \"key1\",\n            \"value\": \"value1\",\n        },\n        {\n            \"key\": \"key2\",\n            \"value\": \"value2\",\n        },\n    ])\nthis = databricks.get_mlflow_model(name=\"My MLflow Model\")\npulumi.export(\"model\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var thisMlflowModel = new Databricks.MlflowModel(\"this\", new()\n    {\n        Name = \"My MLflow Model\",\n        Description = \"My MLflow model description\",\n        Tags = new[]\n        {\n            new Databricks.Inputs.MlflowModelTagArgs\n            {\n                Key = \"key1\",\n            
    Value = \"value1\",\n            },\n            new Databricks.Inputs.MlflowModelTagArgs\n            {\n                Key = \"key2\",\n                Value = \"value2\",\n            },\n        },\n    });\n\n    var @this = Databricks.GetMlflowModel.Invoke(new()\n    {\n        Name = \"My MLflow Model\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"model\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewMlflowModel(ctx, \"this\", \u0026databricks.MlflowModelArgs{\n\t\t\tName:        pulumi.String(\"My MLflow Model\"),\n\t\t\tDescription: pulumi.String(\"My MLflow model description\"),\n\t\t\tTags: databricks.MlflowModelTagArray{\n\t\t\t\t\u0026databricks.MlflowModelTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key1\"),\n\t\t\t\t\tValue: pulumi.String(\"value1\"),\n\t\t\t\t},\n\t\t\t\t\u0026databricks.MlflowModelTagArgs{\n\t\t\t\t\tKey:   pulumi.String(\"key2\"),\n\t\t\t\t\tValue: pulumi.String(\"value2\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tthis, err := databricks.LookupMlflowModel(ctx, \u0026databricks.LookupMlflowModelArgs{\n\t\t\tName: \"My MLflow Model\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"model\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.MlflowModel;\nimport com.pulumi.databricks.MlflowModelArgs;\nimport com.pulumi.databricks.inputs.MlflowModelTagArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMlflowModelArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var thisMlflowModel = new MlflowModel(\"thisMlflowModel\", MlflowModelArgs.builder()\n            .name(\"My MLflow Model\")\n            .description(\"My MLflow model description\")\n            .tags(            \n                MlflowModelTagArgs.builder()\n                    .key(\"key1\")\n                    .value(\"value1\")\n                    .build(),\n                MlflowModelTagArgs.builder()\n                    .key(\"key2\")\n                    .value(\"value2\")\n                    .build())\n            .build());\n\n        final var this = DatabricksFunctions.getMlflowModel(GetMlflowModelArgs.builder()\n            .name(\"My MLflow Model\")\n            .build());\n\n        ctx.export(\"model\", this_);\n    }\n}\n```\n```yaml\nresources:\n  thisMlflowModel:\n    type: databricks:MlflowModel\n    name: this\n    properties:\n      name: My MLflow Model\n      description: My MLflow model description\n      tags:\n        - key: key1\n          value: value1\n        - key: key2\n          value: value2\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMlflowModel\n      arguments:\n        name: My MLflow Model\noutputs:\n  model: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from 
\"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getMlflowModel({\n    name: \"My MLflow Model with multiple versions\",\n});\nconst thisModelServing = new databricks.ModelServing(\"this\", {\n    name: \"model-serving-endpoint\",\n    config: {\n        servedModels: [{\n            name: \"model_serving_prod\",\n            modelName: _this.then(_this =\u003e _this.name),\n            modelVersion: _this.then(_this =\u003e _this.latestVersions?.[0]?.version),\n            workloadSize: \"Small\",\n            scaleToZeroEnabled: true,\n        }],\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_mlflow_model(name=\"My MLflow Model with multiple versions\")\nthis_model_serving = databricks.ModelServing(\"this\",\n    name=\"model-serving-endpoint\",\n    config={\n        \"served_models\": [{\n            \"name\": \"model_serving_prod\",\n            \"model_name\": this.name,\n            \"model_version\": this.latest_versions[0].version,\n            \"workload_size\": \"Small\",\n            \"scale_to_zero_enabled\": True,\n        }],\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetMlflowModel.Invoke(new()\n    {\n        Name = \"My MLflow Model with multiple versions\",\n    });\n\n    var thisModelServing = new Databricks.ModelServing(\"this\", new()\n    {\n        Name = \"model-serving-endpoint\",\n        Config = new Databricks.Inputs.ModelServingConfigArgs\n        {\n            ServedModels = new[]\n            {\n                new Databricks.Inputs.ModelServingConfigServedModelArgs\n                {\n                    Name = \"model_serving_prod\",\n                    ModelName = @this.Apply(@this =\u003e @this.Apply(getMlflowModelResult =\u003e getMlflowModelResult.Name)),\n                    ModelVersion = @this.Apply(@this =\u003e @this.Apply(getMlflowModelResult =\u003e getMlflowModelResult.LatestVersions[0]?.Version)),\n                    WorkloadSize = \"Small\",\n                    ScaleToZeroEnabled = true,\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupMlflowModel(ctx, \u0026databricks.LookupMlflowModelArgs{\n\t\t\tName: \"My MLflow Model with multiple versions\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewModelServing(ctx, \"this\", \u0026databricks.ModelServingArgs{\n\t\t\tName: pulumi.String(\"model-serving-endpoint\"),\n\t\t\tConfig: \u0026databricks.ModelServingConfigArgs{\n\t\t\t\tServedModels: databricks.ModelServingConfigServedModelArray{\n\t\t\t\t\t\u0026databricks.ModelServingConfigServedModelArgs{\n\t\t\t\t\t\tName:               pulumi.String(\"model_serving_prod\"),\n\t\t\t\t\t\tModelName:          pulumi.String(this.Name),\n\t\t\t\t\t\tModelVersion:       pulumi.String(this.LatestVersions[0].Version),\n\t\t\t\t\t\tWorkloadSize:       pulumi.String(\"Small\"),\n\t\t\t\t\t\tScaleToZeroEnabled: pulumi.Bool(true),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage 
generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMlflowModelArgs;\nimport com.pulumi.databricks.ModelServing;\nimport com.pulumi.databricks.ModelServingArgs;\nimport com.pulumi.databricks.inputs.ModelServingConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getMlflowModel(GetMlflowModelArgs.builder()\n            .name(\"My MLflow Model with multiple versions\")\n            .build());\n\n        var thisModelServing = new ModelServing(\"thisModelServing\", ModelServingArgs.builder()\n            .name(\"model-serving-endpoint\")\n            .config(ModelServingConfigArgs.builder()\n                .servedModels(ModelServingConfigServedModelArgs.builder()\n                    .name(\"model_serving_prod\")\n                    .modelName(this_.name())\n                    .modelVersion(this_.latestVersions()[0].version())\n                    .workloadSize(\"Small\")\n                    .scaleToZeroEnabled(true)\n                    .build())\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  thisModelServing:\n    type: databricks:ModelServing\n    name: this\n    properties:\n      name: model-serving-endpoint\n      config:\n        servedModels:\n          - name: model_serving_prod\n            modelName: ${this.name}\n            modelVersion: ${this.latestVersions[0].version}\n            workloadSize: Small\n            scaleToZeroEnabled: true\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMlflowModel\n      arguments:\n        name: My MLflow Model with multiple versions\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getMlflowModel.\n","properties":{"description":{"type":"string","description":"User-specified description for the object.\n"},"latestVersions":{"type":"array","items":{"$ref":"#/types/databricks:index/getMlflowModelLatestVersion:getMlflowModelLatestVersion"},"description":"Array of model versions, each the latest version for its stage.\n"},"name":{"type":"string","description":"Name of the registered model.\n","willReplaceOnChanges":true},"permissionLevel":{"type":"string","description":"Permission level of the requesting user on the object. For what is allowed at each level, see MLflow Model permissions.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowModelProviderConfig:getMlflowModelProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"tags":{"type":"array","items":{"$ref":"#/types/databricks:index/getMlflowModelTag:getMlflowModelTag"},"description":"Array of tags associated with the model.\n"},"userId":{"type":"string","description":"The username of the user that created the object.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getMlflowModel.\n","properties":{"description":{"description":"User-specified description for the object.\n","type":"string"},"id":{"description":"Unique identifier for the object.\n","type":"string"},"latestVersions":{"description":"Array of model versions, each the latest version for its stage.\n","items":{"$ref":"#/types/databricks:index/getMlflowModelLatestVersion:getMlflowModelLatestVersion"},"type":"array"},"name":{"description":"Name of the model.\n","type":"string"},"permissionLevel":{"description":"Permission level of the requesting user on the object. For what is allowed at each level, see MLflow Model permissions.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowModelProviderConfig:getMlflowModelProviderConfig"},"tags":{"description":"Array of tags associated with the model.\n","items":{"$ref":"#/types/databricks:index/getMlflowModelTag:getMlflowModelTag"},"type":"array"},"userId":{"description":"The username of the user that created the object.\n","type":"string"}},"required":["description","id","latestVersions","name","permissionLevel","tags","userId"],"type":"object"}},"databricks:index/getMlflowModels:getMlflowModels":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.MlflowModel \" pulumi-lang-dotnet=\" databricks.MlflowModel \" pulumi-lang-go=\" MlflowModel \" pulumi-lang-python=\" MlflowModel \" pulumi-lang-yaml=\" databricks.MlflowModel \" pulumi-lang-java=\" databricks.MlflowModel \"\u003e databricks.MlflowModel \u003c/span\u003eobjects, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getMlflowModels({});\nexport const model = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_mlflow_models()\npulumi.export(\"model\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetMlflowModels.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"model\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetMlflowModels(ctx, \u0026databricks.GetMlflowModelsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"model\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMlflowModelsArgs;\nimport 
java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getMlflowModels(GetMlflowModelsArgs.builder()\n            .build());\n\n        ctx.export(\"model\", this_);\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMlflowModels\n      arguments: {}\noutputs:\n  model: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n","inputs":{"description":"A collection of arguments for invoking getMlflowModels.\n","properties":{"names":{"type":"array","items":{"type":"string"},"description":"List of names of databricks_mlflow_model\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowModelsProviderConfig:getMlflowModelsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getMlflowModels.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"names":{"description":"List of names of databricks_mlflow_model\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getMlflowModelsProviderConfig:getMlflowModelsProviderConfig"}},"required":["names","id"],"type":"object"}},"databricks:index/getMwsCredentials:getMwsCredentials":{"description":"Lists all\u003cspan pulumi-lang-nodejs=\" databricks.MwsCredentials \" pulumi-lang-dotnet=\" databricks.MwsCredentials \" pulumi-lang-go=\" MwsCredentials \" pulumi-lang-python=\" MwsCredentials \" pulumi-lang-yaml=\" databricks.MwsCredentials \" pulumi-lang-java=\" databricks.MwsCredentials \"\u003e databricks.MwsCredentials \u003c/span\u003ein Databricks Account.\n\n\u003e This data source can only be used with an account-level provider!\n\n## Example Usage\n\nListing all credentials in Databricks Account\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getMwsCredentials({});\nexport const allMwsCredentials = all.then(all =\u003e all.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_mws_credentials()\npulumi.export(\"allMwsCredentials\", all.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetMwsCredentials.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allMwsCredentials\"] = all.Apply(getMwsCredentialsResult =\u003e getMwsCredentialsResult.Ids),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.LookupMwsCredentials(ctx, \u0026databricks.LookupMwsCredentialsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allMwsCredentials\", all.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport 
com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMwsCredentialsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getMwsCredentials(GetMwsCredentialsArgs.builder()\n            .build());\n\n        ctx.export(\"allMwsCredentials\", all.ids());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getMwsCredentials\n      arguments: {}\noutputs:\n  allMwsCredentials: ${all.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* Provisioning Databricks on AWS guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-dotnet=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-go=\" MwsCustomerManagedKeys \" pulumi-lang-python=\" MwsCustomerManagedKeys \" pulumi-lang-yaml=\" databricks.MwsCustomerManagedKeys \" pulumi-lang-java=\" databricks.MwsCustomerManagedKeys \"\u003e databricks.MwsCustomerManagedKeys \u003c/span\u003eto configure KMS keys for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsLogDelivery \" pulumi-lang-dotnet=\" databricks.MwsLogDelivery \" pulumi-lang-go=\" MwsLogDelivery \" pulumi-lang-python=\" MwsLogDelivery \" pulumi-lang-yaml=\" databricks.MwsLogDelivery \" pulumi-lang-java=\" databricks.MwsLogDelivery \"\u003e databricks.MwsLogDelivery \u003c/span\u003eto configure delivery of [billable usage logs](https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html) and [audit logs](https://docs.databricks.com/administration-guide/account-settings/audit-logs.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworks \" pulumi-lang-dotnet=\" databricks.MwsNetworks \" pulumi-lang-go=\" MwsNetworks \" pulumi-lang-python=\" MwsNetworks \" pulumi-lang-yaml=\" databricks.MwsNetworks \" pulumi-lang-java=\" databricks.MwsNetworks \"\u003e databricks.MwsNetworks \u003c/span\u003eto [configure VPC](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html) \u0026 subnets for new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsStorageConfigurations \" pulumi-lang-dotnet=\" databricks.MwsStorageConfigurations \" pulumi-lang-go=\" MwsStorageConfigurations \" pulumi-lang-python=\" MwsStorageConfigurations \" pulumi-lang-yaml=\" databricks.MwsStorageConfigurations \" pulumi-lang-java=\" databricks.MwsStorageConfigurations \"\u003e databricks.MwsStorageConfigurations \u003c/span\u003eto configure root bucket new workspaces within AWS.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto set up [AWS and GCP workspaces](https://docs.databricks.com/getting-started/overview.html#e2-architecture-1).\n","inputs":{"description":"A collection of arguments for invoking 
getMwsCredentials.\n","properties":{"ids":{"type":"object","additionalProperties":{"type":"string"},"description":"name-to-id map for all of the credentials in the account\n"},"providerConfig":{"$ref":"#/types/databricks:index/getMwsCredentialsProviderConfig:getMwsCredentialsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getMwsCredentials.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"additionalProperties":{"type":"string"},"description":"name-to-id map for all of the credentials in the account\n","type":"object"},"providerConfig":{"$ref":"#/types/databricks:index/getMwsCredentialsProviderConfig:getMwsCredentialsProviderConfig"}},"required":["ids","id"],"type":"object"}},"databricks:index/getMwsNetworkConnectivityConfig:getMwsNetworkConnectivityConfig":{"description":"Retrieves information about\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003ein Databricks Account.\n\n\u003e This data source can only be used with an account-level provider!\n\n## Example Usage\n\nFetching information about a network connectivity configuration in Databricks Account\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getMwsNetworkConnectivityConfig({\n    name: \"ncc\",\n});\nexport const config = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_mws_network_connectivity_config(name=\"ncc\")\npulumi.export(\"config\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetMwsNetworkConnectivityConfig.Invoke(new()\n    {\n        Name = \"ncc\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"config\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupMwsNetworkConnectivityConfig(ctx, \u0026databricks.LookupMwsNetworkConnectivityConfigArgs{\n\t\t\tName: \"ncc\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"config\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMwsNetworkConnectivityConfigArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        
Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getMwsNetworkConnectivityConfig(GetMwsNetworkConnectivityConfigArgs.builder()\n            .name(\"ncc\")\n            .build());\n\n        ctx.export(\"config\", this_);\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMwsNetworkConnectivityConfig\n      arguments:\n        name: ncc\noutputs:\n  config: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getMwsNetworkConnectivityConfigs \" pulumi-lang-dotnet=\" databricks.getMwsNetworkConnectivityConfigs \" pulumi-lang-go=\" getMwsNetworkConnectivityConfigs \" pulumi-lang-python=\" get_mws_network_connectivity_configs \" pulumi-lang-yaml=\" databricks.getMwsNetworkConnectivityConfigs \" pulumi-lang-java=\" databricks.getMwsNetworkConnectivityConfigs \"\u003e databricks.getMwsNetworkConnectivityConfigs \u003c/span\u003eto get names of all network connectivity configurations.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003eto manage network connectivity configuration.\n","inputs":{"description":"A collection of arguments for invoking getMwsNetworkConnectivityConfig.\n","properties":{"accountId":{"type":"string","description":"The Databricks account ID associated with this network configuration.\n"},"creationTime":{"type":"integer","description":"Time in epoch milliseconds when this object was created.\n"},"egressConfig":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfig:getMwsNetworkConnectivityConfigEgressConfig","description":"Array of egress configuration objects.\n"},"name":{"type":"string","description":"Name of the network connectivity configuration.\n","willReplaceOnChanges":true},"networkConnectivityConfigId":{"type":"string","description":"The Databricks network connectivity configuration ID.\n"},"region":{"type":"string","description":"The region of the network connectivity configuration.\n"},"updatedTime":{"type":"integer","description":"Time in epoch milliseconds when the network was updated.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getMwsNetworkConnectivityConfig.\n","properties":{"accountId":{"description":"The Databricks account ID associated with this network configuration.\n","type":"string"},"creationTime":{"description":"Time in epoch milliseconds when this object was created.\n","type":"integer"},"egressConfig":{"$ref":"#/types/databricks:index/getMwsNetworkConnectivityConfigEgressConfig:getMwsNetworkConnectivityConfigEgressConfig","description":"Array of egress configuration objects.\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"The name of the network connectivity configuration.\n","type":"string"},"networkConnectivityConfigId":{"description":"The Databricks network connectivity configuration ID.\n","type":"string"},"region":{"description":"The region of the network connectivity 
configuration.\n","type":"string"},"updatedTime":{"description":"Time in epoch milliseconds when the network was updated.\n","type":"integer"}},"required":["accountId","creationTime","egressConfig","name","networkConnectivityConfigId","region","updatedTime","id"],"type":"object"}},"databricks:index/getMwsNetworkConnectivityConfigs:getMwsNetworkConnectivityConfigs":{"description":"Lists all\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003ein Databricks Account.\n\n\u003e This data source can only be used with an account-level provider!\n\n## Example Usage\n\nList all network connectivity configurations in Databricks Account\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getMwsNetworkConnectivityConfigs({});\nexport const all = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_mws_network_connectivity_configs()\npulumi.export(\"all\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetMwsNetworkConnectivityConfigs.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"all\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetMwsNetworkConnectivityConfigs(ctx, \u0026databricks.GetMwsNetworkConnectivityConfigsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"all\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMwsNetworkConnectivityConfigsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getMwsNetworkConnectivityConfigs(GetMwsNetworkConnectivityConfigsArgs.builder()\n            .build());\n\n        ctx.export(\"all\", this_);\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMwsNetworkConnectivityConfigs\n      arguments: {}\noutputs:\n  all: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nList network connectivity configurations from a specific region in Databricks Account\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getMwsNetworkConnectivityConfigs({\n    region: \"us-east-1\",\n});\nexport const 
filtered = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_mws_network_connectivity_configs(region=\"us-east-1\")\npulumi.export(\"filtered\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetMwsNetworkConnectivityConfigs.Invoke(new()\n    {\n        Region = \"us-east-1\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"filtered\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetMwsNetworkConnectivityConfigs(ctx, \u0026databricks.GetMwsNetworkConnectivityConfigsArgs{\n\t\t\tRegion: pulumi.StringRef(\"us-east-1\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"filtered\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMwsNetworkConnectivityConfigsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getMwsNetworkConnectivityConfigs(GetMwsNetworkConnectivityConfigsArgs.builder()\n            .region(\"us-east-1\")\n            .build());\n\n        ctx.export(\"filtered\", this_);\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getMwsNetworkConnectivityConfigs\n      arguments:\n        region: us-east-1\noutputs:\n  filtered: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003eto get information about a single network connectivity configuration.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-dotnet=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-go=\" MwsNetworkConnectivityConfig \" pulumi-lang-python=\" MwsNetworkConnectivityConfig \" pulumi-lang-yaml=\" databricks.MwsNetworkConnectivityConfig \" pulumi-lang-java=\" databricks.MwsNetworkConnectivityConfig \"\u003e databricks.MwsNetworkConnectivityConfig \u003c/span\u003eto manage network connectivity configuration.\n","inputs":{"description":"A collection of arguments for invoking getMwsNetworkConnectivityConfigs.\n","properties":{"names":{"type":"array","items":{"type":"string"},"description":"List of names of databricks_mws_network_connectivity_config\n"},"region":{"type":"string","description":"Filter network connectivity 
configurations by region.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getMwsNetworkConnectivityConfigs.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"names":{"description":"List of names of databricks_mws_network_connectivity_config\n","items":{"type":"string"},"type":"array"},"region":{"type":"string"}},"required":["names","id"],"type":"object"}},"databricks:index/getMwsWorkspaces:getMwsWorkspaces":{"description":"Lists all\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003ein Databricks Account.\n\n\u003e This data source can only be used with an account-level provider!\n\n## Example Usage\n\nListing all workspaces in\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getMwsWorkspaces({});\nexport const allMwsWorkspaces = all.then(all =\u003e all.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_mws_workspaces()\npulumi.export(\"allMwsWorkspaces\", all.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetMwsWorkspaces.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allMwsWorkspaces\"] = all.Apply(getMwsWorkspacesResult =\u003e getMwsWorkspacesResult.Ids),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.LookupMwsWorkspaces(ctx, \u0026databricks.LookupMwsWorkspacesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allMwsWorkspaces\", all.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetMwsWorkspacesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getMwsWorkspaces(GetMwsWorkspacesArgs.builder()\n            .build());\n\n        ctx.export(\"allMwsWorkspaces\", all.ids());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getMwsWorkspaces\n      arguments: {}\noutputs:\n  allMwsWorkspaces: ${all.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" 
databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eto manage Databricks Workspaces on AWS and GCP.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MetastoreAssignment \" pulumi-lang-dotnet=\" databricks.MetastoreAssignment \" pulumi-lang-go=\" MetastoreAssignment \" pulumi-lang-python=\" MetastoreAssignment \" pulumi-lang-yaml=\" databricks.MetastoreAssignment \" pulumi-lang-java=\" databricks.MetastoreAssignment \"\u003e databricks.MetastoreAssignment \u003c/span\u003eto assign\u003cspan pulumi-lang-nodejs=\" databricks.Metastore \" pulumi-lang-dotnet=\" databricks.Metastore \" pulumi-lang-go=\" Metastore \" pulumi-lang-python=\" Metastore \" pulumi-lang-yaml=\" databricks.Metastore \" pulumi-lang-java=\" databricks.Metastore \"\u003e databricks.Metastore \u003c/span\u003eto\u003cspan pulumi-lang-nodejs=\" databricks.MwsWorkspaces \" pulumi-lang-dotnet=\" databricks.MwsWorkspaces \" pulumi-lang-go=\" MwsWorkspaces \" pulumi-lang-python=\" MwsWorkspaces \" pulumi-lang-yaml=\" databricks.MwsWorkspaces \" pulumi-lang-java=\" databricks.MwsWorkspaces \"\u003e databricks.MwsWorkspaces \u003c/span\u003eor\u003cspan pulumi-lang-nodejs=\" azurermDatabricksWorkspace\n\" pulumi-lang-dotnet=\" AzurermDatabricksWorkspace\n\" pulumi-lang-go=\" azurermDatabricksWorkspace\n\" pulumi-lang-python=\" azurerm_databricks_workspace\n\" pulumi-lang-yaml=\" azurermDatabricksWorkspace\n\" pulumi-lang-java=\" azurermDatabricksWorkspace\n\"\u003e azurerm_databricks_workspace\n\u003c/span\u003e\n","inputs":{"description":"A collection of arguments for invoking getMwsWorkspaces.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getMwsWorkspacesProviderConfig:getMwsWorkspacesProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getMwsWorkspaces.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"additionalProperties":{"type":"string"},"description":"name-to-id map for all of the workspaces in the account\n","type":"object"},"providerConfig":{"$ref":"#/types/databricks:index/getMwsWorkspacesProviderConfig:getMwsWorkspacesProviderConfig"}},"required":["ids","id"],"type":"object"}},"databricks:index/getNodeType:getNodeType":{"description":"Gets the smallest node type for\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003ethat fits search criteria, like amount of RAM or number of cores. [AWS](https://databricks.com/product/aws-pricing/instance-types) or [Azure](https://azure.microsoft.com/en-us/pricing/details/databricks/). Internally data source fetches [node types](https://docs.databricks.com/dev-tools/api/latest/clusters.html#list-node-types) available per cloud, similar to executing `databricks clusters list-node-types`, and filters it to return the smallest possible node with criteria.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n\u003e This is experimental functionality, which aims to simplify things. In case of wrong parameters given (e.g. 
\u003cspan pulumi-lang-nodejs=\"`minGpus \" pulumi-lang-dotnet=\"`MinGpus \" pulumi-lang-go=\"`minGpus \" pulumi-lang-python=\"`min_gpus \" pulumi-lang-yaml=\"`minGpus \" pulumi-lang-java=\"`minGpus \"\u003e`min_gpus \u003c/span\u003e= 876`) or no nodes matching, data source will return cloud-default node type, even though it doesn't match search criteria specified by data source arguments: [i3.xlarge](https://aws.amazon.com/ec2/instance-types/i3/) for AWS or [Standard_D3_v2](https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-sizes-specs#dv2-series) for Azure.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst withGpu = databricks.getNodeType({\n    localDisk: true,\n    minCores: 16,\n    gbPerCore: 1,\n    minGpus: 1,\n});\nconst gpuMl = databricks.getSparkVersion({\n    gpu: true,\n    ml: true,\n});\nconst research = new databricks.Cluster(\"research\", {\n    clusterName: \"Research Cluster\",\n    sparkVersion: gpuMl.then(gpuMl =\u003e gpuMl.id),\n    nodeTypeId: withGpu.then(withGpu =\u003e withGpu.id),\n    autoterminationMinutes: 20,\n    autoscale: {\n        minWorkers: 1,\n        maxWorkers: 50,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nwith_gpu = databricks.get_node_type(local_disk=True,\n    min_cores=16,\n    gb_per_core=1,\n    min_gpus=1)\ngpu_ml = databricks.get_spark_version(gpu=True,\n    ml=True)\nresearch = databricks.Cluster(\"research\",\n    cluster_name=\"Research Cluster\",\n    spark_version=gpu_ml.id,\n    node_type_id=with_gpu.id,\n    autotermination_minutes=20,\n    autoscale={\n        \"min_workers\": 1,\n        \"max_workers\": 50,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var withGpu = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n        MinCores = 16,\n        GbPerCore = 1,\n        MinGpus = 1,\n    });\n\n    var gpuMl = Databricks.GetSparkVersion.Invoke(new()\n    {\n        Gpu = true,\n        Ml = true,\n    });\n\n    var research = new Databricks.Cluster(\"research\", new()\n    {\n        ClusterName = \"Research Cluster\",\n        SparkVersion = gpuMl.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = withGpu.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 20,\n        Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs\n        {\n            MinWorkers = 1,\n            MaxWorkers = 50,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\twithGpu, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t\tMinCores:  pulumi.IntRef(16),\n\t\t\tGbPerCore: pulumi.IntRef(1),\n\t\t\tMinGpus:   pulumi.IntRef(1),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tgpuMl, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{\n\t\t\tGpu: pulumi.BoolRef(true),\n\t\t\tMl:  pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"research\", 
\u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Research Cluster\"),\n\t\t\tSparkVersion:           pulumi.String(gpuMl.Id),\n\t\t\tNodeTypeId:             pulumi.String(withGpu.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tAutoscale: \u0026databricks.ClusterAutoscaleArgs{\n\t\t\t\tMinWorkers: pulumi.Int(1),\n\t\t\t\tMaxWorkers: pulumi.Int(50),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterAutoscaleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var withGpu = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .minCores(16)\n            .gbPerCore(1)\n            .minGpus(1)\n            .build());\n\n        final var gpuMl = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .gpu(true)\n            .ml(true)\n            .build());\n\n        var research = new Cluster(\"research\", ClusterArgs.builder()\n            .clusterName(\"Research Cluster\")\n            .sparkVersion(gpuMl.id())\n            .nodeTypeId(withGpu.id())\n            .autoterminationMinutes(20)\n            .autoscale(ClusterAutoscaleArgs.builder()\n                .minWorkers(1)\n                .maxWorkers(50)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  research:\n    type: databricks:Cluster\n    properties:\n      clusterName: Research Cluster\n      sparkVersion: ${gpuMl.id}\n      nodeTypeId: ${withGpu.id}\n      autoterminationMinutes: 20\n      autoscale:\n        minWorkers: 1\n        maxWorkers: 50\nvariables:\n  withGpu:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n        minCores: 16\n        gbPerCore: 1\n        minGpus: 1\n  gpuMl:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments:\n        gpu: true\n        ml: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" 
databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n","inputs":{"description":"A collection of arguments for invoking getNodeType.\n","properties":{"arm":{"type":"boolean","description":"if we should limit the search only to nodes with AWS Graviton or Azure Cobalt CPUs. Default to _false_.\n","willReplaceOnChanges":true},"category":{"type":"string","description":"Node category, which can be one of (depending on the cloud environment, could be checked with `databricks clusters list-node-types -o json|jq '.node_types[]|.category'|sort |uniq`):\n* `General Purpose` (all clouds)\n* `General Purpose (HDD)` (Azure)\n* `Compute Optimized` (all clouds)\n* `Memory Optimized` (all clouds)\n* `Memory Optimized (Remote HDD)` (Azure)\n* `Storage Optimized` (AWS, Azure)\n* `GPU Accelerated` (AWS, Azure)\n","willReplaceOnChanges":true},"fleet":{"type":"boolean","description":"if we should limit the search only to [AWS fleet instance types](https://docs.databricks.com/compute/aws-fleet-instances.html). Default to _false_.\n","willReplaceOnChanges":true},"gbPerCore":{"type":"integer","description":"Number of gigabytes per core available on instance. Conflicts with \u003cspan pulumi-lang-nodejs=\"`minMemoryGb`\" pulumi-lang-dotnet=\"`MinMemoryGb`\" pulumi-lang-go=\"`minMemoryGb`\" pulumi-lang-python=\"`min_memory_gb`\" pulumi-lang-yaml=\"`minMemoryGb`\" pulumi-lang-java=\"`minMemoryGb`\"\u003e`min_memory_gb`\u003c/span\u003e. Defaults to _0_.\n","willReplaceOnChanges":true},"graviton":{"type":"boolean","description":"if we should limit the search only to nodes with AWS Graviton or Azure Cobalt CPUs. Default to _false_. *Use \u003cspan pulumi-lang-nodejs=\"`arm`\" pulumi-lang-dotnet=\"`Arm`\" pulumi-lang-go=\"`arm`\" pulumi-lang-python=\"`arm`\" pulumi-lang-yaml=\"`arm`\" pulumi-lang-java=\"`arm`\"\u003e`arm`\u003c/span\u003e instead!*\n","deprecationMessage":"Use \u003cspan pulumi-lang-nodejs=\"`arm`\" pulumi-lang-dotnet=\"`Arm`\" pulumi-lang-go=\"`arm`\" pulumi-lang-python=\"`arm`\" pulumi-lang-yaml=\"`arm`\" pulumi-lang-java=\"`arm`\"\u003e`arm`\u003c/span\u003e instead","willReplaceOnChanges":true},"id":{"type":"string","description":"node type, that can be used for databricks_job, databricks_cluster, or databricks_instance_pool.\n"},"isIoCacheEnabled":{"type":"boolean","description":". Pick only nodes that have IO Cache. 
Defaults to _false_.\n","willReplaceOnChanges":true},"localDisk":{"type":"boolean","description":"Pick only nodes with local storage. Defaults to _false_.\n","willReplaceOnChanges":true},"localDiskMinSize":{"type":"integer","description":"Pick only nodes whose local storage size is greater than or equal to the given value. Defaults to _0_.\n","willReplaceOnChanges":true},"minCores":{"type":"integer","description":"Minimum number of CPU cores available on instance. Defaults to _0_.\n","willReplaceOnChanges":true},"minGpus":{"type":"integer","description":"Minimum number of GPUs attached to instance. Defaults to _0_.\n","willReplaceOnChanges":true},"minMemoryGb":{"type":"integer","description":"Minimum amount of memory per node in gigabytes. Defaults to _0_.\n","willReplaceOnChanges":true},"photonDriverCapable":{"type":"boolean","description":"Pick only nodes that can run Photon driver. Defaults to _false_.\n","willReplaceOnChanges":true},"photonWorkerCapable":{"type":"boolean","description":"Pick only nodes that can run Photon workers. Defaults to _false_.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getNodeTypeProviderConfig:getNodeTypeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"supportPortForwarding":{"type":"boolean","description":"Pick only nodes that support port forwarding. Defaults to _false_.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getNodeType.\n","properties":{"arm":{"type":"boolean"},"category":{"type":"string"},"fleet":{"type":"boolean"},"gbPerCore":{"type":"integer"},"graviton":{"deprecationMessage":"Use \u003cspan pulumi-lang-nodejs=\"`arm`\" pulumi-lang-dotnet=\"`Arm`\" pulumi-lang-go=\"`arm`\" pulumi-lang-python=\"`arm`\" pulumi-lang-yaml=\"`arm`\" pulumi-lang-java=\"`arm`\"\u003e`arm`\u003c/span\u003e instead","type":"boolean"},"id":{"description":"node type, that can be used for databricks_job, databricks_cluster, or databricks_instance_pool.\n","type":"string"},"isIoCacheEnabled":{"type":"boolean"},"localDisk":{"type":"boolean"},"localDiskMinSize":{"type":"integer"},"minCores":{"type":"integer"},"minGpus":{"type":"integer"},"minMemoryGb":{"type":"integer"},"photonDriverCapable":{"type":"boolean"},"photonWorkerCapable":{"type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/getNodeTypeProviderConfig:getNodeTypeProviderConfig"},"supportPortForwarding":{"type":"boolean"}},"required":["id"],"type":"object"}},"databricks:index/getNotebook:getNotebook":{"description":"This data source allows you to export a notebook from the Databricks Workspace.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst features = databricks.getNotebook({\n    path: \"/Production/Features\",\n    format: \"SOURCE\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfeatures = databricks.get_notebook(path=\"/Production/Features\",\n    format=\"SOURCE\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var features = Databricks.GetNotebook.Invoke(new()\n    {\n        Path = \"/Production/Features\",\n   
     Format = \"SOURCE\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupNotebook(ctx, \u0026databricks.LookupNotebookArgs{\n\t\t\tPath:   \"/Production/Features\",\n\t\t\tFormat: \"SOURCE\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNotebookArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var features = DatabricksFunctions.getNotebook(GetNotebookArgs.builder()\n            .path(\"/Production/Features\")\n            .format(\"SOURCE\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  features:\n    fn::invoke:\n      function: databricks:getNotebook\n      arguments:\n        path: /Production/Features\n        format: SOURCE\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getNotebook.\n","properties":{"format":{"type":"string","description":"Notebook format to export. Either `SOURCE`, `HTML`, `JUPYTER`, or `DBC`.\n","willReplaceOnChanges":true},"language":{"type":"string","description":"notebook language\n"},"objectId":{"type":"integer","description":"notebook object ID\n"},"objectType":{"type":"string","description":"notebook object type\n"},"path":{"type":"string","description":"Notebook path on the workspace\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getNotebookProviderConfig:getNotebookProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["format","path"]},"outputs":{"description":"A collection of values returned by getNotebook.\n","properties":{"content":{"description":"notebook content in selected format\n","type":"string"},"format":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"language":{"description":"notebook language\n","type":"string"},"objectId":{"description":"notebook object ID\n","type":"integer"},"objectType":{"description":"notebook object type\n","type":"string"},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getNotebookProviderConfig:getNotebookProviderConfig"},"workspacePath":{"description":"path on Workspace File System (WSFS) in form of `/Workspace` + \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e\n","type":"string"}},"required":["content","format","language","objectId","objectType","path","workspacePath","id"],"type":"object"}},"databricks:index/getNotebookPaths:getNotebookPaths":{"description":"This data source allows to list notebooks in the Databricks Workspace.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst prod = databricks.getNotebookPaths({\n    path: \"/Production\",\n    recursive: true,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nprod = databricks.get_notebook_paths(path=\"/Production\",\n    recursive=True)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var prod = Databricks.GetNotebookPaths.Invoke(new()\n    {\n        Path = \"/Production\",\n        Recursive = true,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetNotebookPaths(ctx, \u0026databricks.GetNotebookPathsArgs{\n\t\t\tPath:      \"/Production\",\n\t\t\tRecursive: true,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNotebookPathsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var prod = DatabricksFunctions.getNotebookPaths(GetNotebookPathsArgs.builder()\n            .path(\"/Production\")\n            .recursive(true)\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  prod:\n    fn::invoke:\n      function: databricks:getNotebookPaths\n      arguments:\n        path: /Production\n        recursive: true\n```\n\u003c!--End 
PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getNotebookPaths.\n","properties":{"path":{"type":"string","description":"Path to workspace directory\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getNotebookPathsProviderConfig:getNotebookPathsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"recursive":{"type":"boolean","description":"Whether to recursively walk the given path\n","willReplaceOnChanges":true}},"type":"object","required":["path","recursive"]},"outputs":{"description":"A collection of values returned by getNotebookPaths.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"notebookPathLists":{"description":"list of objects with \u003cspan pulumi-lang-nodejs=\"`path`\" pulumi-lang-dotnet=\"`Path`\" pulumi-lang-go=\"`path`\" pulumi-lang-python=\"`path`\" pulumi-lang-yaml=\"`path`\" pulumi-lang-java=\"`path`\"\u003e`path`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`language`\" pulumi-lang-dotnet=\"`Language`\" pulumi-lang-go=\"`language`\" pulumi-lang-python=\"`language`\" pulumi-lang-yaml=\"`language`\" pulumi-lang-java=\"`language`\"\u003e`language`\u003c/span\u003e attributes\n","items":{"$ref":"#/types/databricks:index/getNotebookPathsNotebookPathList:getNotebookPathsNotebookPathList"},"type":"array"},"path":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getNotebookPathsProviderConfig:getNotebookPathsProviderConfig"},"recursive":{"type":"boolean"}},"required":["notebookPathLists","path","recursive","id"],"type":"object"}},"databricks:index/getNotificationDestinations:getNotificationDestinations":{"description":"This data source allows you to retrieve information about [Notification Destinations](https://docs.databricks.com/api/workspace/notificationdestinations). Notification Destinations are used to send notifications for query alerts and jobs to external systems such as email, Slack, Microsoft Teams, PagerDuty, or generic webhooks. 
\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst email = new databricks.NotificationDestination(\"email\", {\n    displayName: \"Email Destination\",\n    config: {\n        email: {\n            addresses: [\"abc@gmail.com\"],\n        },\n    },\n});\nconst slack = new databricks.NotificationDestination(\"slack\", {\n    displayName: \"Slack Destination\",\n    config: {\n        slack: {\n            url: \"https://hooks.slack.com/services/...\",\n        },\n    },\n});\n// Lists all notification desitnations\nconst _this = databricks.getNotificationDestinations({});\n// List destinations of specific type and name\nconst filteredNotification = databricks.getNotificationDestinations({\n    displayNameContains: \"Destination\",\n    type: \"EMAIL\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nemail = databricks.NotificationDestination(\"email\",\n    display_name=\"Email Destination\",\n    config={\n        \"email\": {\n            \"addresses\": [\"abc@gmail.com\"],\n        },\n    })\nslack = databricks.NotificationDestination(\"slack\",\n    display_name=\"Slack Destination\",\n    config={\n        \"slack\": {\n            \"url\": \"https://hooks.slack.com/services/...\",\n        },\n    })\n# Lists all notification desitnations\nthis = databricks.get_notification_destinations()\n# List destinations of specific type and name\nfiltered_notification = databricks.get_notification_destinations(display_name_contains=\"Destination\",\n    type=\"EMAIL\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var email = new Databricks.NotificationDestination(\"email\", new()\n    {\n        DisplayName = \"Email Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            Email = new Databricks.Inputs.NotificationDestinationConfigEmailArgs\n            {\n                Addresses = new[]\n                {\n                    \"abc@gmail.com\",\n                },\n            },\n        },\n    });\n\n    var slack = new Databricks.NotificationDestination(\"slack\", new()\n    {\n        DisplayName = \"Slack Destination\",\n        Config = new Databricks.Inputs.NotificationDestinationConfigArgs\n        {\n            Slack = new Databricks.Inputs.NotificationDestinationConfigSlackArgs\n            {\n                Url = \"https://hooks.slack.com/services/...\",\n            },\n        },\n    });\n\n    // Lists all notification desitnations\n    var @this = Databricks.GetNotificationDestinations.Invoke();\n\n    // List destinations of specific type and name\n    var filteredNotification = Databricks.GetNotificationDestinations.Invoke(new()\n    {\n        DisplayNameContains = \"Destination\",\n        Type = \"EMAIL\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.NewNotificationDestination(ctx, \"email\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Email Destination\"),\n\t\t\tConfig: 
\u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tEmail: \u0026databricks.NotificationDestinationConfigEmailArgs{\n\t\t\t\t\tAddresses: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"abc@gmail.com\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewNotificationDestination(ctx, \"slack\", \u0026databricks.NotificationDestinationArgs{\n\t\t\tDisplayName: pulumi.String(\"Slack Destination\"),\n\t\t\tConfig: \u0026databricks.NotificationDestinationConfigArgs{\n\t\t\t\tSlack: \u0026databricks.NotificationDestinationConfigSlackArgs{\n\t\t\t\t\tUrl: pulumi.String(\"https://hooks.slack.com/services/...\"),\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// Lists all notification desitnations\n\t\t_, err = databricks.GetNotificationDestinations(ctx, \u0026databricks.GetNotificationDestinationsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// List destinations of specific type and name\n\t\t_, err = databricks.GetNotificationDestinations(ctx, \u0026databricks.GetNotificationDestinationsArgs{\n\t\t\tDisplayNameContains: pulumi.StringRef(\"Destination\"),\n\t\t\tType:                pulumi.StringRef(\"EMAIL\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.NotificationDestination;\nimport com.pulumi.databricks.NotificationDestinationArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigEmailArgs;\nimport com.pulumi.databricks.inputs.NotificationDestinationConfigSlackArgs;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNotificationDestinationsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        var email = new NotificationDestination(\"email\", NotificationDestinationArgs.builder()\n            .displayName(\"Email Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .email(NotificationDestinationConfigEmailArgs.builder()\n                    .addresses(\"abc@gmail.com\")\n                    .build())\n                .build())\n            .build());\n\n        var slack = new NotificationDestination(\"slack\", NotificationDestinationArgs.builder()\n            .displayName(\"Slack Destination\")\n            .config(NotificationDestinationConfigArgs.builder()\n                .slack(NotificationDestinationConfigSlackArgs.builder()\n                    .url(\"https://hooks.slack.com/services/...\")\n                    .build())\n                .build())\n            .build());\n\n        // Lists all notification desitnations\n        final var this = DatabricksFunctions.getNotificationDestinations(GetNotificationDestinationsArgs.builder()\n            .build());\n\n        // List destinations of specific type and name\n        final var filteredNotification = DatabricksFunctions.getNotificationDestinations(GetNotificationDestinationsArgs.builder()\n            .displayNameContains(\"Destination\")\n            .type(\"EMAIL\")\n 
           .build());\n\n    }\n}\n```\n```yaml\nresources:\n  email:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Email Destination\n      config:\n        email:\n          addresses:\n            - abc@gmail.com\n  slack:\n    type: databricks:NotificationDestination\n    properties:\n      displayName: Slack Destination\n      config:\n        slack:\n          url: https://hooks.slack.com/services/...\nvariables:\n  # Lists all notification destinations\n  this:\n    fn::invoke:\n      function: databricks:getNotificationDestinations\n      arguments: {}\n  # List destinations of specific type and name\n  filteredNotification:\n    fn::invoke:\n      function: databricks:getNotificationDestinations\n      arguments:\n        displayNameContains: Destination\n        type: EMAIL\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getNotificationDestinations.\n","properties":{"displayNameContains":{"type":"string","description":"A **case-insensitive** substring to filter Notification Destinations by their display name.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getNotificationDestinationsProviderConfig:getNotificationDestinationsProviderConfig"},"type":{"type":"string","description":"The type of the Notification Destination to filter by. Valid values are: \n* `EMAIL` - Filters Notification Destinations of type Email.\n* `MICROSOFT_TEAMS` - Filters Notification Destinations of type Microsoft Teams.\n* `PAGERDUTY` - Filters Notification Destinations of type PagerDuty.\n* `SLACK` - Filters Notification Destinations of type Slack.\n* `WEBHOOK` - Filters Notification Destinations of type Webhook.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getNotificationDestinations.\n","properties":{"displayNameContains":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"notificationDestinations":{"description":"A list of Notification Destinations matching the specified criteria. Each element contains the following attributes:\n","items":{"$ref":"#/types/databricks:index/getNotificationDestinationsNotificationDestination:getNotificationDestinationsNotificationDestination"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getNotificationDestinationsProviderConfig:getNotificationDestinationsProviderConfig"},"type":{"type":"string"}},"required":["notificationDestinations","id"],"type":"object"}},"databricks:index/getOnlineStore:getOnlineStore":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getOnlineStore.\n","properties":{"name":{"type":"string","description":"The name of the online store. This is the unique identifier for the online store\n"},"providerConfig":{"$ref":"#/types/databricks:index/getOnlineStoreProviderConfig:getOnlineStoreProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getOnlineStore.\n","properties":{"capacity":{"description":"(string) - The capacity of the online store. 
Valid values are \"CU_1\", \"CU_2\", \"CU_4\", \"CU_8\"\n","type":"string"},"creationTime":{"description":"(string) - The timestamp when the online store was created\n","type":"string"},"creator":{"description":"(string) - The email of the creator of the online store\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - The name of the online store. This is the unique identifier for the online store\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getOnlineStoreProviderConfig:getOnlineStoreProviderConfig"},"readReplicaCount":{"description":"(integer) - The number of read replicas for the online store. Defaults to 0\n","type":"integer"},"state":{"description":"(string) - The current state of the online store. Possible values are: `AVAILABLE`, `DELETING`, `FAILING_OVER`, `STARTING`, `STOPPED`, `UPDATING`\n","type":"string"},"usagePolicyId":{"description":"(string) - The usage policy applied to the online store to track billing\n","type":"string"}},"required":["capacity","creationTime","creator","name","readReplicaCount","state","usagePolicyId","id"],"type":"object"}},"databricks:index/getOnlineStores:getOnlineStores":{"description":"[![Private Preview](https://img.shields.io/badge/Release_Stage-Private_Preview-blueviolet)](https://docs.databricks.com/aws/en/release-notes/release-types)\n","inputs":{"description":"A collection of arguments for invoking getOnlineStores.\n","properties":{"pageSize":{"type":"integer","description":"The maximum number of results to return. Defaults to 100 if not specified\n"},"providerConfig":{"$ref":"#/types/databricks:index/getOnlineStoresProviderConfig:getOnlineStoresProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getOnlineStores.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"onlineStores":{"items":{"$ref":"#/types/databricks:index/getOnlineStoresOnlineStore:getOnlineStoresOnlineStore"},"type":"array"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getOnlineStoresProviderConfig:getOnlineStoresProviderConfig"}},"required":["onlineStores","id"],"type":"object"}},"databricks:index/getPipelines:getPipelines":{"description":"Retrieves a list of all\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003e([Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt)) ids deployed in a workspace, or those matching the provided search term. 
Maximum 100 results.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGet all Lakeflow Declarative Pipelines:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getPipelines({});\nexport const allPipelines = all.then(all =\u003e all.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_pipelines()\npulumi.export(\"allPipelines\", all.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetPipelines.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allPipelines\"] = all.Apply(getPipelinesResult =\u003e getPipelinesResult.Ids),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetPipelines(ctx, \u0026databricks.GetPipelinesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allPipelines\", all.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPipelinesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getPipelines(GetPipelinesArgs.builder()\n            .build());\n\n        ctx.export(\"allPipelines\", all.ids());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getPipelines\n      arguments: {}\noutputs:\n  allPipelines: ${all.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFilter Lakeflow Declarative Pipelines by name (exact match):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getPipelines({\n    pipelineName: \"my_pipeline\",\n});\nexport const myPipeline = _this.then(_this =\u003e _this.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_pipelines(pipeline_name=\"my_pipeline\")\npulumi.export(\"myPipeline\", this.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetPipelines.Invoke(new()\n    {\n        PipelineName = \"my_pipeline\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"myPipeline\"] = @this.Apply(@this =\u003e @this.Apply(getPipelinesResult =\u003e getPipelinesResult.Ids)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error 
{\n\t\tthis, err := databricks.GetPipelines(ctx, \u0026databricks.GetPipelinesArgs{\n\t\t\tPipelineName: pulumi.StringRef(\"my_pipeline\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"myPipeline\", this.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPipelinesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getPipelines(GetPipelinesArgs.builder()\n            .pipelineName(\"my_pipeline\")\n            .build());\n\n        ctx.export(\"myPipeline\", this_.ids());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getPipelines\n      arguments:\n        pipelineName: my_pipeline\noutputs:\n  myPipeline: ${this.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nFilter Lakeflow Declarative Pipelines by name (wildcard search):\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getPipelines({\n    pipelineName: \"%pipeline%\",\n});\nexport const wildcardPipelines = _this.then(_this =\u003e _this.ids);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_pipelines(pipeline_name=\"%pipeline%\")\npulumi.export(\"wildcardPipelines\", this.ids)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetPipelines.Invoke(new()\n    {\n        PipelineName = \"%pipeline%\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"wildcardPipelines\"] = @this.Apply(@this =\u003e @this.Apply(getPipelinesResult =\u003e getPipelinesResult.Ids)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetPipelines(ctx, \u0026databricks.GetPipelinesArgs{\n\t\t\tPipelineName: pulumi.StringRef(\"%pipeline%\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"wildcardPipelines\", this.Ids)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPipelinesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getPipelines(GetPipelinesArgs.builder()\n            .pipelineName(\"%pipeline%\")\n            .build());\n\n        ctx.export(\"wildcardPipelines\", 
this_.ids());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getPipelines\n      arguments:\n        pipelineName: '%pipeline%'\noutputs:\n  wildcardPipelines: ${this.ids}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Pipeline \" pulumi-lang-dotnet=\" databricks.Pipeline \" pulumi-lang-go=\" Pipeline \" pulumi-lang-python=\" Pipeline \" pulumi-lang-yaml=\" databricks.Pipeline \" pulumi-lang-java=\" databricks.Pipeline \"\u003e databricks.Pipeline \u003c/span\u003eto deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Notebook \" pulumi-lang-dotnet=\" databricks.Notebook \" pulumi-lang-go=\" Notebook \" pulumi-lang-python=\" Notebook \" pulumi-lang-yaml=\" databricks.Notebook \" pulumi-lang-java=\" databricks.Notebook \"\u003e databricks.Notebook \u003c/span\u003eto manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).\n","inputs":{"description":"A collection of arguments for invoking getPipelines.\n","properties":{"ids":{"type":"array","items":{"type":"string"},"description":"List of ids for [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt) pipelines matching the provided search criteria.\n"},"pipelineName":{"type":"string","description":"Filter Lakeflow Declarative Pipelines by name for a given search term. `%` is the supported wildcard operator.\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getPipelinesProviderConfig:getPipelinesProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getPipelines.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"List of ids for [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt) pipelines matching the provided search criteria.\n","items":{"type":"string"},"type":"array"},"pipelineName":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPipelinesProviderConfig:getPipelinesProviderConfig"}},"required":["ids","id"],"type":"object"}},"databricks:index/getPolicyInfo:getPolicyInfo":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nRetrieves information about a specific ABAC (Attribute-Based Access Control) policy in Unity Catalog. Use this data source to query details of an existing policy by its securable type, securable name, and policy name.\n\nABAC policies provide governance for enforcing compliance through data attributes, allowing flexible and comprehensive access control based on conditions rather than specific resources.\n\n\n\n## Example Usage\n\n### Get Policy Information\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst piiPolicy = databricks.getPolicyInfo({\n    onSecurableType: \"CATALOG\",\n    onSecurableFullname: \"main\",\n    name: \"pii_data_policy\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\npii_policy = databricks.get_policy_info(on_securable_type=\"CATALOG\",\n    on_securable_fullname=\"main\",\n    name=\"pii_data_policy\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var piiPolicy = Databricks.GetPolicyInfo.Invoke(new()\n    {\n        OnSecurableType = \"CATALOG\",\n        OnSecurableFullname = \"main\",\n        Name = \"pii_data_policy\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupPolicyInfo(ctx, \u0026databricks.LookupPolicyInfoArgs{\n\t\t\tOnSecurableType:     \"CATALOG\",\n\t\t\tOnSecurableFullname: \"main\",\n\t\t\tName:                \"pii_data_policy\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPolicyInfoArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var piiPolicy = DatabricksFunctions.getPolicyInfo(GetPolicyInfoArgs.builder()\n            .onSecurableType(\"CATALOG\")\n            .onSecurableFullname(\"main\")\n            .name(\"pii_data_policy\")\n            
.build());\n\n    }\n}\n```\n```yaml\nvariables:\n  piiPolicy:\n    fn::invoke:\n      function: databricks:getPolicyInfo\n      arguments:\n        onSecurableType: CATALOG\n        onSecurableFullname: main\n        name: pii_data_policy\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPolicyInfo.\n","properties":{"name":{"type":"string","description":"Name of the policy. Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n"},"onSecurableFullname":{"type":"string","description":"Full name of the securable on which the policy is defined.\nRequired on create\n"},"onSecurableType":{"type":"string","description":"Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPolicyInfoProviderConfig:getPolicyInfoProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name","onSecurableFullname","onSecurableType"]},"outputs":{"description":"A collection of values returned by getPolicyInfo.\n","properties":{"columnMask":{"$ref":"#/types/databricks:index/getPolicyInfoColumnMask:getPolicyInfoColumnMask","description":"(ColumnMaskOptions) - Options for column mask policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_COLUMN_MASK`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"comment":{"description":"(string) - Optional description of the policy\n","type":"string"},"createdAt":{"description":"(integer) - Time at which the policy was created, in epoch milliseconds. Output only\n","type":"integer"},"createdBy":{"description":"(string) - Username of the user who created the policy. Output only\n","type":"string"},"exceptPrincipals":{"description":"(list of string) - Optional list of user or group names that should be excluded from the policy\n","items":{"type":"string"},"type":"array"},"forSecurableType":{"description":"(string) - Type of securables that the policy should take effect on.\nOnly `TABLE` is supported at this moment.\nRequired on create and optional on update. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n","type":"string"},"id":{"description":"(string) - Unique identifier of the policy. 
This field is output only and is generated by the system\n","type":"string"},"matchColumns":{"description":"(list of MatchColumn) - Optional list of condition expressions used to match table columns.\nOnly valid when \u003cspan pulumi-lang-nodejs=\"`forSecurableType`\" pulumi-lang-dotnet=\"`ForSecurableType`\" pulumi-lang-go=\"`forSecurableType`\" pulumi-lang-python=\"`for_securable_type`\" pulumi-lang-yaml=\"`forSecurableType`\" pulumi-lang-java=\"`forSecurableType`\"\u003e`for_securable_type`\u003c/span\u003e is `TABLE`.\nWhen specified, the policy only applies to tables whose columns satisfy all match conditions\n","items":{"$ref":"#/types/databricks:index/getPolicyInfoMatchColumn:getPolicyInfoMatchColumn"},"type":"array"},"name":{"description":"(string) - Name of the policy. Required on create and optional on update.\nTo rename the policy, set \u003cspan pulumi-lang-nodejs=\"`name`\" pulumi-lang-dotnet=\"`Name`\" pulumi-lang-go=\"`name`\" pulumi-lang-python=\"`name`\" pulumi-lang-yaml=\"`name`\" pulumi-lang-java=\"`name`\"\u003e`name`\u003c/span\u003e to a different value on update\n","type":"string"},"onSecurableFullname":{"description":"(string) - Full name of the securable on which the policy is defined.\nRequired on create\n","type":"string"},"onSecurableType":{"description":"(string) - Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n","type":"string"},"policyType":{"description":"(string) - Type of the policy. Required on create. Possible values are: `POLICY_TYPE_COLUMN_MASK`, `POLICY_TYPE_ROW_FILTER`\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPolicyInfoProviderConfig:getPolicyInfoProviderConfig"},"rowFilter":{"$ref":"#/types/databricks:index/getPolicyInfoRowFilter:getPolicyInfoRowFilter","description":"(RowFilterOptions) - Options for row filter policies. Valid only if \u003cspan pulumi-lang-nodejs=\"`policyType`\" pulumi-lang-dotnet=\"`PolicyType`\" pulumi-lang-go=\"`policyType`\" pulumi-lang-python=\"`policy_type`\" pulumi-lang-yaml=\"`policyType`\" pulumi-lang-java=\"`policyType`\"\u003e`policy_type`\u003c/span\u003e is `POLICY_TYPE_ROW_FILTER`.\nRequired on create and optional on update. When specified on update,\nthe new options will replace the existing options as a whole\n"},"toPrincipals":{"description":"(list of string) - List of user or group names that the policy applies to.\nRequired on create and optional on update\n","items":{"type":"string"},"type":"array"},"updatedAt":{"description":"(integer) - Time at which the policy was last modified, in epoch milliseconds. Output only\n","type":"integer"},"updatedBy":{"description":"(string) - Username of the user who last modified the policy. 
Output only\n","type":"string"},"whenCondition":{"description":"(string) - Optional condition when the policy should take effect\n","type":"string"}},"required":["columnMask","comment","createdAt","createdBy","exceptPrincipals","forSecurableType","id","matchColumns","name","onSecurableFullname","onSecurableType","policyType","rowFilter","toPrincipals","updatedAt","updatedBy","whenCondition"],"type":"object"}},"databricks:index/getPolicyInfos:getPolicyInfos":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nRetrieves a list of all ABAC (Attribute-Based Access Control) policies defined on a specific securable in Unity Catalog. Use this data source to query all policies for a given securable type and name.\n\nABAC policies provide governance for enforcing compliance through data attributes, allowing flexible and comprehensive access control based on conditions rather than specific resources.\n\n\n\n## Example Usage\n\n### List All Policies on a Securable\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst catalogPolicies = databricks.getPolicyInfos({\n    onSecurableType: \"CATALOG\",\n    onSecurableFullname: \"main\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncatalog_policies = databricks.get_policy_infos(on_securable_type=\"CATALOG\",\n    on_securable_fullname=\"main\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var catalogPolicies = Databricks.GetPolicyInfos.Invoke(new()\n    {\n        OnSecurableType = \"CATALOG\",\n        OnSecurableFullname = \"main\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetPolicyInfos(ctx, \u0026databricks.GetPolicyInfosArgs{\n\t\t\tOnSecurableType:     \"CATALOG\",\n\t\t\tOnSecurableFullname: \"main\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPolicyInfosArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var catalogPolicies = DatabricksFunctions.getPolicyInfos(GetPolicyInfosArgs.builder()\n            .onSecurableType(\"CATALOG\")\n            .onSecurableFullname(\"main\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  catalogPolicies:\n    fn::invoke:\n      function: databricks:getPolicyInfos\n      arguments:\n        onSecurableType: CATALOG\n        onSecurableFullname: main\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPolicyInfos.\n","properties":{"includeInherited":{"type":"boolean","description":"Optional. 
Whether to include policies defined on parent securables.\nBy default, the inherited policies are not included\n"},"maxResults":{"type":"integer","description":"Optional.  Maximum number of policies to return on a single page (page length).\n- When not set or set to 0, the page length is set to a server configured value (recommended);\n- When set to a value greater than 0, the page length is the minimum of this value and a server configured value;\n"},"onSecurableFullname":{"type":"string","description":"Required. The fully qualified name of securable to list policies for\n"},"onSecurableType":{"type":"string","description":"Required. The type of the securable to list policies for\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPolicyInfosProviderConfig:getPolicyInfosProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["onSecurableFullname","onSecurableType"]},"outputs":{"description":"A collection of values returned by getPolicyInfos.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"includeInherited":{"type":"boolean"},"maxResults":{"type":"integer"},"onSecurableFullname":{"description":"(string) - Full name of the securable on which the policy is defined.\nRequired on create\n","type":"string"},"onSecurableType":{"description":"(string) - Type of the securable on which the policy is defined.\nOnly `CATALOG`, `SCHEMA` and `TABLE` are supported at this moment.\nRequired on create. Possible values are: `CATALOG`, `CLEAN_ROOM`, `CONNECTION`, `CREDENTIAL`, `EXTERNAL_LOCATION`, `EXTERNAL_METADATA`, `FUNCTION`, `METASTORE`, `PIPELINE`, `PROVIDER`, `RECIPIENT`, `SCHEMA`, `SHARE`, `STAGING_TABLE`, `STORAGE_CREDENTIAL`, `TABLE`, `VOLUME`\n","type":"string"},"policies":{"items":{"$ref":"#/types/databricks:index/getPolicyInfosPolicy:getPolicyInfosPolicy"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getPolicyInfosProviderConfig:getPolicyInfosProviderConfig"}},"required":["onSecurableFullname","onSecurableType","policies","id"],"type":"object"}},"databricks:index/getPostgresBranch:getPostgresBranch":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source retrieves a single Postgres branch.\n\n\n## Example Usage\n\n### Retrieve Branch by Name\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getPostgresBranch({\n    name: \"projects/my-project/branches/dev-branch\",\n});\nexport const branchIsProtected = _this.then(_this =\u003e _this.status?.isProtected);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_postgres_branch(name=\"projects/my-project/branches/dev-branch\")\npulumi.export(\"branchIsProtected\", this.status.is_protected)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetPostgresBranch.Invoke(new()\n    {\n        Name = \"projects/my-project/branches/dev-branch\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"branchIsProtected\"] = @this.Apply(@this =\u003e @this.Apply(getPostgresBranchResult =\u003e 
getPostgresBranchResult.Status?.IsProtected)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupPostgresBranch(ctx, \u0026databricks.LookupPostgresBranchArgs{\n\t\t\tName: \"projects/my-project/branches/dev-branch\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"branchIsProtected\", this.Status.IsProtected)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPostgresBranchArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getPostgresBranch(GetPostgresBranchArgs.builder()\n            .name(\"projects/my-project/branches/dev-branch\")\n            .build());\n\n        ctx.export(\"branchIsProtected\", this_.status().isProtected());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getPostgresBranch\n      arguments:\n        name: projects/my-project/branches/dev-branch\noutputs:\n  branchIsProtected: ${this.status.isProtected}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresBranch.\n","properties":{"name":{"type":"string","description":"Output only. The full resource path of the branch.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresBranchProviderConfig:getPostgresBranchProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getPostgresBranch.\n","properties":{"createTime":{"description":"(string) - A timestamp indicating when the branch was created\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Output only. 
The full resource path of the branch.\nFormat: projects/{project_id}/branches/{branch_id}\n","type":"string"},"parent":{"description":"(string) - The project containing this branch (API resource hierarchy).\nFormat: projects/{project_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresBranchProviderConfig:getPostgresBranchProviderConfig"},"spec":{"$ref":"#/types/databricks:index/getPostgresBranchSpec:getPostgresBranchSpec","description":"(BranchSpec) - The spec contains the branch configuration\n"},"status":{"$ref":"#/types/databricks:index/getPostgresBranchStatus:getPostgresBranchStatus","description":"(BranchStatus) - The current status of a Branch\n"},"uid":{"description":"(string) - System-generated unique ID for the branch\n","type":"string"},"updateTime":{"description":"(string) - A timestamp indicating when the branch was last updated\n","type":"string"}},"required":["createTime","name","parent","spec","status","uid","updateTime","id"],"type":"object"}},"databricks:index/getPostgresBranches:getPostgresBranches":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source lists all Postgres branches in a project.\n\n\n## Example Usage\n\n### List All Branches in a Project\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getPostgresBranches({\n    parent: \"projects/my-project\",\n});\nexport const branchNames = all.then(all =\u003e all.branches.map(branch =\u003e (branch.name)));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_postgres_branches(parent=\"projects/my-project\")\npulumi.export(\"branchNames\", [branch.name for branch in all.branches])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetPostgresBranches.Invoke(new()\n    {\n        Parent = \"projects/my-project\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"branchNames\"] = all.Apply(getPostgresBranchesResult =\u003e getPostgresBranchesResult.Branches.Select(branch =\u003e branch.Name).ToList()),\n    };\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresBranches.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned. 
Cannot be negative\n"},"parent":{"type":"string","description":"The Project that owns this collection of branches.\nFormat: projects/{project_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresBranchesProviderConfig:getPostgresBranchesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["parent"]},"outputs":{"description":"A collection of values returned by getPostgresBranches.\n","properties":{"branches":{"items":{"$ref":"#/types/databricks:index/getPostgresBranchesBranch:getPostgresBranchesBranch"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"parent":{"description":"(string) - The project containing this branch (API resource hierarchy).\nFormat: projects/{project_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresBranchesProviderConfig:getPostgresBranchesProviderConfig"}},"required":["branches","parent","id"],"type":"object"}},"databricks:index/getPostgresEndpoint:getPostgresEndpoint":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source retrieves a single Postgres endpoint.\n\n\n## Example Usage\n\n### Retrieve Endpoint by Name\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getPostgresEndpoint({\n    name: \"projects/my-project/branches/dev-branch/endpoints/primary\",\n});\nexport const endpointType = _this.then(_this =\u003e _this.status?.endpointType);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_postgres_endpoint(name=\"projects/my-project/branches/dev-branch/endpoints/primary\")\npulumi.export(\"endpointType\", this.status.endpoint_type)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetPostgresEndpoint.Invoke(new()\n    {\n        Name = \"projects/my-project/branches/dev-branch/endpoints/primary\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"endpointType\"] = @this.Apply(@this =\u003e @this.Apply(getPostgresEndpointResult =\u003e getPostgresEndpointResult.Status?.EndpointType)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupPostgresEndpoint(ctx, \u0026databricks.LookupPostgresEndpointArgs{\n\t\t\tName: \"projects/my-project/branches/dev-branch/endpoints/primary\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"endpointType\", this.Status.EndpointType)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPostgresEndpointArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static 
void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getPostgresEndpoint(GetPostgresEndpointArgs.builder()\n            .name(\"projects/my-project/branches/dev-branch/endpoints/primary\")\n            .build());\n\n        ctx.export(\"endpointType\", this_.status().endpointType());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getPostgresEndpoint\n      arguments:\n        name: projects/my-project/branches/dev-branch/endpoints/primary\noutputs:\n  endpointType: ${this.status.endpointType}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresEndpoint.\n","properties":{"name":{"type":"string","description":"Output only. The full resource path of the endpoint.\nFormat: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresEndpointProviderConfig:getPostgresEndpointProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getPostgresEndpoint.\n","properties":{"createTime":{"description":"(string) - A timestamp indicating when the compute endpoint was created\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Output only. The full resource path of the endpoint.\nFormat: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id}\n","type":"string"},"parent":{"description":"(string) - The branch containing this endpoint (API resource hierarchy).\nFormat: projects/{project_id}/branches/{branch_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresEndpointProviderConfig:getPostgresEndpointProviderConfig"},"spec":{"$ref":"#/types/databricks:index/getPostgresEndpointSpec:getPostgresEndpointSpec","description":"(EndpointSpec) - The spec contains the compute endpoint configuration, including autoscaling limits, suspend timeout, and disabled state\n"},"status":{"$ref":"#/types/databricks:index/getPostgresEndpointStatus:getPostgresEndpointStatus","description":"(EndpointStatus) - Current operational status of the compute endpoint\n"},"uid":{"description":"(string) - System-generated unique ID for the endpoint\n","type":"string"},"updateTime":{"description":"(string) - A timestamp indicating when the compute endpoint was last updated\n","type":"string"}},"required":["createTime","name","parent","spec","status","uid","updateTime","id"],"type":"object"}},"databricks:index/getPostgresEndpoints:getPostgresEndpoints":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source lists all Postgres endpoints in a branch.\n\n\n## Example Usage\n\n### List All Endpoints in a Branch\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getPostgresEndpoints({\n    parent: \"projects/my-project/branches/dev-branch\",\n});\nexport const endpointNames = all.then(all =\u003e all.endpoints.map(endpoint =\u003e (endpoint.name)));\nexport const endpointTypes = all.then(all =\u003e all.endpoints.map(endpoint =\u003e 
(endpoint.status?.endpointType)));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_postgres_endpoints(parent=\"projects/my-project/branches/dev-branch\")\npulumi.export(\"endpointNames\", [endpoint.name for endpoint in all.endpoints])\npulumi.export(\"endpointTypes\", [endpoint.status.endpoint_type for endpoint in all.endpoints])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetPostgresEndpoints.Invoke(new()\n    {\n        Parent = \"projects/my-project/branches/dev-branch\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"endpointNames\"] = all.Apply(getPostgresEndpointsResult =\u003e getPostgresEndpointsResult.Endpoints.Select(endpoint =\u003e endpoint.Name).ToList()),\n        [\"endpointTypes\"] = all.Apply(getPostgresEndpointsResult =\u003e getPostgresEndpointsResult.Endpoints.Select(endpoint =\u003e endpoint.Status?.EndpointType).ToList()),\n    };\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresEndpoints.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned. Cannot be negative\n"},"parent":{"type":"string","description":"The Branch that owns this collection of endpoints.\nFormat: projects/{project_id}/branches/{branch_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresEndpointsProviderConfig:getPostgresEndpointsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["parent"]},"outputs":{"description":"A collection of values returned by getPostgresEndpoints.\n","properties":{"endpoints":{"items":{"$ref":"#/types/databricks:index/getPostgresEndpointsEndpoint:getPostgresEndpointsEndpoint"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"parent":{"description":"(string) - The branch containing this endpoint (API resource hierarchy).\nFormat: projects/{project_id}/branches/{branch_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresEndpointsProviderConfig:getPostgresEndpointsProviderConfig"}},"required":["endpoints","parent","id"],"type":"object"}},"databricks:index/getPostgresProject:getPostgresProject":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source retrieves a single Postgres project.\n\n\n## Example Usage\n\n### Retrieve Project by Name\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getPostgresProject({\n    name: \"projects/my-project\",\n});\nexport const projectPgVersion = _this.then(_this =\u003e _this.status?.pgVersion);\nexport const projectDisplayName = _this.then(_this =\u003e _this.status?.displayName);\nexport const projectHistoryRetention = _this.then(_this =\u003e _this.status?.historyRetentionDuration);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_postgres_project(name=\"projects/my-project\")\npulumi.export(\"projectPgVersion\", this.status.pg_version)\npulumi.export(\"projectDisplayName\", 
this.status.display_name)\npulumi.export(\"projectHistoryRetention\", this.status.history_retention_duration)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetPostgresProject.Invoke(new()\n    {\n        Name = \"projects/my-project\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"projectPgVersion\"] = @this.Apply(@this =\u003e @this.Apply(getPostgresProjectResult =\u003e getPostgresProjectResult.Status?.PgVersion)),\n        [\"projectDisplayName\"] = @this.Apply(@this =\u003e @this.Apply(getPostgresProjectResult =\u003e getPostgresProjectResult.Status?.DisplayName)),\n        [\"projectHistoryRetention\"] = @this.Apply(@this =\u003e @this.Apply(getPostgresProjectResult =\u003e getPostgresProjectResult.Status?.HistoryRetentionDuration)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupPostgresProject(ctx, \u0026databricks.LookupPostgresProjectArgs{\n\t\t\tName: \"projects/my-project\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"projectPgVersion\", this.Status.PgVersion)\n\t\tctx.Export(\"projectDisplayName\", this.Status.DisplayName)\n\t\tctx.Export(\"projectHistoryRetention\", this.Status.HistoryRetentionDuration)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetPostgresProjectArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getPostgresProject(GetPostgresProjectArgs.builder()\n            .name(\"projects/my-project\")\n            .build());\n\n        ctx.export(\"projectPgVersion\", this_.status().pgVersion());\n        ctx.export(\"projectDisplayName\", this_.status().displayName());\n        ctx.export(\"projectHistoryRetention\", this_.status().historyRetentionDuration());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getPostgresProject\n      arguments:\n        name: projects/my-project\noutputs:\n  projectPgVersion: ${this.status.pgVersion}\n  projectDisplayName: ${this.status.displayName}\n  projectHistoryRetention: ${this.status.historyRetentionDuration}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresProject.\n","properties":{"name":{"type":"string","description":"Output only. 
The full resource path of the project.\nFormat: projects/{project_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresProjectProviderConfig:getPostgresProjectProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getPostgresProject.\n","properties":{"createTime":{"description":"(string) - A timestamp indicating when the project was created\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Output only. The full resource path of the project.\nFormat: projects/{project_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresProjectProviderConfig:getPostgresProjectProviderConfig"},"spec":{"$ref":"#/types/databricks:index/getPostgresProjectSpec:getPostgresProjectSpec","description":"(ProjectSpec) - The spec contains the project configuration, including display_name,\u003cspan pulumi-lang-nodejs=\" pgVersion \" pulumi-lang-dotnet=\" PgVersion \" pulumi-lang-go=\" pgVersion \" pulumi-lang-python=\" pg_version \" pulumi-lang-yaml=\" pgVersion \" pulumi-lang-java=\" pgVersion \"\u003e pg_version \u003c/span\u003e(Postgres version), history_retention_duration, and default_endpoint_settings\n"},"status":{"$ref":"#/types/databricks:index/getPostgresProjectStatus:getPostgresProjectStatus","description":"(ProjectStatus) - The current status of a Project\n"},"uid":{"description":"(string) - System-generated unique ID for the project\n","type":"string"},"updateTime":{"description":"(string) - A timestamp indicating when the project was last updated\n","type":"string"}},"required":["createTime","name","spec","status","uid","updateTime","id"],"type":"object"}},"databricks:index/getPostgresProjects:getPostgresProjects":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source lists all Postgres projects in the workspace.\n\n\n## Example Usage\n\n### List All Projects\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getPostgresProjects({});\nexport const projectNames = all.then(all =\u003e all.projects.map(project =\u003e (project.name)));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_postgres_projects()\npulumi.export(\"projectNames\", [project.name for project in all.projects])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetPostgresProjects.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"projectNames\"] = all.Apply(getPostgresProjectsResult =\u003e getPostgresProjectsResult.Projects.Select(project =\u003e project.Name).ToList()),\n    };\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getPostgresProjects.\n","properties":{"pageSize":{"type":"integer","description":"Upper bound for items returned. Cannot be negative. 
The maximum value is 100\n"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresProjectsProviderConfig:getPostgresProjectsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getPostgresProjects.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"projects":{"items":{"$ref":"#/types/databricks:index/getPostgresProjectsProject:getPostgresProjectsProject"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getPostgresProjectsProviderConfig:getPostgresProjectsProviderConfig"}},"required":["projects","id"],"type":"object"}},"databricks:index/getQualityMonitorV2:getQualityMonitorV2":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n\u003e **Deprecated** This data source is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`databricks.DataQualityMonitor`\" pulumi-lang-dotnet=\"`databricks.DataQualityMonitor`\" pulumi-lang-go=\"`DataQualityMonitor`\" pulumi-lang-python=\"`DataQualityMonitor`\" pulumi-lang-yaml=\"`databricks.DataQualityMonitor`\" pulumi-lang-java=\"`databricks.DataQualityMonitor`\"\u003e`databricks.DataQualityMonitor`\u003c/span\u003e instead.\n\nThis data source can be used to fetch a quality monitors v2.\n\n\u003e **Note** This data source can only be used with an workspace-level provider!\n\n\n## Example Usage\n\n\u003e **Deprecated** This data source is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`databricks.DataQualityMonitor`\" pulumi-lang-dotnet=\"`databricks.DataQualityMonitor`\" pulumi-lang-go=\"`DataQualityMonitor`\" pulumi-lang-python=\"`DataQualityMonitor`\" pulumi-lang-yaml=\"`databricks.DataQualityMonitor`\" pulumi-lang-java=\"`databricks.DataQualityMonitor`\"\u003e`databricks.DataQualityMonitor`\u003c/span\u003e instead.\n\nReferring to a quality monitor by uc object type (currently only support \u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e) and object id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getSchema({\n    name: \"my_catalog.my_schema\",\n});\nconst thisGetQualityMonitorV2 = _this.then(_this =\u003e databricks.getQualityMonitorV2({\n    objectType: \"schema\",\n    objectId: _this.schemaInfo?.schemaId,\n}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_schema(name=\"my_catalog.my_schema\")\nthis_get_quality_monitor_v2 = databricks.get_quality_monitor_v2(object_type=\"schema\",\n    object_id=this.schema_info.schema_id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetSchema.Invoke(new()\n    {\n        Name = \"my_catalog.my_schema\",\n    });\n\n    var thisGetQualityMonitorV2 = Databricks.GetQualityMonitorV2.Invoke(new()\n    {\n        ObjectType = \"schema\",\n        ObjectId = @this.Apply(getSchemaResult =\u003e getSchemaResult.SchemaInfo?.SchemaId),\n    
});\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupSchema(ctx, \u0026databricks.LookupSchemaArgs{\n\t\t\tName: \"my_catalog.my_schema\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupQualityMonitorV2(ctx, \u0026databricks.LookupQualityMonitorV2Args{\n\t\t\tObjectType: \"schema\",\n\t\t\tObjectId:   this.SchemaInfo.SchemaId,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSchemaArgs;\nimport com.pulumi.databricks.inputs.GetQualityMonitorV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getSchema(GetSchemaArgs.builder()\n            .name(\"my_catalog.my_schema\")\n            .build());\n\n        final var thisGetQualityMonitorV2 = DatabricksFunctions.getQualityMonitorV2(GetQualityMonitorV2Args.builder()\n            .objectType(\"schema\")\n            .objectId(this_.schemaInfo().schemaId())\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getSchema\n      arguments:\n        name: my_catalog.my_schema\n  thisGetQualityMonitorV2:\n    fn::invoke:\n      function: databricks:getQualityMonitorV2\n      arguments:\n        objectType: schema\n        objectId: ${this.schemaInfo.schemaId}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getQualityMonitorV2.\n","properties":{"objectId":{"type":"string","description":"The uuid of the request object. For example, schema id\n"},"objectType":{"type":"string","description":"The type of the monitored object. Can be one of the following: schema\n"},"providerConfig":{"$ref":"#/types/databricks:index/getQualityMonitorV2ProviderConfig:getQualityMonitorV2ProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["objectId","objectType"]},"outputs":{"description":"A collection of values returned by getQualityMonitorV2.\n","properties":{"anomalyDetectionConfig":{"$ref":"#/types/databricks:index/getQualityMonitorV2AnomalyDetectionConfig:getQualityMonitorV2AnomalyDetectionConfig","description":"(AnomalyDetectionConfig)\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"objectId":{"description":"(string) - The uuid of the request object. For example, schema id\n","type":"string"},"objectType":{"description":"(string) - The type of the monitored object. 
Can be one of the following: schema\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getQualityMonitorV2ProviderConfig:getQualityMonitorV2ProviderConfig"},"validityCheckConfigurations":{"description":"(list of ValidityCheckConfiguration) - Validity check configurations for anomaly detection\n","items":{"$ref":"#/types/databricks:index/getQualityMonitorV2ValidityCheckConfiguration:getQualityMonitorV2ValidityCheckConfiguration"},"type":"array"}},"required":["anomalyDetectionConfig","objectId","objectType","validityCheckConfigurations","id"],"type":"object"}},"databricks:index/getQualityMonitorsV2:getQualityMonitorsV2":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\n\u003e **Deprecated** This data source is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`databricks.getDataQualityMonitors`\" pulumi-lang-dotnet=\"`databricks.getDataQualityMonitors`\" pulumi-lang-go=\"`getDataQualityMonitors`\" pulumi-lang-python=\"`get_data_quality_monitors`\" pulumi-lang-yaml=\"`databricks.getDataQualityMonitors`\" pulumi-lang-java=\"`databricks.getDataQualityMonitors`\"\u003e`databricks.getDataQualityMonitors`\u003c/span\u003e instead.\n\nThis data source can be used to fetch the list of quality monitors v2.\n\n\u003e **Note** This data source can only be used with an workspace-level provider!\n\n\n## Example Usage\n\n\u003e **Deprecated** This data source is deprecated. Please use \u003cspan pulumi-lang-nodejs=\"`databricks.getDataQualityMonitors`\" pulumi-lang-dotnet=\"`databricks.getDataQualityMonitors`\" pulumi-lang-go=\"`getDataQualityMonitors`\" pulumi-lang-python=\"`get_data_quality_monitors`\" pulumi-lang-yaml=\"`databricks.getDataQualityMonitors`\" pulumi-lang-java=\"`databricks.getDataQualityMonitors`\"\u003e`databricks.getDataQualityMonitors`\u003c/span\u003e instead.\n\nGetting a list of all quality monitors:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getQualityMonitorsV2({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_quality_monitors_v2()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetQualityMonitorsV2.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetQualityMonitorsV2(ctx, \u0026databricks.GetQualityMonitorsV2Args{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetQualityMonitorsV2Args;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = 
DatabricksFunctions.getQualityMonitorsV2(GetQualityMonitorsV2Args.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getQualityMonitorsV2\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getQualityMonitorsV2.\n","properties":{"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getQualityMonitorsV2ProviderConfig:getQualityMonitorsV2ProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getQualityMonitorsV2.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getQualityMonitorsV2ProviderConfig:getQualityMonitorsV2ProviderConfig"},"qualityMonitors":{"items":{"$ref":"#/types/databricks:index/getQualityMonitorsV2QualityMonitor:getQualityMonitorsV2QualityMonitor"},"type":"array"}},"required":["qualityMonitors","id"],"type":"object"}},"databricks:index/getRegisteredModel:getRegisteredModel":{"description":"This resource allows you to get information about [Model in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html) in Databricks.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getRegisteredModel({\n    fullName: \"main.default.my_model\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_registered_model(full_name=\"main.default.my_model\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetRegisteredModel.Invoke(new()\n    {\n        FullName = \"main.default.my_model\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupRegisteredModel(ctx, \u0026databricks.LookupRegisteredModelArgs{\n\t\t\tFullName: \"main.default.my_model\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetRegisteredModelArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getRegisteredModel(GetRegisteredModelArgs.builder()\n            .fullName(\"main.default.my_model\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getRegisteredModel\n      arguments:\n        fullName: 
main.default.my_model\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eresource to manage models within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eto manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n","inputs":{"description":"A collection of arguments for invoking getRegisteredModel.\n","properties":{"fullName":{"type":"string","description":"The fully-qualified name of the registered model (`catalog_name.schema_name.name`).\n"},"includeAliases":{"type":"boolean","description":"flag to specify if list of aliases should be included into output.\n"},"includeBrowse":{"type":"boolean","description":"flag to specify if include registered models in the response for which the principal can only access selective metadata for.\n"},"modelInfos":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelModelInfo:getRegisteredModelModelInfo"},"description":"block with information about the model in Unity Catalog:\n"},"providerConfig":{"$ref":"#/types/databricks:index/getRegisteredModelProviderConfig:getRegisteredModelProviderConfig"}},"type":"object","required":["fullName"]},"outputs":{"description":"A collection of values returned by getRegisteredModel.\n","properties":{"fullName":{"description":"The fully-qualified name of the registered model (`catalog_name.schema_name.name`).\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"includeAliases":{"type":"boolean"},"includeBrowse":{"type":"boolean"},"modelInfos":{"description":"block with information about the model in Unity Catalog:\n","items":{"$ref":"#/types/databricks:index/getRegisteredModelModelInfo:getRegisteredModelModelInfo"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getRegisteredModelProviderConfig:getRegisteredModelProviderConfig"}},"required":["fullName","modelInfos","id"],"type":"object"}},"databricks:index/getRegisteredModelVersions:getRegisteredModelVersions":{"description":"This resource allows you to get information about versions of [Model in Unity Catalog](https://docs.databricks.com/en/mlflow/models-in-uc.html).\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from 
\"@pulumi/databricks\";\n\nconst _this = databricks.getRegisteredModelVersions({\n    fullName: \"main.default.my_model\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_registered_model_versions(full_name=\"main.default.my_model\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetRegisteredModelVersions.Invoke(new()\n    {\n        FullName = \"main.default.my_model\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetRegisteredModelVersions(ctx, \u0026databricks.GetRegisteredModelVersionsArgs{\n\t\t\tFullName: \"main.default.my_model\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetRegisteredModelVersionsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getRegisteredModelVersions(GetRegisteredModelVersionsArgs.builder()\n            .fullName(\"main.default.my_model\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getRegisteredModelVersions\n      arguments:\n        fullName: main.default.my_model\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003edata source to retrieve information about a model within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.RegisteredModel \" pulumi-lang-dotnet=\" databricks.RegisteredModel \" pulumi-lang-go=\" RegisteredModel \" pulumi-lang-python=\" RegisteredModel \" pulumi-lang-yaml=\" databricks.RegisteredModel \" pulumi-lang-java=\" databricks.RegisteredModel \"\u003e databricks.RegisteredModel \u003c/span\u003eresource to manage models within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.ModelServing \" pulumi-lang-dotnet=\" databricks.ModelServing \" pulumi-lang-go=\" ModelServing \" pulumi-lang-python=\" ModelServing \" pulumi-lang-yaml=\" databricks.ModelServing \" pulumi-lang-java=\" databricks.ModelServing \"\u003e databricks.ModelServing \u003c/span\u003eto serve this model on a Databricks serving endpoint.\n*\u003cspan pulumi-lang-nodejs=\" databricks.MlflowExperiment \" pulumi-lang-dotnet=\" databricks.MlflowExperiment \" pulumi-lang-go=\" MlflowExperiment \" pulumi-lang-python=\" MlflowExperiment \" pulumi-lang-yaml=\" databricks.MlflowExperiment \" 
pulumi-lang-java=\" databricks.MlflowExperiment \"\u003e databricks.MlflowExperiment \u003c/span\u003eto manage [MLflow experiments](https://docs.databricks.com/data/data-sources/mlflow-experiment.html) in Databricks.\n","inputs":{"description":"A collection of arguments for invoking getRegisteredModelVersions.\n","properties":{"fullName":{"type":"string","description":"The fully-qualified name of the registered model (`catalog_name.schema_name.name`).\n"},"modelVersions":{"type":"array","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersion:getRegisteredModelVersionsModelVersion"},"description":"list of objects describing the model versions. Each object consists of following attributes:\n"},"providerConfig":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsProviderConfig:getRegisteredModelVersionsProviderConfig"}},"type":"object","required":["fullName"]},"outputs":{"description":"A collection of values returned by getRegisteredModelVersions.\n","properties":{"fullName":{"description":"The fully-qualified name of the registered model (`catalog_name.schema_name.name`).\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"modelVersions":{"description":"list of objects describing the model versions. Each object consists of following attributes:\n","items":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsModelVersion:getRegisteredModelVersionsModelVersion"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getRegisteredModelVersionsProviderConfig:getRegisteredModelVersionsProviderConfig"}},"required":["fullName","modelVersions","id"],"type":"object"}},"databricks:index/getRfaAccessRequestDestinations:getRfaAccessRequestDestinations":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get the Request for Access (RFA) access request destinations for a specific securable object.\n\n\n## Example Usage\n\nReferring to RFA access request destinations by securable type and full name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst customerDataSchema = databricks.getRfaAccessRequestDestinations({\n    securableType: \"SCHEMA\",\n    fullName: \"main.customer_data\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncustomer_data_schema = databricks.get_rfa_access_request_destinations(securable_type=\"SCHEMA\",\n    full_name=\"main.customer_data\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var customerDataSchema = Databricks.GetRfaAccessRequestDestinations.Invoke(new()\n    {\n        SecurableType = \"SCHEMA\",\n        FullName = \"main.customer_data\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupRfaAccessRequestDestinations(ctx, \u0026databricks.LookupRfaAccessRequestDestinationsArgs{\n\t\t\tSecurableType: \"SCHEMA\",\n\t\t\tFullName:      \"main.customer_data\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn 
err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetRfaAccessRequestDestinationsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var customerDataSchema = DatabricksFunctions.getRfaAccessRequestDestinations(GetRfaAccessRequestDestinationsArgs.builder()\n            .securableType(\"SCHEMA\")\n            .fullName(\"main.customer_data\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  customerDataSchema:\n    fn::invoke:\n      function: databricks:getRfaAccessRequestDestinations\n      arguments:\n        securableType: SCHEMA\n        fullName: main.customer_data\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getRfaAccessRequestDestinations.\n","properties":{"fullName":{"type":"string","description":"The full name of the securable. Redundant with the name in the securable object, but necessary for Pulumi integration\n"},"providerConfig":{"$ref":"#/types/databricks:index/getRfaAccessRequestDestinationsProviderConfig:getRfaAccessRequestDestinationsProviderConfig","description":"Configure the provider for management through account provider.\n"},"securableType":{"type":"string","description":"The type of the securable. Redundant with the type in the securable object, but necessary for Pulumi integration\n"}},"type":"object","required":["fullName","securableType"]},"outputs":{"description":"A collection of values returned by getRfaAccessRequestDestinations.\n","properties":{"areAnyDestinationsHidden":{"description":"(boolean) - Indicates whether any destinations are hidden from the caller due to a lack of permissions.\nThis value is true if the caller does not have permission to see all destinations\n","type":"boolean"},"destinationSourceSecurable":{"$ref":"#/types/databricks:index/getRfaAccessRequestDestinationsDestinationSourceSecurable:getRfaAccessRequestDestinationsDestinationSourceSecurable","description":"(Securable) - The source securable from which the destinations are inherited. Either the same value as securable (if destination\nis set directly on the securable) or the nearest parent securable with destinations set\n"},"destinations":{"description":"(list of NotificationDestination) - The access request destinations for the securable\n","items":{"$ref":"#/types/databricks:index/getRfaAccessRequestDestinationsDestination:getRfaAccessRequestDestinationsDestination"},"type":"array"},"fullName":{"description":"(string) - Required. 
The full name of the catalog/schema/table.\nOptional if\u003cspan pulumi-lang-nodejs=\" resourceName \" pulumi-lang-dotnet=\" ResourceName \" pulumi-lang-go=\" resourceName \" pulumi-lang-python=\" resource_name \" pulumi-lang-yaml=\" resourceName \" pulumi-lang-java=\" resourceName \"\u003e resource_name \u003c/span\u003eis present\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getRfaAccessRequestDestinationsProviderConfig:getRfaAccessRequestDestinationsProviderConfig"},"securable":{"$ref":"#/types/databricks:index/getRfaAccessRequestDestinationsSecurable:getRfaAccessRequestDestinationsSecurable","description":"(Securable) - The securable for which the access request destinations are being modified or read\n"},"securableType":{"description":"(string) - The type of the securable. Redundant with the type in the securable object, but necessary for Pulumi integration\n","type":"string"}},"required":["areAnyDestinationsHidden","destinationSourceSecurable","destinations","fullName","securable","securableType","id"],"type":"object"}},"databricks:index/getSchema:getSchema":{"description":"Retrieves details about\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003ethat was created by Pulumi or manually.\nA schema can be identified by its two-level (fully qualified) name (in the form of: \u003cspan pulumi-lang-nodejs=\"`catalogName`\" pulumi-lang-dotnet=\"`CatalogName`\" pulumi-lang-go=\"`catalogName`\" pulumi-lang-python=\"`catalog_name`\" pulumi-lang-yaml=\"`catalogName`\" pulumi-lang-java=\"`catalogName`\"\u003e`catalog_name`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schemaName`\" pulumi-lang-dotnet=\"`SchemaName`\" pulumi-lang-go=\"`schemaName`\" pulumi-lang-python=\"`schema_name`\" pulumi-lang-yaml=\"`schemaName`\" pulumi-lang-java=\"`schemaName`\"\u003e`schema_name`\u003c/span\u003e) as input. 
This can be retrieved programmatically using\u003cspan pulumi-lang-nodejs=\" databricks.getSchemas \" pulumi-lang-dotnet=\" databricks.getSchemas \" pulumi-lang-go=\" getSchemas \" pulumi-lang-python=\" get_schemas \" pulumi-lang-yaml=\" databricks.getSchemas \" pulumi-lang-java=\" databricks.getSchemas \"\u003e databricks.getSchemas \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n* Retrieve details of all schemas in a _sandbox_ databricks_catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getSchemas({\n    catalogName: \"sandbox\",\n});\nconst _this = all.then(all =\u003e all.ids.reduce((__obj, __value) =\u003e ({ ...__obj, [__value]: databricks.getSchema({\n    name: __value,\n}) }), {}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_schemas(catalog_name=\"sandbox\")\nthis = {__value: databricks.get_schema(name=__value) for __value in all.ids}\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetSchemas.Invoke(new()\n    {\n        CatalogName = \"sandbox\",\n    });\n\n    var @this = all.Apply(getSchemasResult =\u003e getSchemasResult.Ids.ToDictionary(id =\u003e id, id =\u003e Databricks.GetSchema.Invoke(new()\n    {\n        Name = id,\n    })));\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n* Search for a specific schema by its fully qualified name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getSchema({\n    name: \"catalog.schema\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_schema(name=\"catalog.schema\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetSchema.Invoke(new()\n    {\n        Name = \"catalog.schema\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupSchema(ctx, \u0026databricks.LookupSchemaArgs{\n\t\t\tName: \"catalog.schema\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSchemaArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getSchema(GetSchemaArgs.builder()\n            .name(\"catalog.schema\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getSchema\n      arguments:\n        name: catalog.schema\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used 
in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getSchema.\n","properties":{"id":{"type":"string","description":"ID of this Unity Catalog Schema in form of `\u003ccatalog\u003e.\u003cschema\u003e`.\n"},"name":{"type":"string","description":"a fully qualified name of databricks_schema: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e*\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getSchemaProviderConfig:getSchemaProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"schemaInfo":{"$ref":"#/types/databricks:index/getSchemaSchemaInfo:getSchemaSchemaInfo","description":"`SchemaInfo` object for a Unity Catalog schema. This contains the following attributes:\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getSchema.\n","properties":{"id":{"description":"ID of this Unity Catalog Schema in form of `\u003ccatalog\u003e.\u003cschema\u003e`.\n","type":"string"},"name":{"description":"Name of schema, relative to parent catalog.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getSchemaProviderConfig:getSchemaProviderConfig"},"schemaInfo":{"$ref":"#/types/databricks:index/getSchemaSchemaInfo:getSchemaSchemaInfo","description":"`SchemaInfo` object for a Unity Catalog schema. 
This contains the following attributes:\n"}},"required":["id","name","schemaInfo"],"type":"object"}},"databricks:index/getSchemas:getSchemas":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eids, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nListing all schemas in a _sandbox_ databricks_catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst sandbox = databricks.getSchemas({\n    catalogName: \"sandbox\",\n});\nexport const allSandboxSchemas = sandbox;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nsandbox = databricks.get_schemas(catalog_name=\"sandbox\")\npulumi.export(\"allSandboxSchemas\", sandbox)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var sandbox = Databricks.GetSchemas.Invoke(new()\n    {\n        CatalogName = \"sandbox\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allSandboxSchemas\"] = sandbox,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tsandbox, err := databricks.GetSchemas(ctx, \u0026databricks.GetSchemasArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allSandboxSchemas\", sandbox)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSchemasArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var sandbox = DatabricksFunctions.getSchemas(GetSchemasArgs.builder()\n            .catalogName(\"sandbox\")\n            .build());\n\n        ctx.export(\"allSandboxSchemas\", sandbox);\n    }\n}\n```\n```yaml\nvariables:\n  sandbox:\n    fn::invoke:\n      function: databricks:getSchemas\n      arguments:\n        catalogName: sandbox\noutputs:\n  allSandboxSchemas: ${sandbox}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" 
pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getSchemas.\n","properties":{"catalogName":{"type":"string","description":"Name of databricks_catalog\n","willReplaceOnChanges":true},"ids":{"type":"array","items":{"type":"string"},"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e*\n"},"providerConfig":{"$ref":"#/types/databricks:index/getSchemasProviderConfig:getSchemasProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object","required":["catalogName"]},"outputs":{"description":"A collection of values returned by getSchemas.\n","properties":{"catalogName":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e*\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getSchemasProviderConfig:getSchemasProviderConfig"}},"required":["catalogName","ids","id"],"type":"object"}},"databricks:index/getServicePrincipal:getServicePrincipal":{"description":"Retrieves information about databricks_service_principal.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAdding service principal `11111111-2222-3333-4444-555666777888` to administrative group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst spn = databricks.getServicePrincipal({\n    applicationId: \"11111111-2222-3333-4444-555666777888\",\n});\nconst myMemberA = new databricks.GroupMember(\"my_member_a\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: 
spn.then(spn =\u003e spn.id),\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadmins = databricks.get_group(display_name=\"admins\")\nspn = databricks.get_service_principal(application_id=\"11111111-2222-3333-4444-555666777888\")\nmy_member_a = databricks.GroupMember(\"my_member_a\",\n    group_id=admins.id,\n    member_id=spn.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var spn = Databricks.GetServicePrincipal.Invoke(new()\n    {\n        ApplicationId = \"11111111-2222-3333-4444-555666777888\",\n    });\n\n    var myMemberA = new Databricks.GroupMember(\"my_member_a\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = spn.Apply(getServicePrincipalResult =\u003e getServicePrincipalResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tspn, err := databricks.LookupServicePrincipal(ctx, \u0026databricks.LookupServicePrincipalArgs{\n\t\t\tApplicationId: pulumi.StringRef(\"11111111-2222-3333-4444-555666777888\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"my_member_a\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: pulumi.String(spn.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetServicePrincipalArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        final var spn = DatabricksFunctions.getServicePrincipal(GetServicePrincipalArgs.builder()\n            .applicationId(\"11111111-2222-3333-4444-555666777888\")\n            .build());\n\n        var myMemberA = new GroupMember(\"myMemberA\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(spn.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myMemberA:\n    type: databricks:GroupMember\n    name: my_member_a\n    properties:\n      groupId: ${admins.id}\n      memberId: ${spn.id}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n  spn:\n    fn::invoke:\n      function: databricks:getServicePrincipal\n      arguments:\n       
 applicationId: 11111111-2222-3333-4444-555666777888\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n- End to end workspace management guide.\n-\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access 
control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricksService \" pulumi-lang-dotnet=\" DatabricksService \" pulumi-lang-go=\" databricksService \" pulumi-lang-python=\" databricks_service \" pulumi-lang-yaml=\" databricksService \" pulumi-lang-java=\" databricksService \"\u003e databricks_service \u003c/span\u003eprincipal to manage service principals\n","inputs":{"description":"A collection of arguments for invoking getServicePrincipal.\n","properties":{"aclPrincipalId":{"type":"string","description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n"},"active":{"type":"boolean","description":"Whether service principal is active or not.\n"},"applicationId":{"type":"string","description":"Application ID of the service principal. The service principal must exist before this resource can be retrieved.\n"},"displayName":{"type":"string","description":"Exact display name of the service principal. The service principal must exist before this resource can be retrieved.  In case if there are several service principals with the same name, an error is thrown.\n"},"externalId":{"type":"string","description":"ID of the service principal in an external identity provider.\n"},"home":{"type":"string","description":"Home folder of the service principal, e.g. `/Users/11111111-2222-3333-4444-555666777888`.\n"},"id":{"type":"string","description":"The id of the service principal (SCIM ID).\n"},"providerConfig":{"$ref":"#/types/databricks:index/getServicePrincipalProviderConfig:getServicePrincipalProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"repos":{"type":"string","description":"Repos location of the service principal, e.g. `/Repos/11111111-2222-3333-4444-555666777888`.\n"},"scimId":{"type":"string","description":"Unique SCIM ID for a service principal in the Databricks workspace. The service principal must exist before this resource can be retrieved.\n"},"spId":{"type":"string"}},"type":"object"},"outputs":{"description":"A collection of values returned by getServicePrincipal.\n","properties":{"aclPrincipalId":{"description":"identifier for use in databricks_access_control_rule_set, e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.\n","type":"string"},"active":{"description":"Whether service principal is active or not.\n","type":"boolean"},"applicationId":{"description":"Application ID of the service principal.\n","type":"string"},"displayName":{"description":"Display name of the service principal, e.g. `Foo SPN`.\n","type":"string"},"externalId":{"description":"ID of the service principal in an external identity provider.\n","type":"string"},"home":{"description":"Home folder of the service principal, e.g. `/Users/11111111-2222-3333-4444-555666777888`.\n","type":"string"},"id":{"description":"The id of the service principal (SCIM ID).\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getServicePrincipalProviderConfig:getServicePrincipalProviderConfig"},"repos":{"description":"Repos location of the service principal, e.g. 
`/Repos/11111111-2222-3333-4444-555666777888`.\n","type":"string"},"scimId":{"description":"same as \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e.\n","type":"string"},"spId":{"type":"string"}},"required":["aclPrincipalId","active","applicationId","displayName","externalId","home","id","repos","scimId","spId"],"type":"object"}},"databricks:index/getServicePrincipalFederationPolicies:getServicePrincipalFederationPolicies":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to fetch the list of federation policies for a service principal.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nGetting a list of all service principal federation policies:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getServicePrincipalFederationPolicies({\n    servicePrincipalId: 1234,\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_service_principal_federation_policies(service_principal_id=1234)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetServicePrincipalFederationPolicies.Invoke(new()\n    {\n        ServicePrincipalId = 1234,\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetServicePrincipalFederationPolicies(ctx, \u0026databricks.GetServicePrincipalFederationPoliciesArgs{\n\t\t\tServicePrincipalId: 1234,\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetServicePrincipalFederationPoliciesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getServicePrincipalFederationPolicies(GetServicePrincipalFederationPoliciesArgs.builder()\n            .servicePrincipalId(1234)\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getServicePrincipalFederationPolicies\n      arguments:\n        servicePrincipalId: 1234\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getServicePrincipalFederationPolicies.\n","properties":{"pageSize":{"type":"integer"},"servicePrincipalId":{"type":"integer","description":"The service principal id for the federation policy\n"}},"type":"object","required":["servicePrincipalId"]},"outputs":{"description":"A collection of values returned by 
getServicePrincipalFederationPolicies.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"policies":{"items":{"$ref":"#/types/databricks:index/getServicePrincipalFederationPoliciesPolicy:getServicePrincipalFederationPoliciesPolicy"},"type":"array"},"servicePrincipalId":{"description":"(integer) - The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n","type":"integer"}},"required":["policies","servicePrincipalId","id"],"type":"object"}},"databricks:index/getServicePrincipalFederationPolicy:getServicePrincipalFederationPolicy":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single service principal federation policy.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nReferring to a service principal federation policy by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  myPolicy:\n    fn::invoke:\n      function: databricks:getServicePrincipalFederationPolicy\n      arguments:\n        servicePrincipalId: 1234\n        policyId: my-policy\n        oidcPolicy:\n          issuer: https://myidp.example.com\n          subjectClaim: sub\n          subject: subject-in-token-from-myidp\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getServicePrincipalFederationPolicy.\n","properties":{"policyId":{"type":"string","description":"The ID of the federation policy. Output only\n"},"servicePrincipalId":{"type":"integer","description":"The service principal ID that this federation policy applies to. Output only. Only set for service principal federation policies\n"}},"type":"object","required":["policyId","servicePrincipalId"]},"outputs":{"description":"A collection of values returned by getServicePrincipalFederationPolicy.\n","properties":{"createTime":{"description":"(string) - Creation time of the federation policy\n","type":"string"},"description":{"description":"(string) - Description of the federation policy\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - Resource name for the federation policy. Example values include\n`accounts/\u003caccount-id\u003e/federationPolicies/my-federation-policy` for Account Federation Policies, and\n`accounts/\u003caccount-id\u003e/servicePrincipals/\u003cservice-principal-id\u003e/federationPolicies/my-federation-policy`\nfor Service Principal Federation Policies. Typically an output parameter, which does not need to be\nspecified in create or update requests. If specified in a request, must match the value in the\nrequest URL\n","type":"string"},"oidcPolicy":{"$ref":"#/types/databricks:index/getServicePrincipalFederationPolicyOidcPolicy:getServicePrincipalFederationPolicyOidcPolicy","description":"(OidcFederationPolicy)\n"},"policyId":{"description":"(string) - The ID of the federation policy. Output only\n","type":"string"},"servicePrincipalId":{"description":"(integer) - The service principal ID that this federation policy applies to. Output only. 
Only set for service principal federation policies\n","type":"integer"},"uid":{"description":"(string) - Unique, immutable id of the federation policy\n","type":"string"},"updateTime":{"description":"(string) - Last update time of the federation policy\n","type":"string"}},"required":["createTime","description","name","oidcPolicy","policyId","servicePrincipalId","uid","updateTime","id"],"type":"object"}},"databricks:index/getServicePrincipals:getServicePrincipals":{"description":"Retrieves \u003cspan pulumi-lang-nodejs=\"`applicationIds`\" pulumi-lang-dotnet=\"`ApplicationIds`\" pulumi-lang-go=\"`applicationIds`\" pulumi-lang-python=\"`application_ids`\" pulumi-lang-yaml=\"`applicationIds`\" pulumi-lang-java=\"`applicationIds`\"\u003e`application_ids`\u003c/span\u003e of all\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003ebased on their \u003cspan pulumi-lang-nodejs=\"`displayName`\" pulumi-lang-dotnet=\"`DisplayName`\" pulumi-lang-go=\"`displayName`\" pulumi-lang-python=\"`display_name`\" pulumi-lang-yaml=\"`displayName`\" pulumi-lang-java=\"`displayName`\"\u003e`display_name`\u003c/span\u003e\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAdding all service principals of which display name contains `my-spn` to admin group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\nimport * as std from \"@pulumi/std\";\n\nexport = async () =\u003e {\n    const admins = await databricks.getGroup({\n        displayName: \"admins\",\n    });\n    const spns = await databricks.getServicePrincipals({\n        displayNameContains: \"my-spn\",\n    });\n    const spn = .reduce((__obj, [__key, __value]) =\u003e ({ ...__obj, [__key]: await databricks.getServicePrincipal({\n        applicationId: __value,\n    }) }));\n    const myMemberSpn: databricks.GroupMember[] = [];\n    for (const range of std.toset({\n        input: spns.applicationIds,\n    }).result.map((v, k) =\u003e ({key: k, value: v}))) {\n        myMemberSpn.push(new databricks.GroupMember(`my_member_spn-${range.key}`, {\n            groupId: admins.id,\n            memberId: spn[range.value].spId,\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\nimport pulumi_std as std\n\nadmins = databricks.get_group(display_name=\"admins\")\nspns = databricks.get_service_principals(display_name_contains=\"my-spn\")\nspn = {__key: databricks.get_service_principal(application_id=__value) for __key, __value in std.toset(input=spns.application_ids).result}\nmy_member_spn = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(std.toset(input=spns.application_ids).result)]:\n    my_member_spn.append(databricks.GroupMember(f\"my_member_spn-{range['key']}\",\n        group_id=admins.id,\n        member_id=spn[range[\"value\"]].sp_id))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\nusing Std = Pulumi.Std;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var admins = await Databricks.GetGroup.InvokeAsync(new()\n    {\n      
  DisplayName = \"admins\",\n    });\n\n    var spns = await Databricks.GetServicePrincipals.InvokeAsync(new()\n    {\n        DisplayNameContains = \"my-spn\",\n    });\n\n    var spn = ;\n\n    var myMemberSpn = new List\u003cDatabricks.GroupMember\u003e();\n    foreach (var range in )\n    {\n        myMemberSpn.Add(new Databricks.GroupMember($\"my_member_spn-{range.Key}\", new()\n        {\n            GroupId = admins.Id,\n            MemberId = spn[range.Value].SpId,\n        }));\n    }\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n- End to end workspace management guide.\n-\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember 
\u003c/span\u003eto attach users and groups as group members.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricksService \" pulumi-lang-dotnet=\" DatabricksService \" pulumi-lang-go=\" databricksService \" pulumi-lang-python=\" databricks_service \" pulumi-lang-yaml=\" databricksService \" pulumi-lang-java=\" databricksService \"\u003e databricks_service \u003c/span\u003eprincipal to manage service principals\n","inputs":{"description":"A collection of arguments for invoking getServicePrincipals.\n","properties":{"applicationIds":{"type":"array","items":{"type":"string"},"description":"List of \u003cspan pulumi-lang-nodejs=\"`applicationIds`\" pulumi-lang-dotnet=\"`ApplicationIds`\" pulumi-lang-go=\"`applicationIds`\" pulumi-lang-python=\"`application_ids`\" pulumi-lang-yaml=\"`applicationIds`\" pulumi-lang-java=\"`applicationIds`\"\u003e`application_ids`\u003c/span\u003e of service principals.  Individual service principal can be retrieved using\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003edata source or from \u003cspan pulumi-lang-nodejs=\"`servicePrincipals`\" pulumi-lang-dotnet=\"`ServicePrincipals`\" pulumi-lang-go=\"`servicePrincipals`\" pulumi-lang-python=\"`service_principals`\" pulumi-lang-yaml=\"`servicePrincipals`\" pulumi-lang-java=\"`servicePrincipals`\"\u003e`service_principals`\u003c/span\u003e attribute.\n"},"displayNameContains":{"type":"string","description":"Only return\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003edisplay name that match the given name string\n"},"providerConfig":{"$ref":"#/types/databricks:index/getServicePrincipalsProviderConfig:getServicePrincipalsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"servicePrincipals":{"type":"array","items":{"$ref":"#/types/databricks:index/getServicePrincipalsServicePrincipal:getServicePrincipalsServicePrincipal"},"description":"List of objects describing individual service principals. 
Each object has the following attributes:\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getServicePrincipals.\n","properties":{"applicationIds":{"description":"List of \u003cspan pulumi-lang-nodejs=\"`applicationIds`\" pulumi-lang-dotnet=\"`ApplicationIds`\" pulumi-lang-go=\"`applicationIds`\" pulumi-lang-python=\"`application_ids`\" pulumi-lang-yaml=\"`applicationIds`\" pulumi-lang-java=\"`applicationIds`\"\u003e`application_ids`\u003c/span\u003e of service principals.  Individual service principal can be retrieved using\u003cspan pulumi-lang-nodejs=\" databricks.ServicePrincipal \" pulumi-lang-dotnet=\" databricks.ServicePrincipal \" pulumi-lang-go=\" ServicePrincipal \" pulumi-lang-python=\" ServicePrincipal \" pulumi-lang-yaml=\" databricks.ServicePrincipal \" pulumi-lang-java=\" databricks.ServicePrincipal \"\u003e databricks.ServicePrincipal \u003c/span\u003edata source or from \u003cspan pulumi-lang-nodejs=\"`servicePrincipals`\" pulumi-lang-dotnet=\"`ServicePrincipals`\" pulumi-lang-go=\"`servicePrincipals`\" pulumi-lang-python=\"`service_principals`\" pulumi-lang-yaml=\"`servicePrincipals`\" pulumi-lang-java=\"`servicePrincipals`\"\u003e`service_principals`\u003c/span\u003e attribute.\n","items":{"type":"string"},"type":"array"},"displayNameContains":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getServicePrincipalsProviderConfig:getServicePrincipalsProviderConfig"},"servicePrincipals":{"description":"List of objects describing individual service principals. Each object has the following attributes:\n","items":{"$ref":"#/types/databricks:index/getServicePrincipalsServicePrincipal:getServicePrincipalsServicePrincipal"},"type":"array"}},"required":["applicationIds","displayNameContains","servicePrincipals","id"],"type":"object"}},"databricks:index/getServingEndpoints:getServingEndpoints":{"description":"This resource allows you to get information about [Model Serving](https://docs.databricks.com/machine-learning/model-serving/index.html) endpoints in Databricks.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getServingEndpoints({});\nconst mlServingUsage: databricks.Permissions[] = [];\nfor (const range = {value: 0}; range.value \u003c allDatabricksServingEndpoints.endpoints; range.value++) {\n    mlServingUsage.push(new databricks.Permissions(`ml_serving_usage-${range.value}`, {\n        servingEndpointId: range.value.id,\n        accessControls: [\n            {\n                groupName: \"users\",\n                permissionLevel: \"CAN_VIEW\",\n            },\n            {\n                groupName: auto.displayName,\n                permissionLevel: \"CAN_MANAGE\",\n            },\n            {\n                groupName: eng.displayName,\n                permissionLevel: \"CAN_QUERY\",\n            },\n        ],\n    }));\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_serving_endpoints()\nml_serving_usage = []\nfor range in [{\"value\": i} for i in range(0, all_databricks_serving_endpoints.endpoints)]:\n    ml_serving_usage.append(databricks.Permissions(f\"ml_serving_usage-{range['value']}\",\n        
serving_endpoint_id=range[\"value\"][\"id\"],\n        access_controls=[\n            {\n                \"group_name\": \"users\",\n                \"permission_level\": \"CAN_VIEW\",\n            },\n            {\n                \"group_name\": auto[\"displayName\"],\n                \"permission_level\": \"CAN_MANAGE\",\n            },\n            {\n                \"group_name\": eng[\"displayName\"],\n                \"permission_level\": \"CAN_QUERY\",\n            },\n        ]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetServingEndpoints.Invoke();\n\n    var mlServingUsage = new List\u003cDatabricks.Permissions\u003e();\n    for (var rangeIndex = 0; rangeIndex \u003c allDatabricksServingEndpoints.Endpoints; rangeIndex++)\n    {\n        var range = new { Value = rangeIndex };\n        mlServingUsage.Add(new Databricks.Permissions($\"ml_serving_usage-{range.Value}\", new()\n        {\n            ServingEndpointId = range.Value.Id,\n            AccessControls = new[]\n            {\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = \"users\",\n                    PermissionLevel = \"CAN_VIEW\",\n                },\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = auto.DisplayName,\n                    PermissionLevel = \"CAN_MANAGE\",\n                },\n                new Databricks.Inputs.PermissionsAccessControlArgs\n                {\n                    GroupName = eng.DisplayName,\n                    PermissionLevel = \"CAN_QUERY\",\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetServingEndpoints(ctx, \u0026databricks.GetServingEndpointsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar mlServingUsage []*databricks.Permissions\n\t\tfor index := 0; index \u003c allDatabricksServingEndpoints.Endpoints; index++ {\n\t\t\tkey0 := index\n\t\t\tval0 := index\n\t\t\t__res, err := databricks.NewPermissions(ctx, fmt.Sprintf(\"ml_serving_usage-%v\", key0), \u0026databricks.PermissionsArgs{\n\t\t\t\tServingEndpointId: pulumi.Any(val0),\n\t\t\t\tAccessControls: databricks.PermissionsAccessControlArray{\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       pulumi.String(\"users\"),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_VIEW\"),\n\t\t\t\t\t},\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       pulumi.Any(auto.DisplayName),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_MANAGE\"),\n\t\t\t\t\t},\n\t\t\t\t\t\u0026databricks.PermissionsAccessControlArgs{\n\t\t\t\t\t\tGroupName:       pulumi.Any(eng.DisplayName),\n\t\t\t\t\t\tPermissionLevel: pulumi.String(\"CAN_QUERY\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tmlServingUsage = append(mlServingUsage, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport 
com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetServingEndpointsArgs;\nimport com.pulumi.databricks.Permissions;\nimport com.pulumi.databricks.PermissionsArgs;\nimport com.pulumi.databricks.inputs.PermissionsAccessControlArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getServingEndpoints(GetServingEndpointsArgs.builder()\n            .build());\n\n        for (var i = 0; i \u003c allDatabricksServingEndpoints.endpoints(); i++) {\n            new Permissions(\"mlServingUsage-\" + i, PermissionsArgs.builder()\n                .servingEndpointId(range.value().id())\n                .accessControls(                \n                    PermissionsAccessControlArgs.builder()\n                        .groupName(\"users\")\n                        .permissionLevel(\"CAN_VIEW\")\n                        .build(),\n                    PermissionsAccessControlArgs.builder()\n                        .groupName(auto.displayName())\n                        .permissionLevel(\"CAN_MANAGE\")\n                        .build(),\n                    PermissionsAccessControlArgs.builder()\n                        .groupName(eng.displayName())\n                        .permissionLevel(\"CAN_QUERY\")\n                        .build())\n                .build());\n\n        \n}\n    }\n}\n```\n```yaml\nresources:\n  mlServingUsage:\n    type: databricks:Permissions\n    name: ml_serving_usage\n    properties:\n      servingEndpointId: ${range.value.id}\n      accessControls:\n        - groupName: users\n          permissionLevel: CAN_VIEW\n        - groupName: ${auto.displayName}\n          permissionLevel: CAN_MANAGE\n        - groupName: ${eng.displayName}\n          permissionLevel: CAN_QUERY\n    options: {}\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getServingEndpoints\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003ecan control which groups or individual users can *Manage*, *Query* or *View* individual serving endpoints.\n","inputs":{"description":"A collection of arguments for invoking getServingEndpoints.\n","properties":{"endpoints":{"type":"array","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpoint:getServingEndpointsEndpoint"},"description":"List of objects describing the serving endpoints. Each object consists of following attributes:\n"},"providerConfig":{"$ref":"#/types/databricks:index/getServingEndpointsProviderConfig:getServingEndpointsProviderConfig"}},"type":"object"},"outputs":{"description":"A collection of values returned by getServingEndpoints.\n","properties":{"endpoints":{"description":"List of objects describing the serving endpoints. 
Each object consists of following attributes:\n","items":{"$ref":"#/types/databricks:index/getServingEndpointsEndpoint:getServingEndpointsEndpoint"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getServingEndpointsProviderConfig:getServingEndpointsProviderConfig"}},"required":["endpoints","id"],"type":"object"}},"databricks:index/getShare:getShare":{"description":"Retrieves details about a\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003ethat were created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting details of an existing share in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getShare({\n    name: \"this\",\n});\nexport const createdBy = _this.then(_this =\u003e _this.createdBy);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_share(name=\"this\")\npulumi.export(\"createdBy\", this.created_by)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetShare.Invoke(new()\n    {\n        Name = \"this\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"createdBy\"] = @this.Apply(@this =\u003e @this.Apply(getShareResult =\u003e getShareResult.CreatedBy)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupShare(ctx, \u0026databricks.LookupShareArgs{\n\t\t\tName: pulumi.StringRef(\"this\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"createdBy\", this.CreatedBy)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetShareArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getShare(GetShareArgs.builder()\n            .name(\"this\")\n            .build());\n\n        ctx.export(\"createdBy\", this_.createdBy());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getShare\n      arguments:\n        name: this\noutputs:\n  createdBy: ${this.createdBy}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share 
\" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003eto create Delta Sharing shares.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Recipient \" pulumi-lang-dotnet=\" databricks.Recipient \" pulumi-lang-go=\" Recipient \" pulumi-lang-python=\" Recipient \" pulumi-lang-yaml=\" databricks.Recipient \" pulumi-lang-java=\" databricks.Recipient \"\u003e databricks.Recipient \u003c/span\u003eto create Delta Sharing recipients.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage Delta Sharing permissions.\n","inputs":{"description":"A collection of arguments for invoking getShare.\n","properties":{"comment":{"type":"string","description":"Description about the object.\n"},"name":{"type":"string","description":"The name of the share\n"},"objects":{"type":"array","items":{"$ref":"#/types/databricks:index/getShareObject:getShareObject"},"description":"arrays containing details of each object in the share.\n"},"owner":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getShareProviderConfig:getShareProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n"},"storageRoot":{"type":"string"}},"type":"object"},"outputs":{"description":"A collection of values returned by getShare.\n","properties":{"comment":{"description":"Description about the object.\n","type":"string"},"createdAt":{"description":"Time when the share was created.\n","type":"integer"},"createdBy":{"description":"The principal that created the share.\n","type":"string"},"effectiveOwner":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"Full name of the object being shared.\n","type":"string"},"objects":{"description":"arrays containing details of each object in the share.\n","items":{"$ref":"#/types/databricks:index/getShareObject:getShareObject"},"type":"array"},"owner":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getShareProviderConfig:getShareProviderConfig"},"storageLocation":{"type":"string"},"storageRoot":{"type":"string"},"updatedAt":{"type":"integer"},"updatedBy":{"type":"string"}},"required":["createdAt","createdBy","effectiveOwner","storageLocation","updatedAt","updatedBy","id"],"type":"object"}},"databricks:index/getShares:getShares":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003ename, that were created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting all existing shares in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getShares({});\nexport const shareName = _this.then(_this =\u003e _this.shares);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = 
databricks.get_shares()\npulumi.export(\"shareName\", this.shares)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetShares.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"shareName\"] = @this.Apply(@this =\u003e @this.Apply(getSharesResult =\u003e getSharesResult.Shares)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetShares(ctx, \u0026databricks.GetSharesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"shareName\", this.Shares)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSharesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this = DatabricksFunctions.getShares(GetSharesArgs.builder()\n            .build());\n\n        ctx.export(\"shareName\", this_.shares());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getShares\n      arguments: {}\noutputs:\n  shareName: ${this.shares}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003eto create Delta Sharing shares.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Recipient \" pulumi-lang-dotnet=\" databricks.Recipient \" pulumi-lang-go=\" Recipient \" pulumi-lang-python=\" Recipient \" pulumi-lang-yaml=\" databricks.Recipient \" pulumi-lang-java=\" databricks.Recipient \"\u003e databricks.Recipient \u003c/span\u003eto create Delta Sharing recipients.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage Delta Sharing permissions.\n","inputs":{"description":"A collection of arguments for invoking getShares.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getSharesProviderConfig:getSharesProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"shares":{"type":"array","items":{"type":"string"},"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003enames.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getShares.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getSharesProviderConfig:getSharesProviderConfig"},"shares":{"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.Share \" pulumi-lang-dotnet=\" databricks.Share \" pulumi-lang-go=\" Share \" pulumi-lang-python=\" Share \" pulumi-lang-yaml=\" databricks.Share \" pulumi-lang-java=\" databricks.Share \"\u003e databricks.Share \u003c/span\u003enames.\n","items":{"type":"string"},"type":"array"}},"required":["shares","id"],"type":"object"}},"databricks:index/getSparkVersion:getSparkVersion":{"description":"Gets [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e parameter in\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand other resources that fits search criteria, like specific Spark or Scala version, ML or Genomics runtime, etc., similar to executing `databricks clusters spark-versions`, and filters it to return the latest version that matches criteria. Often used along\u003cspan pulumi-lang-nodejs=\" databricks.getNodeType \" pulumi-lang-dotnet=\" databricks.getNodeType \" pulumi-lang-go=\" getNodeType \" pulumi-lang-python=\" get_node_type \" pulumi-lang-yaml=\" databricks.getNodeType \" pulumi-lang-java=\" databricks.getNodeType \"\u003e databricks.getNodeType \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n\u003e This is experimental functionality, which aims to simplify things. In case of wrong parameters given (e.g. together `ml = true` and `genomics = true`, or something like), data source will throw an error.  
Similarly, if search returns multiple results, and `latest = false`, data source will throw an error.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst withGpu = databricks.getNodeType({\n    localDisk: true,\n    minCores: 16,\n    gbPerCore: 1,\n    minGpus: 1,\n});\nconst gpuMl = databricks.getSparkVersion({\n    gpu: true,\n    ml: true,\n});\nconst research = new databricks.Cluster(\"research\", {\n    clusterName: \"Research Cluster\",\n    sparkVersion: gpuMl.then(gpuMl =\u003e gpuMl.id),\n    nodeTypeId: withGpu.then(withGpu =\u003e withGpu.id),\n    autoterminationMinutes: 20,\n    autoscale: {\n        minWorkers: 1,\n        maxWorkers: 50,\n    },\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nwith_gpu = databricks.get_node_type(local_disk=True,\n    min_cores=16,\n    gb_per_core=1,\n    min_gpus=1)\ngpu_ml = databricks.get_spark_version(gpu=True,\n    ml=True)\nresearch = databricks.Cluster(\"research\",\n    cluster_name=\"Research Cluster\",\n    spark_version=gpu_ml.id,\n    node_type_id=with_gpu.id,\n    autotermination_minutes=20,\n    autoscale={\n        \"min_workers\": 1,\n        \"max_workers\": 50,\n    })\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var withGpu = Databricks.GetNodeType.Invoke(new()\n    {\n        LocalDisk = true,\n        MinCores = 16,\n        GbPerCore = 1,\n        MinGpus = 1,\n    });\n\n    var gpuMl = Databricks.GetSparkVersion.Invoke(new()\n    {\n        Gpu = true,\n        Ml = true,\n    });\n\n    var research = new Databricks.Cluster(\"research\", new()\n    {\n        ClusterName = \"Research Cluster\",\n        SparkVersion = gpuMl.Apply(getSparkVersionResult =\u003e getSparkVersionResult.Id),\n        NodeTypeId = withGpu.Apply(getNodeTypeResult =\u003e getNodeTypeResult.Id),\n        AutoterminationMinutes = 20,\n        Autoscale = new Databricks.Inputs.ClusterAutoscaleArgs\n        {\n            MinWorkers = 1,\n            MaxWorkers = 50,\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\twithGpu, err := databricks.GetNodeType(ctx, \u0026databricks.GetNodeTypeArgs{\n\t\t\tLocalDisk: pulumi.BoolRef(true),\n\t\t\tMinCores:  pulumi.IntRef(16),\n\t\t\tGbPerCore: pulumi.IntRef(1),\n\t\t\tMinGpus:   pulumi.IntRef(1),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tgpuMl, err := databricks.GetSparkVersion(ctx, \u0026databricks.GetSparkVersionArgs{\n\t\t\tGpu: pulumi.BoolRef(true),\n\t\t\tMl:  pulumi.BoolRef(true),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewCluster(ctx, \"research\", \u0026databricks.ClusterArgs{\n\t\t\tClusterName:            pulumi.String(\"Research Cluster\"),\n\t\t\tSparkVersion:           pulumi.String(gpuMl.Id),\n\t\t\tNodeTypeId:             pulumi.String(withGpu.Id),\n\t\t\tAutoterminationMinutes: pulumi.Int(20),\n\t\t\tAutoscale: \u0026databricks.ClusterAutoscaleArgs{\n\t\t\t\tMinWorkers: pulumi.Int(1),\n\t\t\t\tMaxWorkers: pulumi.Int(50),\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage 
generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetNodeTypeArgs;\nimport com.pulumi.databricks.inputs.GetSparkVersionArgs;\nimport com.pulumi.databricks.Cluster;\nimport com.pulumi.databricks.ClusterArgs;\nimport com.pulumi.databricks.inputs.ClusterAutoscaleArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var withGpu = DatabricksFunctions.getNodeType(GetNodeTypeArgs.builder()\n            .localDisk(true)\n            .minCores(16)\n            .gbPerCore(1)\n            .minGpus(1)\n            .build());\n\n        final var gpuMl = DatabricksFunctions.getSparkVersion(GetSparkVersionArgs.builder()\n            .gpu(true)\n            .ml(true)\n            .build());\n\n        var research = new Cluster(\"research\", ClusterArgs.builder()\n            .clusterName(\"Research Cluster\")\n            .sparkVersion(gpuMl.id())\n            .nodeTypeId(withGpu.id())\n            .autoterminationMinutes(20)\n            .autoscale(ClusterAutoscaleArgs.builder()\n                .minWorkers(1)\n                .maxWorkers(50)\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  research:\n    type: databricks:Cluster\n    properties:\n      clusterName: Research Cluster\n      sparkVersion: ${gpuMl.id}\n      nodeTypeId: ${withGpu.id}\n      autoterminationMinutes: 20\n      autoscale:\n        minWorkers: 1\n        maxWorkers: 50\nvariables:\n  withGpu:\n    fn::invoke:\n      function: databricks:getNodeType\n      arguments:\n        localDisk: true\n        minCores: 16\n        gbPerCore: 1\n        minGpus: 1\n  gpuMl:\n    fn::invoke:\n      function: databricks:getSparkVersion\n      arguments:\n        gpu: true\n        ml: true\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eto create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.ClusterPolicy \" pulumi-lang-dotnet=\" databricks.ClusterPolicy \" pulumi-lang-go=\" ClusterPolicy \" pulumi-lang-python=\" ClusterPolicy \" pulumi-lang-yaml=\" databricks.ClusterPolicy \" pulumi-lang-java=\" databricks.ClusterPolicy \"\u003e databricks.ClusterPolicy \u003c/span\u003eto create a\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003epolicy, which limits the ability to create clusters based on a set of rules.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstancePool \" pulumi-lang-dotnet=\" databricks.InstancePool \" pulumi-lang-go=\" InstancePool \" pulumi-lang-python=\" 
InstancePool \" pulumi-lang-yaml=\" databricks.InstancePool \" pulumi-lang-java=\" databricks.InstancePool \"\u003e databricks.InstancePool \u003c/span\u003eto manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce cluster start and auto-scaling times by maintaining a set of idle, ready-to-use instances.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Job \" pulumi-lang-dotnet=\" databricks.Job \" pulumi-lang-go=\" Job \" pulumi-lang-python=\" Job \" pulumi-lang-yaml=\" databricks.Job \" pulumi-lang-java=\" databricks.Job \"\u003e databricks.Job \u003c/span\u003eto manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a databricks_cluster.\n","inputs":{"description":"A collection of arguments for invoking getSparkVersion.\n","properties":{"beta":{"type":"boolean","description":"if we should limit the search only to runtimes that are in Beta stage. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"genomics":{"type":"boolean","description":"if we should limit the search only to Genomics (HLS) runtimes. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"gpu":{"type":"boolean","description":"if we should limit the search only to runtimes that support GPUs. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"graviton":{"type":"boolean","description":"if we should limit the search only to runtimes supporting AWS Graviton CPUs. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. _Deprecated with DBR 14.0 release. DBR version compiled for Graviton will be automatically installed when nodes with Graviton CPUs are specified in the cluster configuration._\n","deprecationMessage":"Not required anymore - it's automatically enabled on the Graviton-based node types","willReplaceOnChanges":true},"id":{"type":"string","description":"Databricks Runtime version, that can be used as \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e field in databricks_job, databricks_cluster, or databricks_instance_pool.\n"},"latest":{"type":"boolean","description":"if we should return only the latest version if there is more than one result.  Default to \u003cspan pulumi-lang-nodejs=\"`true`\" pulumi-lang-dotnet=\"`True`\" pulumi-lang-go=\"`true`\" pulumi-lang-python=\"`true`\" pulumi-lang-yaml=\"`true`\" pulumi-lang-java=\"`true`\"\u003e`true`\u003c/span\u003e. 
If set to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e and multiple versions are matching, throws an error.\n","willReplaceOnChanges":true},"longTermSupport":{"type":"boolean","description":"if we should limit the search only to LTS (long term support) \u0026 ESR (extended support) versions. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"ml":{"type":"boolean","description":"if we should limit the search only to ML runtimes. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e.\n","willReplaceOnChanges":true},"photon":{"type":"boolean","description":"if we should limit the search only to Photon runtimes. Default to \u003cspan pulumi-lang-nodejs=\"`false`\" pulumi-lang-dotnet=\"`False`\" pulumi-lang-go=\"`false`\" pulumi-lang-python=\"`false`\" pulumi-lang-yaml=\"`false`\" pulumi-lang-java=\"`false`\"\u003e`false`\u003c/span\u003e. *Deprecated with DBR 14.0 release. Specify `runtime_engine=\\\"PHOTON\\\"` in the cluster configuration instead!*\n","deprecationMessage":"Specify runtime_engine=\"PHOTON\" in the cluster configuration","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getSparkVersionProviderConfig:getSparkVersionProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"scala":{"type":"string","description":"if we should limit the search only to runtimes that are based on specific Scala version. Default to `2.1` to select either `2.12` or `2.13` depending on the DBR version (for DBR that has both `2.12` and `2.13` flavors, `2.12` is returned by default).\n","willReplaceOnChanges":true},"sparkVersion":{"type":"string","description":"if we should limit the search only to runtimes that are based on specific Spark version. Default to empty string.  
It could be specified as \u003cspan pulumi-lang-nodejs=\"`3`\" pulumi-lang-dotnet=\"`3`\" pulumi-lang-go=\"`3`\" pulumi-lang-python=\"`3`\" pulumi-lang-yaml=\"`3`\" pulumi-lang-java=\"`3`\"\u003e`3`\u003c/span\u003e, or `3.0`, or full version, like, `3.0.1`.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getSparkVersion.\n","properties":{"beta":{"type":"boolean"},"genomics":{"type":"boolean"},"gpu":{"type":"boolean"},"graviton":{"deprecationMessage":"Not required anymore - it's automatically enabled on the Graviton-based node types","type":"boolean"},"id":{"description":"Databricks Runtime version, that can be used as \u003cspan pulumi-lang-nodejs=\"`sparkVersion`\" pulumi-lang-dotnet=\"`SparkVersion`\" pulumi-lang-go=\"`sparkVersion`\" pulumi-lang-python=\"`spark_version`\" pulumi-lang-yaml=\"`sparkVersion`\" pulumi-lang-java=\"`sparkVersion`\"\u003e`spark_version`\u003c/span\u003e field in databricks_job, databricks_cluster, or databricks_instance_pool.\n","type":"string"},"latest":{"type":"boolean"},"longTermSupport":{"type":"boolean"},"ml":{"type":"boolean"},"photon":{"deprecationMessage":"Specify runtime_engine=\"PHOTON\" in the cluster configuration","type":"boolean"},"providerConfig":{"$ref":"#/types/databricks:index/getSparkVersionProviderConfig:getSparkVersionProviderConfig"},"scala":{"type":"string"},"sparkVersion":{"type":"string"}},"required":["id"],"type":"object"}},"databricks:index/getSqlWarehouse:getSqlWarehouse":{"description":"Retrieves information about a\u003cspan pulumi-lang-nodejs=\" databricks.getSqlWarehouse \" pulumi-lang-dotnet=\" databricks.getSqlWarehouse \" pulumi-lang-go=\" getSqlWarehouse \" pulumi-lang-python=\" get_sql_warehouse \" pulumi-lang-yaml=\" databricks.getSqlWarehouse \" pulumi-lang-java=\" databricks.getSqlWarehouse \"\u003e databricks.getSqlWarehouse \u003c/span\u003eusing its id. 
This could be retrieved programmatically using\u003cspan pulumi-lang-nodejs=\" databricks.getSqlWarehouses \" pulumi-lang-dotnet=\" databricks.getSqlWarehouses \" pulumi-lang-go=\" getSqlWarehouses \" pulumi-lang-python=\" get_sql_warehouses \" pulumi-lang-yaml=\" databricks.getSqlWarehouses \" pulumi-lang-java=\" databricks.getSqlWarehouses \"\u003e databricks.getSqlWarehouses \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n* Retrieve attributes of each SQL warehouses in a workspace:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getSqlWarehouses({});\nconst _this = all.then(all =\u003e .reduce((__obj, [__key, __value]) =\u003e ({ ...__obj, [__key]: databricks.getSqlWarehouse({\n    id: __value,\n}) })));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_sql_warehouses()\nthis = {__key: databricks.get_sql_warehouse(id=__value) for __key, __value in all.ids}\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetSqlWarehouses.Invoke();\n\n    var @this = ;\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n* Search for a specific SQL Warehouse by name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getSqlWarehouse({\n    name: \"Starter Warehouse\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_sql_warehouse(name=\"Starter Warehouse\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetSqlWarehouse.Invoke(new()\n    {\n        Name = \"Starter Warehouse\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetSqlWarehouse(ctx, \u0026databricks.GetSqlWarehouseArgs{\n\t\t\tName: pulumi.StringRef(\"Starter Warehouse\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSqlWarehouseArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getSqlWarehouse(GetSqlWarehouseArgs.builder()\n            .name(\"Starter Warehouse\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getSqlWarehouse\n      arguments:\n        name: Starter Warehouse\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related resources\n\nThe following 
resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.getSqlWarehouse \" pulumi-lang-dotnet=\" databricks.getSqlWarehouse \" pulumi-lang-go=\" getSqlWarehouse \" pulumi-lang-python=\" get_sql_warehouse \" pulumi-lang-yaml=\" databricks.getSqlWarehouse \" pulumi-lang-java=\" databricks.getSqlWarehouse \"\u003e databricks.getSqlWarehouse \u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getSqlWarehouse.\n","properties":{"autoStopMins":{"type":"integer","description":"Time in minutes until an idle SQL warehouse terminates all clusters and stops.\n"},"channel":{"$ref":"#/types/databricks:index/getSqlWarehouseChannel:getSqlWarehouseChannel","description":"block, consisting of following fields:\n"},"clusterSize":{"type":"string","description":"The size of the clusters allocated to the warehouse: \"2X-Small\", \"X-Small\", \"Small\", \"Medium\", \"Large\", \"X-Large\", \"2X-Large\", \"3X-Large\", \"4X-Large\", \"5X-Large\".\n"},"creatorName":{"type":"string","description":"The username of the user who created the endpoint.\n"},"dataSourceId":{"type":"string","description":"(Deprecated, will be removed) ID of the data source for this warehouse. 
This is used to bind an Databricks SQL query to an warehouse.\n"},"enablePhoton":{"type":"boolean","description":"Whether [Photon](https://databricks.com/product/delta-engine) is enabled.\n"},"enableServerlessCompute":{"type":"boolean","description":"Whether this SQL warehouse is a serverless SQL warehouse.\n"},"health":{"$ref":"#/types/databricks:index/getSqlWarehouseHealth:getSqlWarehouseHealth","description":"Health status of the endpoint.\n"},"id":{"type":"string","description":"The ID of the SQL warehouse.\n"},"instanceProfileArn":{"type":"string"},"jdbcUrl":{"type":"string","description":"JDBC connection string.\n"},"maxNumClusters":{"type":"integer","description":"Maximum number of clusters available when a SQL warehouse is running.\n"},"minNumClusters":{"type":"integer","description":"Minimum number of clusters available when a SQL warehouse is running.\n"},"name":{"type":"string","description":"Name of the SQL warehouse to search (case-sensitive).\n"},"numActiveSessions":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"numClusters":{"type":"integer","description":"The current number of clusters used by the endpoint.\n"},"odbcParams":{"$ref":"#/types/databricks:index/getSqlWarehouseOdbcParams:getSqlWarehouseOdbcParams","description":"ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getSqlWarehouseProviderConfig:getSqlWarehouseProviderConfig","willReplaceOnChanges":true},"spotInstancePolicy":{"type":"string","description":"The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`.\n"},"state":{"type":"string","description":"The current state of the endpoint.\n"},"tags":{"$ref":"#/types/databricks:index/getSqlWarehouseTags:getSqlWarehouseTags","description":"tags used for SQL warehouse resources.\n"},"warehouseType":{"type":"string","description":"SQL warehouse type. See [documentation](https://docs.databricks.com/sql/index.html#warehouse-types).\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getSqlWarehouse.\n","properties":{"autoStopMins":{"description":"Time in minutes until an idle SQL warehouse terminates all clusters and stops.\n","type":"integer"},"channel":{"$ref":"#/types/databricks:index/getSqlWarehouseChannel:getSqlWarehouseChannel","description":"block, consisting of following fields:\n"},"clusterSize":{"description":"The size of the clusters allocated to the warehouse: \"2X-Small\", \"X-Small\", \"Small\", \"Medium\", \"Large\", \"X-Large\", \"2X-Large\", \"3X-Large\", \"4X-Large\", \"5X-Large\".\n","type":"string"},"creatorName":{"description":"The username of the user who created the endpoint.\n","type":"string"},"dataSourceId":{"description":"(Deprecated, will be removed) ID of the data source for this warehouse. 
This is used to bind a Databricks SQL query to a warehouse.\n","type":"string"},"enablePhoton":{"description":"Whether [Photon](https://databricks.com/product/delta-engine) is enabled.\n","type":"boolean"},"enableServerlessCompute":{"description":"Whether this SQL warehouse is a serverless SQL warehouse.\n","type":"boolean"},"health":{"$ref":"#/types/databricks:index/getSqlWarehouseHealth:getSqlWarehouseHealth","description":"Health status of the endpoint.\n"},"id":{"description":"The ID of the SQL warehouse.\n","type":"string"},"instanceProfileArn":{"type":"string"},"jdbcUrl":{"description":"JDBC connection string.\n","type":"string"},"maxNumClusters":{"description":"Maximum number of clusters available when a SQL warehouse is running.\n","type":"integer"},"minNumClusters":{"description":"Minimum number of clusters available when a SQL warehouse is running.\n","type":"integer"},"name":{"description":"Name of the SQL warehouse.\n","type":"string"},"numActiveSessions":{"description":"The current number of clusters used by the endpoint.\n","type":"integer"},"numClusters":{"description":"The current number of clusters used by the endpoint.\n","type":"integer"},"odbcParams":{"$ref":"#/types/databricks:index/getSqlWarehouseOdbcParams:getSqlWarehouseOdbcParams","description":"ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getSqlWarehouseProviderConfig:getSqlWarehouseProviderConfig"},"spotInstancePolicy":{"description":"The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`.\n","type":"string"},"state":{"description":"The current state of the endpoint.\n","type":"string"},"tags":{"$ref":"#/types/databricks:index/getSqlWarehouseTags:getSqlWarehouseTags","description":"tags used for SQL warehouse resources.\n"},"warehouseType":{"description":"SQL warehouse type. 
See [documentation](https://docs.databricks.com/sql/index.html#warehouse-types).\n","type":"string"}},"required":["autoStopMins","channel","clusterSize","creatorName","dataSourceId","enablePhoton","enableServerlessCompute","health","id","instanceProfileArn","jdbcUrl","maxNumClusters","minNumClusters","name","numActiveSessions","numClusters","odbcParams","spotInstancePolicy","state","tags","warehouseType"],"type":"object"}},"databricks:index/getSqlWarehouses:getSqlWarehouses":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eids, that were created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nRetrieve IDs for all SQL warehouses:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getSqlWarehouses({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_sql_warehouses()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetSqlWarehouses.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetSqlWarehouses(ctx, \u0026databricks.GetSqlWarehousesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSqlWarehousesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getSqlWarehouses(GetSqlWarehousesArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getSqlWarehouses\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\nRetrieve IDs for all SQL warehouses having \"Shared\" in the warehouse name:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst allShared = databricks.getSqlWarehouses({\n    warehouseNameContains: \"shared\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall_shared = databricks.get_sql_warehouses(warehouse_name_contains=\"shared\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var allShared = Databricks.GetSqlWarehouses.Invoke(new()\n    {\n        WarehouseNameContains = 
\"shared\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetSqlWarehouses(ctx, \u0026databricks.GetSqlWarehousesArgs{\n\t\t\tWarehouseNameContains: pulumi.StringRef(\"shared\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetSqlWarehousesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var allShared = DatabricksFunctions.getSqlWarehouses(GetSqlWarehousesArgs.builder()\n            .warehouseNameContains(\"shared\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  allShared:\n    fn::invoke:\n      function: databricks:getSqlWarehouses\n      arguments:\n        warehouseNameContains: shared\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are often used in the same context:\n\n* End to end workspace management guide.\n*\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003eto manage AWS EC2 instance profiles that users can launch\u003cspan pulumi-lang-nodejs=\" databricks.Cluster \" pulumi-lang-dotnet=\" databricks.Cluster \" pulumi-lang-go=\" Cluster \" pulumi-lang-python=\" Cluster \" pulumi-lang-yaml=\" databricks.Cluster \" pulumi-lang-java=\" databricks.Cluster \"\u003e databricks.Cluster \u003c/span\u003eand access data, like databricks_mount.\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlDashboard \" pulumi-lang-dotnet=\" databricks.SqlDashboard \" pulumi-lang-go=\" SqlDashboard \" pulumi-lang-python=\" SqlDashboard \" pulumi-lang-yaml=\" databricks.SqlDashboard \" pulumi-lang-java=\" databricks.SqlDashboard \"\u003e databricks.SqlDashboard \u003c/span\u003eto manage Databricks SQL [Dashboards](https://docs.databricks.com/sql/user/dashboards/index.html).\n*\u003cspan pulumi-lang-nodejs=\" databricks.SqlGlobalConfig \" pulumi-lang-dotnet=\" databricks.SqlGlobalConfig \" pulumi-lang-go=\" SqlGlobalConfig \" pulumi-lang-python=\" SqlGlobalConfig \" pulumi-lang-yaml=\" databricks.SqlGlobalConfig \" pulumi-lang-java=\" databricks.SqlGlobalConfig \"\u003e databricks.SqlGlobalConfig \u003c/span\u003eto configure the security policy, databricks_instance_profile, and [data access properties](https://docs.databricks.com/sql/admin/data-access-configuration.html) for all\u003cspan pulumi-lang-nodejs=\" databricks.getSqlWarehouse \" pulumi-lang-dotnet=\" databricks.getSqlWarehouse \" pulumi-lang-go=\" getSqlWarehouse \" pulumi-lang-python=\" get_sql_warehouse \" pulumi-lang-yaml=\" databricks.getSqlWarehouse \" pulumi-lang-java=\" databricks.getSqlWarehouse \"\u003e databricks.getSqlWarehouse 
\u003c/span\u003eof workspace.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grants \" pulumi-lang-dotnet=\" databricks.Grants \" pulumi-lang-go=\" Grants \" pulumi-lang-python=\" Grants \" pulumi-lang-yaml=\" databricks.Grants \" pulumi-lang-java=\" databricks.Grants \"\u003e databricks.Grants \u003c/span\u003eto manage data access in Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getSqlWarehouses.\n","properties":{"ids":{"type":"array","items":{"type":"string"},"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eids\n"},"providerConfig":{"$ref":"#/types/databricks:index/getSqlWarehousesProviderConfig:getSqlWarehousesProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"warehouseNameContains":{"type":"string","description":"Only return\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eids that match the given name string.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getSqlWarehouses.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"list of\u003cspan pulumi-lang-nodejs=\" databricks.SqlEndpoint \" pulumi-lang-dotnet=\" databricks.SqlEndpoint \" pulumi-lang-go=\" SqlEndpoint \" pulumi-lang-python=\" SqlEndpoint \" pulumi-lang-yaml=\" databricks.SqlEndpoint \" pulumi-lang-java=\" databricks.SqlEndpoint \"\u003e databricks.SqlEndpoint \u003c/span\u003eids\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getSqlWarehousesProviderConfig:getSqlWarehousesProviderConfig"},"warehouseNameContains":{"type":"string"}},"required":["ids","id"],"type":"object"}},"databricks:index/getStorageCredential:getStorageCredential":{"description":"Retrieves details about a\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003ethat were created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGetting details of an existing storage credential in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getStorageCredential({\n    name: \"this\",\n});\nexport const createdBy = _this.then(_this =\u003e _this.storageCredentialInfo?.createdBy);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_storage_credential(name=\"this\")\npulumi.export(\"createdBy\", 
this.storage_credential_info.created_by)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetStorageCredential.Invoke(new()\n    {\n        Name = \"this\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"createdBy\"] = @this.Apply(@this =\u003e @this.Apply(getStorageCredentialResult =\u003e getStorageCredentialResult.StorageCredentialInfo?.CreatedBy)),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.LookupStorageCredential(ctx, \u0026databricks.LookupStorageCredentialArgs{\n\t\t\tName: \"this\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"createdBy\", this.StorageCredentialInfo.CreatedBy)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetStorageCredentialArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getStorageCredential(GetStorageCredentialArgs.builder()\n            .name(\"this\")\n            .build());\n\n        ctx.export(\"createdBy\", this_.storageCredentialInfo().createdBy());\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getStorageCredential\n      arguments:\n        name: this\noutputs:\n  createdBy: ${this.storageCredentialInfo.createdBy}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.getStorageCredentials \" pulumi-lang-dotnet=\" databricks.getStorageCredentials \" pulumi-lang-go=\" getStorageCredentials \" pulumi-lang-python=\" get_storage_credentials \" pulumi-lang-yaml=\" databricks.getStorageCredentials \" pulumi-lang-java=\" databricks.getStorageCredentials \"\u003e databricks.getStorageCredentials \u003c/span\u003eto get names of all credentials\n*\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto manage Storage Credentials within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getStorageCredential.\n","properties":{"id":{"type":"string","description":"Unique ID of storage credential.\n"},"name":{"type":"string","description":"The name of the storage credential\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getStorageCredentialProviderConfig:getStorageCredentialProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"storageCredentialInfo":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfo:getStorageCredentialStorageCredentialInfo","description":"array of objects with information about storage credential.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getStorageCredential.\n","properties":{"id":{"description":"Unique ID of storage credential.\n","type":"string"},"name":{"type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getStorageCredentialProviderConfig:getStorageCredentialProviderConfig"},"storageCredentialInfo":{"$ref":"#/types/databricks:index/getStorageCredentialStorageCredentialInfo:getStorageCredentialStorageCredentialInfo","description":"array of objects with information about storage credential.\n"}},"required":["id","name","storageCredentialInfo"],"type":"object"}},"databricks:index/getStorageCredentials:getStorageCredentials":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eobjects, that were created by Pulumi or manually, so that special handling could be applied.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nList all storage credentials in the metastore\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getStorageCredentials({});\nexport const allStorageCredentials = all.then(all =\u003e all.names);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_storage_credentials()\npulumi.export(\"allStorageCredentials\", all.names)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetStorageCredentials.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allStorageCredentials\"] = all.Apply(getStorageCredentialsResult =\u003e getStorageCredentialsResult.Names),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetStorageCredentials(ctx, \u0026databricks.GetStorageCredentialsArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allStorageCredentials\", all.Names)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetStorageCredentialsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = 
DatabricksFunctions.getStorageCredentials(GetStorageCredentialsArgs.builder()\n            .build());\n\n        ctx.export(\"allStorageCredentials\", all.names());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getStorageCredentials\n      arguments: {}\noutputs:\n  allStorageCredentials: ${all.names}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto get information about a single credential\n*\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003eto manage Storage Credentials within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getStorageCredentials.\n","properties":{"names":{"type":"array","items":{"type":"string"},"description":"List of names of\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003ein the metastore\n"},"providerConfig":{"$ref":"#/types/databricks:index/getStorageCredentialsProviderConfig:getStorageCredentialsProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getStorageCredentials.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"names":{"description":"List of names of\u003cspan pulumi-lang-nodejs=\" databricks.StorageCredential \" pulumi-lang-dotnet=\" databricks.StorageCredential \" pulumi-lang-go=\" StorageCredential \" pulumi-lang-python=\" StorageCredential \" pulumi-lang-yaml=\" databricks.StorageCredential \" pulumi-lang-java=\" databricks.StorageCredential \"\u003e databricks.StorageCredential \u003c/span\u003ein the metastore\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getStorageCredentialsProviderConfig:getStorageCredentialsProviderConfig"}},"required":["names","id"],"type":"object"}},"databricks:index/getTable:getTable":{"description":"Retrieves details of a specific table in Unity Catalog, that were created by Pulumi or manually. 
Use\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003eto retrieve multiple tables in Unity Catalog\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nRead  on a specific table `main.certified.fct_transactions`:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst fctTransactions = databricks.getTable({\n    name: \"main.certified.fct_transactions\",\n});\nconst things = new databricks.Grants(\"things\", {\n    table: fctTransactions.then(fctTransactions =\u003e fctTransactions.name),\n    grants: [{\n        principal: \"sensitive\",\n        privileges: [\n            \"SELECT\",\n            \"MODIFY\",\n        ],\n    }],\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nfct_transactions = databricks.get_table(name=\"main.certified.fct_transactions\")\nthings = databricks.Grants(\"things\",\n    table=fct_transactions.name,\n    grants=[{\n        \"principal\": \"sensitive\",\n        \"privileges\": [\n            \"SELECT\",\n            \"MODIFY\",\n        ],\n    }])\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var fctTransactions = Databricks.GetTable.Invoke(new()\n    {\n        Name = \"main.certified.fct_transactions\",\n    });\n\n    var things = new Databricks.Grants(\"things\", new()\n    {\n        Table = fctTransactions.Apply(getTableResult =\u003e getTableResult.Name),\n        GrantDetails = new[]\n        {\n            new Databricks.Inputs.GrantsGrantArgs\n            {\n                Principal = \"sensitive\",\n                Privileges = new[]\n                {\n                    \"SELECT\",\n                    \"MODIFY\",\n                },\n            },\n        },\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tfctTransactions, err := databricks.LookupTable(ctx, \u0026databricks.LookupTableArgs{\n\t\t\tName: \"main.certified.fct_transactions\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGrants(ctx, \"things\", \u0026databricks.GrantsArgs{\n\t\t\tTable: pulumi.String(fctTransactions.Name),\n\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTableArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport java.util.List;\nimport 
java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var fctTransactions = DatabricksFunctions.getTable(GetTableArgs.builder()\n            .name(\"main.certified.fct_transactions\")\n            .build());\n\n        var things = new Grants(\"things\", GrantsArgs.builder()\n            .table(fctTransactions.name())\n            .grants(GrantsGrantArgs.builder()\n                .principal(\"sensitive\")\n                .privileges(                \n                    \"SELECT\",\n                    \"MODIFY\")\n                .build())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  things:\n    type: databricks:Grants\n    properties:\n      table: ${fctTransactions.name}\n      grants:\n        - principal: sensitive\n          privileges:\n            - SELECT\n            - MODIFY\nvariables:\n  fctTransactions:\n    fn::invoke:\n      function: databricks:getTable\n      arguments:\n        name: main.certified.fct_transactions\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Grant \" pulumi-lang-dotnet=\" databricks.Grant \" pulumi-lang-go=\" Grant \" pulumi-lang-python=\" Grant \" pulumi-lang-yaml=\" databricks.Grant \" pulumi-lang-java=\" databricks.Grant \"\u003e databricks.Grant \u003c/span\u003eto manage grants within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003eto list all tables within a schema in Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getTable.\n","properties":{"id":{"type":"string"},"name":{"type":"string","description":"Full name of the databricks_table: _\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e_\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getTableProviderConfig:getTableProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"tableInfo":{"$ref":"#/types/databricks:index/getTableTableInfo:getTableTableInfo","description":"TableInfo object for a Unity Catalog table. 
This contains the following attributes:\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getTable.\n","properties":{"id":{"type":"string"},"name":{"description":"Name of table, relative to parent schema.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getTableProviderConfig:getTableProviderConfig"},"tableInfo":{"$ref":"#/types/databricks:index/getTableTableInfo:getTableTableInfo","description":"TableInfo object for a Unity Catalog table. This contains the following attributes:\n"}},"required":["id","name","tableInfo"],"type":"object"}},"databricks:index/getTables:getTables":{"description":"Retrieves a list of managed or external table full names in Unity Catalog, that were created by Pulumi or manually. Use\u003cspan pulumi-lang-nodejs=\" databricks.getViews \" pulumi-lang-dotnet=\" databricks.getViews \" pulumi-lang-go=\" getViews \" pulumi-lang-python=\" get_views \" pulumi-lang-yaml=\" databricks.getViews \" pulumi-lang-java=\" databricks.getViews \"\u003e databricks.getViews \u003c/span\u003efor retrieving a list of views.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGranting `SELECT` and `MODIFY` to \u003cspan pulumi-lang-nodejs=\"`sensitive`\" pulumi-lang-dotnet=\"`Sensitive`\" pulumi-lang-go=\"`sensitive`\" pulumi-lang-python=\"`sensitive`\" pulumi-lang-yaml=\"`sensitive`\" pulumi-lang-java=\"`sensitive`\"\u003e`sensitive`\u003c/span\u003e group on all tables a _things_\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003efrom _sandbox_ databricks_catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const things = await databricks.getTables({\n        catalogName: \"sandbox\",\n        schemaName: \"things\",\n    });\n    const thingsGrants: databricks.Grants[] = [];\n    for (const range of things.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        thingsGrants.push(new databricks.Grants(`things-${range.key}`, {\n            table: range.value,\n            grants: [{\n                principal: \"sensitive\",\n                privileges: [\n                    \"SELECT\",\n                    \"MODIFY\",\n                ],\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.get_tables(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nthings_grants = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(things.ids)]:\n    things_grants.append(databricks.Grants(f\"things-{range['key']}\",\n        table=range[\"value\"],\n        grants=[{\n            \"principal\": \"sensitive\",\n            \"privileges\": [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        }]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var things = await Databricks.GetTables.InvokeAsync(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var thingsGrants 
= new List\u003cDatabricks.Grants\u003e();\n    foreach (var range in )\n    {\n        thingsGrants.Add(new Databricks.Grants($\"things-{range.Key}\", new()\n        {\n            Table = range.Value,\n            GrantDetails = new[]\n            {\n                new Databricks.Inputs.GrantsGrantArgs\n                {\n                    Principal = \"sensitive\",\n                    Privileges = new[]\n                    {\n                        \"SELECT\",\n                        \"MODIFY\",\n                    },\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.GetTables(ctx, \u0026databricks.GetTablesArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t\tSchemaName:  \"things\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar thingsGrants []*databricks.Grants\n\t\tfor key0, val0 := range things.Ids {\n\t\t\t__res, err := databricks.NewGrants(ctx, fmt.Sprintf(\"things-%v\", key0), \u0026databricks.GrantsArgs{\n\t\t\t\tTable: pulumi.String(val0),\n\t\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tthingsGrants = append(thingsGrants, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTablesArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var things = DatabricksFunctions.getTables(GetTablesArgs.builder()\n            .catalogName(\"sandbox\")\n            .schemaName(\"things\")\n            .build());\n\n        final var thingsGrants = things.applyValue(getTablesResult -\u003e {\n            final var resources = new ArrayList\u003cGrants\u003e();\n            for (var range : KeyedValue.of(getTablesResult.ids())) {\n                var resource = new Grants(\"thingsGrants-\" + range.key(), GrantsArgs.builder()\n                    .table(range.value())\n                    .grants(GrantsGrantArgs.builder()\n                        .principal(\"sensitive\")\n                        .privileges(                        \n                            \"SELECT\",\n                            \"MODIFY\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  thingsGrants:\n    type: databricks:Grants\n    name: things\n    properties:\n      table: 
${range.value}\n      grants:\n        - principal: sensitive\n          privileges:\n            - SELECT\n            - MODIFY\n    options: {}\nvariables:\n  things:\n    fn::invoke:\n      function: databricks:getTables\n      arguments:\n        catalogName: sandbox\n        schemaName: things\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getTables.\n","properties":{"catalogName":{"type":"string","description":"Name of databricks_catalog\n","willReplaceOnChanges":true},"ids":{"type":"array","items":{"type":"string"},"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Table \" pulumi-lang-dotnet=\" databricks.Table \" pulumi-lang-go=\" Table \" pulumi-lang-python=\" Table \" pulumi-lang-yaml=\" databricks.Table \" pulumi-lang-java=\" databricks.Table \"\u003e databricks.Table \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e*\n"},"providerConfig":{"$ref":"#/types/databricks:index/getTablesProviderConfig:getTablesProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"schemaName":{"type":"string","description":"Name of databricks_schema\n","willReplaceOnChanges":true}},"type":"object","required":["catalogName","schemaName"]},"outputs":{"description":"A collection of values returned by getTables.\n","properties":{"catalogName":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"set of\u003cspan pulumi-lang-nodejs=\" databricks.Table \" pulumi-lang-dotnet=\" databricks.Table \" pulumi-lang-go=\" Table \" pulumi-lang-python=\" Table \" pulumi-lang-yaml=\" databricks.Table \" pulumi-lang-java=\" databricks.Table \"\u003e databricks.Table \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`table`\" pulumi-lang-dotnet=\"`Table`\" pulumi-lang-go=\"`table`\" pulumi-lang-python=\"`table`\" pulumi-lang-yaml=\"`table`\" pulumi-lang-java=\"`table`\"\u003e`table`\u003c/span\u003e*\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getTablesProviderConfig:getTablesProviderConfig"},"schemaName":{"type":"string"}},"required":["catalogName","ids","schemaName","id"],"type":"object"}},"databricks:index/getTagPolicies:getTagPolicies":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to list all tag policies in the account.\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.EntityTagAssignment \" pulumi-lang-dotnet=\" databricks.EntityTagAssignment \" pulumi-lang-go=\" EntityTagAssignment \" pulumi-lang-python=\" EntityTagAssignment \" pulumi-lang-yaml=\" databricks.EntityTagAssignment \" pulumi-lang-java=\" databricks.EntityTagAssignment \"\u003e databricks.EntityTagAssignment \u003c/span\u003efor assigning tags to supported Unity Catalog entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-dotnet=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-go=\" WorkspaceEntityTagAssignment \" pulumi-lang-python=\" WorkspaceEntityTagAssignment \" pulumi-lang-yaml=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-java=\" databricks.WorkspaceEntityTagAssignment \"\u003e databricks.WorkspaceEntityTagAssignment \u003c/span\u003efor assigning tags to supported workspace entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.PolicyInfo \" pulumi-lang-dotnet=\" databricks.PolicyInfo \" pulumi-lang-go=\" PolicyInfo \" pulumi-lang-python=\" PolicyInfo \" pulumi-lang-yaml=\" databricks.PolicyInfo \" pulumi-lang-java=\" databricks.PolicyInfo \"\u003e databricks.PolicyInfo \u003c/span\u003efor defining ABAC policies using governed tags.\n*\u003cspan pulumi-lang-nodejs=\" databricks.AccessControlRuleSet \" pulumi-lang-dotnet=\" databricks.AccessControlRuleSet \" pulumi-lang-go=\" AccessControlRuleSet \" pulumi-lang-python=\" AccessControlRuleSet \" 
pulumi-lang-yaml=\" databricks.AccessControlRuleSet \" pulumi-lang-java=\" databricks.AccessControlRuleSet \"\u003e databricks.AccessControlRuleSet \u003c/span\u003efor managing account-level and individual tag policy permissions.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n\n## Example Usage\n\nGetting a list of all tag policies:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getTagPolicies({});\nexport const allTagPolicies = all.then(all =\u003e all.tagPolicies);\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_tag_policies()\npulumi.export(\"allTagPolicies\", all.tag_policies)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetTagPolicies.Invoke();\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allTagPolicies\"] = all.Apply(getTagPoliciesResult =\u003e getTagPoliciesResult.TagPolicies),\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tall, err := databricks.GetTagPolicies(ctx, \u0026databricks.GetTagPoliciesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allTagPolicies\", all.TagPolicies)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTagPoliciesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var all = DatabricksFunctions.getTagPolicies(GetTagPoliciesArgs.builder()\n            .build());\n\n        ctx.export(\"allTagPolicies\", all.tagPolicies());\n    }\n}\n```\n```yaml\nvariables:\n  all:\n    fn::invoke:\n      function: databricks:getTagPolicies\n      arguments: {}\noutputs:\n  allTagPolicies: ${all.tagPolicies}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getTagPolicies.\n","properties":{"pageSize":{"type":"integer","description":"The maximum number of results to return in this request. Fewer results may be returned than requested. If\nunspecified or set to 0, this defaults to 1000. 
The maximum value is 1000; values above 1000 will be coerced down\nto 1000\n"},"providerConfig":{"$ref":"#/types/databricks:index/getTagPoliciesProviderConfig:getTagPoliciesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getTagPolicies.\n","properties":{"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getTagPoliciesProviderConfig:getTagPoliciesProviderConfig"},"tagPolicies":{"items":{"$ref":"#/types/databricks:index/getTagPoliciesTagPolicy:getTagPoliciesTagPolicy"},"type":"array"}},"required":["tagPolicies","id"],"type":"object"}},"databricks:index/getTagPolicy:getTagPolicy":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single tag policy by its tag key.\n\nThe following resources are often used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.EntityTagAssignment \" pulumi-lang-dotnet=\" databricks.EntityTagAssignment \" pulumi-lang-go=\" EntityTagAssignment \" pulumi-lang-python=\" EntityTagAssignment \" pulumi-lang-yaml=\" databricks.EntityTagAssignment \" pulumi-lang-java=\" databricks.EntityTagAssignment \"\u003e databricks.EntityTagAssignment \u003c/span\u003efor assigning tags to supported Unity Catalog entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-dotnet=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-go=\" WorkspaceEntityTagAssignment \" pulumi-lang-python=\" WorkspaceEntityTagAssignment \" pulumi-lang-yaml=\" databricks.WorkspaceEntityTagAssignment \" pulumi-lang-java=\" databricks.WorkspaceEntityTagAssignment \"\u003e databricks.WorkspaceEntityTagAssignment \u003c/span\u003efor assigning tags to supported workspace entities.\n*\u003cspan pulumi-lang-nodejs=\" databricks.PolicyInfo \" pulumi-lang-dotnet=\" databricks.PolicyInfo \" pulumi-lang-go=\" PolicyInfo \" pulumi-lang-python=\" PolicyInfo \" pulumi-lang-yaml=\" databricks.PolicyInfo \" pulumi-lang-java=\" databricks.PolicyInfo \"\u003e databricks.PolicyInfo \u003c/span\u003efor defining ABAC policies using governed tags.\n*\u003cspan pulumi-lang-nodejs=\" databricks.AccessControlRuleSet \" pulumi-lang-dotnet=\" databricks.AccessControlRuleSet \" pulumi-lang-go=\" AccessControlRuleSet \" pulumi-lang-python=\" AccessControlRuleSet \" pulumi-lang-yaml=\" databricks.AccessControlRuleSet \" pulumi-lang-java=\" databricks.AccessControlRuleSet \"\u003e databricks.AccessControlRuleSet \u003c/span\u003efor managing account-level and individual tag policy permissions.\n\n\u003e **Note** This resource can only be used with a workspace-level provider!\n\n\n## Example Usage\n\nReferring to a tag policy by its tag key:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst exampleTagPolicy = databricks.getTagPolicy({\n    tagKey: \"example_tag_key\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nexample_tag_policy = databricks.get_tag_policy(tag_key=\"example_tag_key\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = 
Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var exampleTagPolicy = Databricks.GetTagPolicy.Invoke(new()\n    {\n        TagKey = \"example_tag_key\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupTagPolicy(ctx, \u0026databricks.LookupTagPolicyArgs{\n\t\t\tTagKey: \"example_tag_key\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetTagPolicyArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var exampleTagPolicy = DatabricksFunctions.getTagPolicy(GetTagPolicyArgs.builder()\n            .tagKey(\"example_tag_key\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  exampleTagPolicy:\n    fn::invoke:\n      function: databricks:getTagPolicy\n      arguments:\n        tagKey: example_tag_key\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getTagPolicy.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getTagPolicyProviderConfig:getTagPolicyProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string"}},"type":"object","required":["tagKey"]},"outputs":{"description":"A collection of values returned by getTagPolicy.\n","properties":{"createTime":{"description":"(string) - Timestamp when the tag policy was created\n","type":"string"},"description":{"description":"(string)\n","type":"string"},"id":{"description":"(string)\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getTagPolicyProviderConfig:getTagPolicyProviderConfig"},"tagKey":{"description":"(string)\n","type":"string"},"updateTime":{"description":"(string) - Timestamp when the tag policy was last updated\n","type":"string"},"values":{"description":"(list of Value)\n","items":{"$ref":"#/types/databricks:index/getTagPolicyValue:getTagPolicyValue"},"type":"array"}},"required":["createTime","description","id","tagKey","updateTime","values"],"type":"object"}},"databricks:index/getUser:getUser":{"description":"Retrieves information about databricks_user.\n\n\u003e This data source can be used with an account or workspace-level provider.\n\n## Example Usage\n\nAdding user to administrative group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst admins = databricks.getGroup({\n    displayName: \"admins\",\n});\nconst me = databricks.getUser({\n    userName: \"me@example.com\",\n});\nconst myMemberA = new databricks.GroupMember(\"my_member_a\", {\n    groupId: admins.then(admins =\u003e admins.id),\n    memberId: me.then(me =\u003e me.id),\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nadmins = databricks.get_group(display_name=\"admins\")\nme = 
databricks.get_user(user_name=\"me@example.com\")\nmy_member_a = databricks.GroupMember(\"my_member_a\",\n    group_id=admins.id,\n    member_id=me.id)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var admins = Databricks.GetGroup.Invoke(new()\n    {\n        DisplayName = \"admins\",\n    });\n\n    var me = Databricks.GetUser.Invoke(new()\n    {\n        UserName = \"me@example.com\",\n    });\n\n    var myMemberA = new Databricks.GroupMember(\"my_member_a\", new()\n    {\n        GroupId = admins.Apply(getGroupResult =\u003e getGroupResult.Id),\n        MemberId = me.Apply(getUserResult =\u003e getUserResult.Id),\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tadmins, err := databricks.LookupGroup(ctx, \u0026databricks.LookupGroupArgs{\n\t\t\tDisplayName: \"admins\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tme, err := databricks.LookupUser(ctx, \u0026databricks.LookupUserArgs{\n\t\t\tUserName: pulumi.StringRef(\"me@example.com\"),\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.NewGroupMember(ctx, \"my_member_a\", \u0026databricks.GroupMemberArgs{\n\t\t\tGroupId:  pulumi.String(admins.Id),\n\t\t\tMemberId: pulumi.String(me.Id),\n\t\t})\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetGroupArgs;\nimport com.pulumi.databricks.inputs.GetUserArgs;\nimport com.pulumi.databricks.GroupMember;\nimport com.pulumi.databricks.GroupMemberArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var admins = DatabricksFunctions.getGroup(GetGroupArgs.builder()\n            .displayName(\"admins\")\n            .build());\n\n        final var me = DatabricksFunctions.getUser(GetUserArgs.builder()\n            .userName(\"me@example.com\")\n            .build());\n\n        var myMemberA = new GroupMember(\"myMemberA\", GroupMemberArgs.builder()\n            .groupId(admins.id())\n            .memberId(me.id())\n            .build());\n\n    }\n}\n```\n```yaml\nresources:\n  myMemberA:\n    type: databricks:GroupMember\n    name: my_member_a\n    properties:\n      groupId: ${admins.id}\n      memberId: ${me.id}\nvariables:\n  admins:\n    fn::invoke:\n      function: databricks:getGroup\n      arguments:\n        displayName: admins\n  me:\n    fn::invoke:\n      function: databricks:getUser\n      arguments:\n        userName: me@example.com\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n- End to end workspace management guide.\n-\u003cspan pulumi-lang-nodejs=\" databricks.getCurrentUser \" pulumi-lang-dotnet=\" databricks.getCurrentUser \" pulumi-lang-go=\" getCurrentUser \" pulumi-lang-python=\" get_current_user \" 
pulumi-lang-yaml=\" databricks.getCurrentUser \" pulumi-lang-java=\" databricks.getCurrentUser \"\u003e databricks.getCurrentUser \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eor databricks_service_principal, that is calling Databricks REST API.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003eto manage [Account-level](https://docs.databricks.com/aws/en/admin/users-groups/groups) or [Workspace-level](https://docs.databricks.com/aws/en/admin/users-groups/workspace-local-groups) groups.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003edata to retrieve information about\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003emembers, entitlements and instance profiles.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupInstanceProfile \" pulumi-lang-dotnet=\" databricks.GroupInstanceProfile \" pulumi-lang-go=\" GroupInstanceProfile \" pulumi-lang-python=\" GroupInstanceProfile \" pulumi-lang-yaml=\" databricks.GroupInstanceProfile \" pulumi-lang-java=\" databricks.GroupInstanceProfile \"\u003e databricks.GroupInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_group.\n-\u003cspan pulumi-lang-nodejs=\" databricks.GroupMember \" pulumi-lang-dotnet=\" databricks.GroupMember \" pulumi-lang-go=\" GroupMember \" pulumi-lang-python=\" GroupMember \" pulumi-lang-yaml=\" databricks.GroupMember \" pulumi-lang-java=\" databricks.GroupMember \"\u003e databricks.GroupMember \u003c/span\u003eto attach users and groups as group members.\n-\u003cspan pulumi-lang-nodejs=\" databricks.Permissions \" pulumi-lang-dotnet=\" databricks.Permissions \" pulumi-lang-go=\" Permissions \" pulumi-lang-python=\" Permissions \" pulumi-lang-yaml=\" databricks.Permissions \" pulumi-lang-java=\" databricks.Permissions \"\u003e databricks.Permissions \u003c/span\u003eto manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eto [manage users](https://docs.databricks.com/administration-guide/users-groups/users.html), that could be added 
to\u003cspan pulumi-lang-nodejs=\" databricks.Group \" pulumi-lang-dotnet=\" databricks.Group \" pulumi-lang-go=\" Group \" pulumi-lang-python=\" Group \" pulumi-lang-yaml=\" databricks.Group \" pulumi-lang-java=\" databricks.Group \"\u003e databricks.Group \u003c/span\u003ewithin the workspace.\n-\u003cspan pulumi-lang-nodejs=\" databricks.UserInstanceProfile \" pulumi-lang-dotnet=\" databricks.UserInstanceProfile \" pulumi-lang-go=\" UserInstanceProfile \" pulumi-lang-python=\" UserInstanceProfile \" pulumi-lang-yaml=\" databricks.UserInstanceProfile \" pulumi-lang-java=\" databricks.UserInstanceProfile \"\u003e databricks.UserInstanceProfile \u003c/span\u003eto attach\u003cspan pulumi-lang-nodejs=\" databricks.InstanceProfile \" pulumi-lang-dotnet=\" databricks.InstanceProfile \" pulumi-lang-go=\" InstanceProfile \" pulumi-lang-python=\" InstanceProfile \" pulumi-lang-yaml=\" databricks.InstanceProfile \" pulumi-lang-java=\" databricks.InstanceProfile \"\u003e databricks.InstanceProfile \u003c/span\u003e(AWS) to databricks_user.\n","inputs":{"description":"A collection of arguments for invoking getUser.\n","properties":{"providerConfig":{"$ref":"#/types/databricks:index/getUserProviderConfig:getUserProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"userId":{"type":"string","description":"ID of the user.\n","willReplaceOnChanges":true},"userName":{"type":"string","description":"User name of the user. The user must exist before this resource can be planned.\n","willReplaceOnChanges":true}},"type":"object"},"outputs":{"description":"A collection of values returned by getUser.\n","properties":{"aclPrincipalId":{"description":"identifier for use in databricks_access_control_rule_set, e.g. `users/mr.foo@example.com`.\n","type":"string"},"active":{"description":"Whether the user is active.\n","type":"boolean"},"alphanumeric":{"description":"Alphanumeric representation of user local name. e.g. \u003cspan pulumi-lang-nodejs=\"`mrFoo`\" pulumi-lang-dotnet=\"`MrFoo`\" pulumi-lang-go=\"`mrFoo`\" pulumi-lang-python=\"`mr_foo`\" pulumi-lang-yaml=\"`mrFoo`\" pulumi-lang-java=\"`mrFoo`\"\u003e`mr_foo`\u003c/span\u003e.\n","type":"string"},"applicationId":{"type":"string"},"displayName":{"description":"Display name of the user, e.g. `Mr Foo`.\n","type":"string"},"externalId":{"description":"ID of the user in an external identity provider.\n","type":"string"},"home":{"description":"Home folder of the user, e.g. `/Users/mr.foo@example.com`.\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getUserProviderConfig:getUserProviderConfig"},"repos":{"description":"Personal Repos location of the user, e.g. `/Repos/mr.foo@example.com`.\n","type":"string"},"userId":{"type":"string"},"userName":{"description":"Name of the user, e.g. 
`mr.foo@example.com`.\n","type":"string"}},"required":["aclPrincipalId","active","alphanumeric","applicationId","displayName","externalId","home","repos","id"],"type":"object"}},"databricks:index/getUsers:getUsers":{"description":"Retrieves information about multiple\u003cspan pulumi-lang-nodejs=\" databricks.User \" pulumi-lang-dotnet=\" databricks.User \" pulumi-lang-go=\" User \" pulumi-lang-python=\" User \" pulumi-lang-yaml=\" databricks.User \" pulumi-lang-java=\" databricks.User \"\u003e databricks.User \u003c/span\u003eresources.\n\n\u003e This data source works with both the account-level and workspace-level provider.\n\n## Example Usage\n\nAdding a subset of users to a group\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const companyUsers = await databricks.getUsers({\n        filter: \"userName co \\\"@domain.org\\\"\",\n    });\n    const dataUsersGroup = new databricks.Group(\"data_users_group\", {displayName: \"Data Users\"});\n    const addUsersToGroup: databricks.GroupMember[] = [];\n    for (const range of Object.entries(companyUsers.users.reduce((__obj, user) =\u003e ({ ...__obj, [user.id]: user }), {})).map(([k, v]) =\u003e ({key: k, value: v}))) {\n        addUsersToGroup.push(new databricks.GroupMember(`add_users_to_group-${range.key}`, {\n            groupId: dataUsersGroup.id,\n            memberId: range.value.id,\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\ncompany_users = databricks.get_users(filter=\"userName co \\\"@domain.org\\\"\")\ndata_users_group = databricks.Group(\"data_users_group\", display_name=\"Data Users\")\nadd_users_to_group = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in {user.id: user for user in company_users.users}.items()]:\n    add_users_to_group.append(databricks.GroupMember(f\"add_users_to_group-{range['key']}\",\n        group_id=data_users_group.id,\n        member_id=range[\"value\"].id))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var companyUsers = await Databricks.GetUsers.InvokeAsync(new()\n    {\n        Filter = \"userName co \\\"@domain.org\\\"\",\n    });\n\n    var dataUsersGroup = new Databricks.Group(\"data_users_group\", new()\n    {\n        DisplayName = \"Data Users\",\n    });\n\n    var addUsersToGroup = new List\u003cDatabricks.GroupMember\u003e();\n    foreach (var range in companyUsers.Users.ToDictionary(user =\u003e user.Id, user =\u003e user).Select(pair =\u003e new { pair.Key, pair.Value }))\n    {\n        addUsersToGroup.Add(new Databricks.GroupMember($\"add_users_to_group-{range.Key}\", new()\n        {\n            GroupId = dataUsersGroup.Id,\n            MemberId = range.Value.Id,\n        }));\n    }\n});\n```\n```yaml\nresources:\n  dataUsersGroup:\n    type: databricks:Group\n    name: data_users_group\n    properties:\n      displayName: Data Users\n  addUsersToGroup:\n    type: databricks:GroupMember\n    name: add_users_to_group\n    properties:\n      groupId: ${dataUsersGroup.id}\n      memberId: ${range.value.id}\n    options: {}\nvariables:\n  companyUsers:\n    fn::invoke:\n      function: databricks:getUsers\n      arguments:\n        filter: userName co \"@domain.org\"\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n- 
**databricks_user**: Resource to manage individual users in Databricks.\n- **databricks_group**: Resource to manage groups in Databricks.\n- **databricks_group_member**: Resource to manage group memberships by adding users to groups.\n- **databricks_permissions**: Resource to manage access control in the Databricks workspace.\n- **databricks_current_user**: Data source to retrieve information about the user or service principal that is calling the Databricks REST API.\n","inputs":{"description":"A collection of arguments for invoking getUsers.\n","properties":{"extraAttributes":{"type":"string","description":"A comma-separated list of additional user attributes to include in the results. By default, the data source returns the following attributes: \u003cspan pulumi-lang-nodejs=\"`id`\" pulumi-lang-dotnet=\"`Id`\" pulumi-lang-go=\"`id`\" pulumi-lang-python=\"`id`\" pulumi-lang-yaml=\"`id`\" pulumi-lang-java=\"`id`\"\u003e`id`\u003c/span\u003e, `userName`, `displayName`, and `externalId`. Use this argument to request additional attributes as needed. The list of all available attributes can be found in the [API reference](https://docs.databricks.com/api/workspace/users/list).\n"},"filter":{"type":"string","description":"Query by which the results have to be filtered. If not specified, all users will be returned. Supported operators are equals (\u003cspan pulumi-lang-nodejs=\"`eq`\" pulumi-lang-dotnet=\"`Eq`\" pulumi-lang-go=\"`eq`\" pulumi-lang-python=\"`eq`\" pulumi-lang-yaml=\"`eq`\" pulumi-lang-java=\"`eq`\"\u003e`eq`\u003c/span\u003e), contains (\u003cspan pulumi-lang-nodejs=\"`co`\" pulumi-lang-dotnet=\"`Co`\" pulumi-lang-go=\"`co`\" pulumi-lang-python=\"`co`\" pulumi-lang-yaml=\"`co`\" pulumi-lang-java=\"`co`\"\u003e`co`\u003c/span\u003e), starts with (\u003cspan pulumi-lang-nodejs=\"`sw`\" pulumi-lang-dotnet=\"`Sw`\" pulumi-lang-go=\"`sw`\" pulumi-lang-python=\"`sw`\" pulumi-lang-yaml=\"`sw`\" pulumi-lang-java=\"`sw`\"\u003e`sw`\u003c/span\u003e), and not equals (\u003cspan pulumi-lang-nodejs=\"`ne`\" pulumi-lang-dotnet=\"`Ne`\" pulumi-lang-go=\"`ne`\" pulumi-lang-python=\"`ne`\" pulumi-lang-yaml=\"`ne`\" pulumi-lang-java=\"`ne`\"\u003e`ne`\u003c/span\u003e). Additionally, simple expressions can be formed using logical operators \u003cspan pulumi-lang-nodejs=\"`and`\" pulumi-lang-dotnet=\"`And`\" pulumi-lang-go=\"`and`\" pulumi-lang-python=\"`and`\" pulumi-lang-yaml=\"`and`\" pulumi-lang-java=\"`and`\"\u003e`and`\u003c/span\u003e and \u003cspan pulumi-lang-nodejs=\"`or`\" pulumi-lang-dotnet=\"`Or`\" pulumi-lang-go=\"`or`\" pulumi-lang-python=\"`or`\" pulumi-lang-yaml=\"`or`\" pulumi-lang-java=\"`or`\"\u003e`or`\u003c/span\u003e.\n\n**Examples:**\n- User whose `displayName` equals \"john\":\n"},"users":{"type":"array","items":{"$ref":"#/types/databricks:index/getUsersUser:getUsersUser"},"description":"A list of users matching the specified criteria. Each user has the following attributes:\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getUsers.\n","properties":{"extraAttributes":{"type":"string"},"filter":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"users":{"description":"A list of users matching the specified criteria. 
Each user has the following attributes:\n","items":{"$ref":"#/types/databricks:index/getUsersUser:getUsersUser"},"type":"array"}},"required":["users","id"],"type":"object"}},"databricks:index/getViews:getViews":{"description":"Retrieves a list of view full names in Unity Catalog, that were created by Pulumi or manually. Use\u003cspan pulumi-lang-nodejs=\" databricks.getTables \" pulumi-lang-dotnet=\" databricks.getTables \" pulumi-lang-go=\" getTables \" pulumi-lang-python=\" get_tables \" pulumi-lang-yaml=\" databricks.getTables \" pulumi-lang-java=\" databricks.getTables \"\u003e databricks.getTables \u003c/span\u003efor retrieving a list of tables.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\nGranting `SELECT` and `MODIFY` to \u003cspan pulumi-lang-nodejs=\"`sensitive`\" pulumi-lang-dotnet=\"`Sensitive`\" pulumi-lang-go=\"`sensitive`\" pulumi-lang-python=\"`sensitive`\" pulumi-lang-yaml=\"`sensitive`\" pulumi-lang-java=\"`sensitive`\"\u003e`sensitive`\u003c/span\u003e group on all views in a _things_\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003efrom _sandbox_ databricks_catalog.\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nexport = async () =\u003e {\n    const things = await databricks.getViews({\n        catalogName: \"sandbox\",\n        schemaName: \"things\",\n    });\n    const thingsGrants: databricks.Grants[] = [];\n    for (const range of things.ids.map((v, k) =\u003e ({key: k, value: v}))) {\n        thingsGrants.push(new databricks.Grants(`things-${range.key}`, {\n            table: range.value,\n            grants: [{\n                principal: \"sensitive\",\n                privileges: [\n                    \"SELECT\",\n                    \"MODIFY\",\n                ],\n            }],\n        }));\n    }\n}\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthings = databricks.get_views(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nthings_grants = []\nfor range in [{\"key\": k, \"value\": v} for [k, v] in enumerate(things.ids)]:\n    things_grants.append(databricks.Grants(f\"things-{range['key']}\",\n        table=range[\"value\"],\n        grants=[{\n            \"principal\": \"sensitive\",\n            \"privileges\": [\n                \"SELECT\",\n                \"MODIFY\",\n            ],\n        }]))\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(async() =\u003e \n{\n    var things = await Databricks.GetViews.InvokeAsync(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var thingsGrants = new List\u003cDatabricks.Grants\u003e();\n    foreach (var range in things.Ids.Select((value, key) =\u003e new { Key = key, Value = value }))\n    {\n        thingsGrants.Add(new Databricks.Grants($\"things-{range.Key}\", new()\n        {\n            Table = range.Value,\n            GrantDetails = new[]\n            {\n                new Databricks.Inputs.GrantsGrantArgs\n                {\n                    Principal = \"sensitive\",\n                    Privileges = new[]\n                    {\n                       
 \"SELECT\",\n                        \"MODIFY\",\n                    },\n                },\n            },\n        }));\n    }\n});\n```\n```go\npackage main\n\nimport (\n\t\"fmt\"\n\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthings, err := databricks.GetViews(ctx, \u0026databricks.GetViewsArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t\tSchemaName:  \"things\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tvar thingsGrants []*databricks.Grants\n\t\tfor key0, val0 := range things.Ids {\n\t\t\t__res, err := databricks.NewGrants(ctx, fmt.Sprintf(\"things-%v\", key0), \u0026databricks.GrantsArgs{\n\t\t\t\tTable: pulumi.String(val0),\n\t\t\t\tGrants: databricks.GrantsGrantArray{\n\t\t\t\t\t\u0026databricks.GrantsGrantArgs{\n\t\t\t\t\t\tPrincipal: pulumi.String(\"sensitive\"),\n\t\t\t\t\t\tPrivileges: pulumi.StringArray{\n\t\t\t\t\t\t\tpulumi.String(\"SELECT\"),\n\t\t\t\t\t\t\tpulumi.String(\"MODIFY\"),\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t})\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tthingsGrants = append(thingsGrants, __res)\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetViewsArgs;\nimport com.pulumi.databricks.Grants;\nimport com.pulumi.databricks.GrantsArgs;\nimport com.pulumi.databricks.inputs.GrantsGrantArgs;\nimport com.pulumi.codegen.internal.KeyedValue;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var things = DatabricksFunctions.getViews(GetViewsArgs.builder()\n            .catalogName(\"sandbox\")\n            .schemaName(\"things\")\n            .build());\n\n        final var thingsGrants = things.applyValue(getViewsResult -\u003e {\n            final var resources = new ArrayList\u003cGrants\u003e();\n            for (var range : KeyedValue.of(getViewsResult.ids())) {\n                var resource = new Grants(\"thingsGrants-\" + range.key(), GrantsArgs.builder()\n                    .table(range.value())\n                    .grants(GrantsGrantArgs.builder()\n                        .principal(\"sensitive\")\n                        .privileges(                        \n                            \"SELECT\",\n                            \"MODIFY\")\n                        .build())\n                    .build());\n\n                resources.add(resource);\n            }\n\n            return resources;\n        });\n\n    }\n}\n```\n```yaml\nresources:\n  thingsGrants:\n    type: databricks:Grants\n    name: things\n    properties:\n      table: ${range.value}\n      grants:\n        - principal: sensitive\n          privileges:\n            - SELECT\n            - MODIFY\n    options: {}\nvariables:\n  things:\n    fn::invoke:\n      function: databricks:getViews\n      arguments:\n        catalogName: sandbox\n        schemaName: things\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" 
databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getViews.\n","properties":{"catalogName":{"type":"string","description":"Name of databricks_catalog\n","willReplaceOnChanges":true},"ids":{"type":"array","items":{"type":"string"},"description":"set of\u003cspan pulumi-lang-nodejs=\" databricksView \" pulumi-lang-dotnet=\" DatabricksView \" pulumi-lang-go=\" databricksView \" pulumi-lang-python=\" databricks_view \" pulumi-lang-yaml=\" databricksView \" pulumi-lang-java=\" databricksView \"\u003e databricks_view \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`view`\" pulumi-lang-dotnet=\"`View`\" pulumi-lang-go=\"`view`\" pulumi-lang-python=\"`view`\" pulumi-lang-yaml=\"`view`\" pulumi-lang-java=\"`view`\"\u003e`view`\u003c/span\u003e*\n"},"providerConfig":{"$ref":"#/types/databricks:index/getViewsProviderConfig:getViewsProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"schemaName":{"type":"string","description":"Name of databricks_schema\n","willReplaceOnChanges":true}},"type":"object","required":["catalogName","schemaName"]},"outputs":{"description":"A collection of values returned by getViews.\n","properties":{"catalogName":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"set of\u003cspan pulumi-lang-nodejs=\" databricksView \" pulumi-lang-dotnet=\" DatabricksView \" pulumi-lang-go=\" databricksView \" pulumi-lang-python=\" databricks_view \" pulumi-lang-yaml=\" databricksView \" pulumi-lang-java=\" databricksView \"\u003e databricks_view \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`view`\" pulumi-lang-dotnet=\"`View`\" pulumi-lang-go=\"`view`\" pulumi-lang-python=\"`view`\" pulumi-lang-yaml=\"`view`\" pulumi-lang-java=\"`view`\"\u003e`view`\u003c/span\u003e*\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getViewsProviderConfig:getViewsProviderConfig"},"schemaName":{"type":"string"}},"required":["catalogName","ids","schemaName","id"],"type":"object"}},"databricks:index/getVolume:getVolume":{"description":"Retrieves details about\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003ethat was created by Pulumi or manually.\nA volume can be identified by its three-level (fully qualified) name (in the form of: \u003cspan pulumi-lang-nodejs=\"`catalogName`\" pulumi-lang-dotnet=\"`CatalogName`\" pulumi-lang-go=\"`catalogName`\" pulumi-lang-python=\"`catalog_name`\" pulumi-lang-yaml=\"`catalogName`\" pulumi-lang-java=\"`catalogName`\"\u003e`catalog_name`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schemaName`\" pulumi-lang-dotnet=\"`SchemaName`\" pulumi-lang-go=\"`schemaName`\" pulumi-lang-python=\"`schema_name`\" pulumi-lang-yaml=\"`schemaName`\" pulumi-lang-java=\"`schemaName`\"\u003e`schema_name`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`volumeName`\" pulumi-lang-dotnet=\"`VolumeName`\" pulumi-lang-go=\"`volumeName`\" pulumi-lang-python=\"`volume_name`\" pulumi-lang-yaml=\"`volumeName`\" pulumi-lang-java=\"`volumeName`\"\u003e`volume_name`\u003c/span\u003e) as input. 
This can be retrieved programmatically using\u003cspan pulumi-lang-nodejs=\" databricks.getVolumes \" pulumi-lang-dotnet=\" databricks.getVolumes \" pulumi-lang-go=\" getVolumes \" pulumi-lang-python=\" get_volumes \" pulumi-lang-yaml=\" databricks.getVolumes \" pulumi-lang-java=\" databricks.getVolumes \"\u003e databricks.getVolumes \u003c/span\u003edata source.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n* Retrieve details of all volumes in a _things_\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eof a _sandbox_ databricks_catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst all = databricks.getVolumes({\n    catalogName: \"sandbox\",\n    schemaName: \"things\",\n});\nconst _this = all.then(all =\u003e all.ids.reduce((__obj, __value) =\u003e ({ ...__obj, [__value]: databricks.getVolume({\n    name: __value,\n}) }), {}));\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nall = databricks.get_volumes(catalog_name=\"sandbox\",\n    schema_name=\"things\")\nthis = {__value: databricks.get_volume(name=__value) for __value in all.ids}\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var all = Databricks.GetVolumes.Invoke(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    var @this = all.Apply(all =\u003e all.Ids.ToDictionary(id =\u003e id, id =\u003e Databricks.GetVolume.Invoke(new() { Name = id })));\n\n});\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n* Search for a specific volume by its fully qualified name\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getVolume({\n    name: \"catalog.schema.volume\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_volume(name=\"catalog.schema.volume\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetVolume.Invoke(new()\n    {\n        Name = \"catalog.schema.volume\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupVolume(ctx, \u0026databricks.LookupVolumeArgs{\n\t\t\tName: \"catalog.schema.volume\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetVolumeArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void 
stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getVolume(GetVolumeArgs.builder()\n            .name(\"catalog.schema.volume\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getVolume\n      arguments:\n        name: catalog.schema.volume\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003eto manage volumes within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getVolume.\n","properties":{"id":{"type":"string","description":"ID of this Unity Catalog Volume in form of `\u003ccatalog\u003e.\u003cschema\u003e.\u003cname\u003e`.\n"},"name":{"type":"string","description":"a fully qualified name of databricks_volume: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`volume`\" pulumi-lang-dotnet=\"`Volume`\" pulumi-lang-go=\"`volume`\" pulumi-lang-python=\"`volume`\" pulumi-lang-yaml=\"`volume`\" pulumi-lang-java=\"`volume`\"\u003e`volume`\u003c/span\u003e*\n","willReplaceOnChanges":true},"providerConfig":{"$ref":"#/types/databricks:index/getVolumeProviderConfig:getVolumeProviderConfig","description":"Configure the provider for management through account provider. This block consists of the following fields:\n","willReplaceOnChanges":true},"volumeInfo":{"$ref":"#/types/databricks:index/getVolumeVolumeInfo:getVolumeVolumeInfo","description":"`VolumeInfo` object for a Unity Catalog volume. This contains the following attributes:\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getVolume.\n","properties":{"id":{"description":"ID of this Unity Catalog Volume in form of `\u003ccatalog\u003e.\u003cschema\u003e.\u003cname\u003e`.\n","type":"string"},"name":{"description":"the name of the volume\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getVolumeProviderConfig:getVolumeProviderConfig"},"volumeInfo":{"$ref":"#/types/databricks:index/getVolumeVolumeInfo:getVolumeVolumeInfo","description":"`VolumeInfo` object for a Unity Catalog volume. 
This contains the following attributes:\n"}},"required":["id","name","volumeInfo"],"type":"object"}},"databricks:index/getVolumes:getVolumes":{"description":"Retrieves a list of\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003eids (full names), that were created by Pulumi or manually.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Plugin Framework Migration\n\nThe volumes data source has been migrated from sdkv2 to plugin framework in version 1.57. If you encounter any problem with this data source and suspect it is due to the migration, you can fall back to sdkv2 by setting the environment variable in the following way `export USE_SDK_V2_DATA_SOURCES=\u003cspan pulumi-lang-nodejs=\"\"databricks.getVolumes\"\" pulumi-lang-dotnet=\"\"databricks.getVolumes\"\" pulumi-lang-go=\"\"getVolumes\"\" pulumi-lang-python=\"\"get_volumes\"\" pulumi-lang-yaml=\"\"databricks.getVolumes\"\" pulumi-lang-java=\"\"databricks.getVolumes\"\"\u003e\"databricks.getVolumes\"\u003c/span\u003e`.\n\n## Example Usage\n\nListing all volumes in a _things_\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eof a _sandbox_ databricks_catalog:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getVolumes({\n    catalogName: \"sandbox\",\n    schemaName: \"things\",\n});\nexport const allVolumes = _this;\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_volumes(catalog_name=\"sandbox\",\n    schema_name=\"things\")\npulumi.export(\"allVolumes\", this)\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetVolumes.Invoke(new()\n    {\n        CatalogName = \"sandbox\",\n        SchemaName = \"things\",\n    });\n\n    return new Dictionary\u003cstring, object?\u003e\n    {\n        [\"allVolumes\"] = @this,\n    };\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\tthis, err := databricks.GetVolumes(ctx, \u0026databricks.GetVolumesArgs{\n\t\t\tCatalogName: \"sandbox\",\n\t\t\tSchemaName:  \"things\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tctx.Export(\"allVolumes\", this)\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetVolumesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void 
stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getVolumes(GetVolumesArgs.builder()\n            .catalogName(\"sandbox\")\n            .schemaName(\"things\")\n            .build());\n\n        ctx.export(\"allVolumes\", this_);\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getVolumes\n      arguments:\n        catalogName: sandbox\n        schemaName: things\noutputs:\n  allVolumes: ${this}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n\n## Related Resources\n\nThe following resources are used in the same context:\n\n*\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003eto manage volumes within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Schema \" pulumi-lang-dotnet=\" databricks.Schema \" pulumi-lang-go=\" Schema \" pulumi-lang-python=\" Schema \" pulumi-lang-yaml=\" databricks.Schema \" pulumi-lang-java=\" databricks.Schema \"\u003e databricks.Schema \u003c/span\u003eto manage schemas within Unity Catalog.\n*\u003cspan pulumi-lang-nodejs=\" databricks.Catalog \" pulumi-lang-dotnet=\" databricks.Catalog \" pulumi-lang-go=\" Catalog \" pulumi-lang-python=\" Catalog \" pulumi-lang-yaml=\" databricks.Catalog \" pulumi-lang-java=\" databricks.Catalog \"\u003e databricks.Catalog \u003c/span\u003eto manage catalogs within Unity Catalog.\n","inputs":{"description":"A collection of arguments for invoking getVolumes.\n","properties":{"catalogName":{"type":"string","description":"Name of databricks_catalog\n"},"ids":{"type":"array","items":{"type":"string"},"description":"a list of\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`volume`\" pulumi-lang-dotnet=\"`Volume`\" pulumi-lang-go=\"`volume`\" pulumi-lang-python=\"`volume`\" pulumi-lang-yaml=\"`volume`\" pulumi-lang-java=\"`volume`\"\u003e`volume`\u003c/span\u003e*\n"},"providerConfig":{"$ref":"#/types/databricks:index/getVolumesProviderConfig:getVolumesProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n"},"schemaName":{"type":"string","description":"Name of databricks_schema\n"}},"type":"object","required":["catalogName","schemaName"]},"outputs":{"description":"A collection of values returned by getVolumes.\n","properties":{"catalogName":{"type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"ids":{"description":"a list of\u003cspan pulumi-lang-nodejs=\" databricks.Volume \" pulumi-lang-dotnet=\" databricks.Volume \" pulumi-lang-go=\" Volume \" pulumi-lang-python=\" Volume \" pulumi-lang-yaml=\" databricks.Volume \" pulumi-lang-java=\" databricks.Volume \"\u003e databricks.Volume \u003c/span\u003efull names: *\u003cspan pulumi-lang-nodejs=\"`catalog`\" pulumi-lang-dotnet=\"`Catalog`\" pulumi-lang-go=\"`catalog`\" pulumi-lang-python=\"`catalog`\" pulumi-lang-yaml=\"`catalog`\" pulumi-lang-java=\"`catalog`\"\u003e`catalog`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`schema`\" pulumi-lang-dotnet=\"`Schema`\" pulumi-lang-go=\"`schema`\" pulumi-lang-python=\"`schema`\" pulumi-lang-yaml=\"`schema`\" pulumi-lang-java=\"`schema`\"\u003e`schema`\u003c/span\u003e.\u003cspan pulumi-lang-nodejs=\"`volume`\" pulumi-lang-dotnet=\"`Volume`\" pulumi-lang-go=\"`volume`\" pulumi-lang-python=\"`volume`\" pulumi-lang-yaml=\"`volume`\" pulumi-lang-java=\"`volume`\"\u003e`volume`\u003c/span\u003e*\n","items":{"type":"string"},"type":"array"},"providerConfig":{"$ref":"#/types/databricks:index/getVolumesProviderConfig:getVolumesProviderConfig"},"schemaName":{"type":"string"}},"required":["catalogName","ids","schemaName","id"],"type":"object"}},"databricks:index/getWarehousesDefaultWarehouseOverride:getWarehousesDefaultWarehouseOverride":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThe Default Warehouse Override data source allows you to retrieve information about a user's default warehouse selection configuration in Databricks SQL.\n\nYou can use this data source to:\n- Retrieve the current default warehouse override configuration for a user\n- Check whether a user has a custom warehouse configured or uses last-selected behavior\n- Get the warehouse ID if a custom warehouse is configured\n\n\u003e **Note** The resource name format is `default-warehouse-overrides/{default_warehouse_override_id}`, where \u003cspan pulumi-lang-nodejs=\"`defaultWarehouseOverrideId`\" pulumi-lang-dotnet=\"`DefaultWarehouseOverrideId`\" pulumi-lang-go=\"`defaultWarehouseOverrideId`\" pulumi-lang-python=\"`default_warehouse_override_id`\" pulumi-lang-yaml=\"`defaultWarehouseOverrideId`\" pulumi-lang-java=\"`defaultWarehouseOverrideId`\"\u003e`default_warehouse_override_id`\u003c/span\u003e represents a user ID.\n\n\n## Example Usage\n\n","inputs":{"description":"A collection of arguments for invoking getWarehousesDefaultWarehouseOverride.\n","properties":{"name":{"type":"string","description":"The resource name of the default warehouse override.\nFormat: default-warehouse-overrides/{default_warehouse_override_id}\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverrideProviderConfig:getWarehousesDefaultWarehouseOverrideProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by 
getWarehousesDefaultWarehouseOverride.\n","properties":{"defaultWarehouseOverrideId":{"description":"(string) - The ID component of the resource name (user ID)\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"name":{"description":"(string) - The resource name of the default warehouse override.\nFormat: default-warehouse-overrides/{default_warehouse_override_id}\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverrideProviderConfig:getWarehousesDefaultWarehouseOverrideProviderConfig"},"type":{"description":"(string) - The type of override behavior. Possible values are: `CUSTOM`, `LAST_SELECTED`\n","type":"string"},"warehouseId":{"description":"(string) - The specific warehouse ID when type is CUSTOM.\nNot set for LAST_SELECTED type\n","type":"string"}},"required":["defaultWarehouseOverrideId","name","type","warehouseId","id"],"type":"object"}},"databricks:index/getWarehousesDefaultWarehouseOverrides:getWarehousesDefaultWarehouseOverrides":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to list all default warehouse overrides in the workspace.\n\n\u003e **Note** This data source requires workspace admin permissions.\n\n\n\n\n\n\n\n\n## Example Usage\n\n","inputs":{"description":"A collection of arguments for invoking getWarehousesDefaultWarehouseOverrides.\n","properties":{"pageSize":{"type":"integer","description":"The maximum number of overrides to return. The service may return fewer than\nthis value.\nIf unspecified, at most 100 overrides will be returned.\nThe maximum value is 1000; values above 1000 will be coerced to 1000\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverridesProviderConfig:getWarehousesDefaultWarehouseOverridesProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getWarehousesDefaultWarehouseOverrides.\n","properties":{"defaultWarehouseOverrides":{"items":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverride:getWarehousesDefaultWarehouseOverridesDefaultWarehouseOverride"},"type":"array"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getWarehousesDefaultWarehouseOverridesProviderConfig:getWarehousesDefaultWarehouseOverridesProviderConfig"}},"required":["defaultWarehouseOverrides","id"],"type":"object"}},"databricks:index/getWorkspaceEntityTagAssignment:getWorkspaceEntityTagAssignment":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source allows you to get information about a tag assignment for a specific workspace scoped entity using the entity type, entity id, and tag key.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst appTag = databricks.getWorkspaceEntityTagAssignment({\n    entityType: \"apps\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n});\nconst dashboardTag = 
databricks.getWorkspaceEntityTagAssignment({\n    entityType: \"dashboards\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n});\nconst geniespaceTag = databricks.getWorkspaceEntityTagAssignment({\n    entityType: \"geniespaces\",\n    entityId: \"2807324866692453\",\n    tagKey: \"sensitivity_level\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\napp_tag = databricks.get_workspace_entity_tag_assignment(entity_type=\"apps\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\")\ndashboard_tag = databricks.get_workspace_entity_tag_assignment(entity_type=\"dashboards\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\")\ngeniespace_tag = databricks.get_workspace_entity_tag_assignment(entity_type=\"geniespaces\",\n    entity_id=\"2807324866692453\",\n    tag_key=\"sensitivity_level\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var appTag = Databricks.GetWorkspaceEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"apps\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n    });\n\n    var dashboardTag = Databricks.GetWorkspaceEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"dashboards\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n    });\n\n    var geniespaceTag = Databricks.GetWorkspaceEntityTagAssignment.Invoke(new()\n    {\n        EntityType = \"geniespaces\",\n        EntityId = \"2807324866692453\",\n        TagKey = \"sensitivity_level\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupWorkspaceEntityTagAssignment(ctx, \u0026databricks.LookupWorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: \"apps\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t\tTagKey:     \"sensitivity_level\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupWorkspaceEntityTagAssignment(ctx, \u0026databricks.LookupWorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: \"dashboards\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t\tTagKey:     \"sensitivity_level\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.LookupWorkspaceEntityTagAssignment(ctx, \u0026databricks.LookupWorkspaceEntityTagAssignmentArgs{\n\t\t\tEntityType: \"geniespaces\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t\tTagKey:     \"sensitivity_level\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetWorkspaceEntityTagAssignmentArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var appTag = DatabricksFunctions.getWorkspaceEntityTagAssignment(GetWorkspaceEntityTagAssignmentArgs.builder()\n     
       .entityType(\"apps\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .build());\n\n        final var dashboardTag = DatabricksFunctions.getWorkspaceEntityTagAssignment(GetWorkspaceEntityTagAssignmentArgs.builder()\n            .entityType(\"dashboards\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .build());\n\n        final var geniespaceTag = DatabricksFunctions.getWorkspaceEntityTagAssignment(GetWorkspaceEntityTagAssignmentArgs.builder()\n            .entityType(\"geniespaces\")\n            .entityId(\"2807324866692453\")\n            .tagKey(\"sensitivity_level\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  appTag:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignment\n      arguments:\n        entityType: apps\n        entityId: '2807324866692453'\n        tagKey: sensitivity_level\n  dashboardTag:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignment\n      arguments:\n        entityType: dashboards\n        entityId: '2807324866692453'\n        tagKey: sensitivity_level\n  geniespaceTag:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignment\n      arguments:\n        entityType: geniespaces\n        entityId: '2807324866692453'\n        tagKey: sensitivity_level\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getWorkspaceEntityTagAssignment.\n","properties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentProviderConfig:getWorkspaceEntityTagAssignmentProviderConfig","description":"Configure the provider for management through account provider.\n"},"tagKey":{"type":"string","description":"The key of the tag. The characters , . : / - = and leading/trailing spaces are not allowed\n"}},"type":"object","required":["entityId","entityType","tagKey"]},"outputs":{"description":"A collection of values returned by getWorkspaceEntityTagAssignment.\n","properties":{"entityId":{"description":"(string) - The identifier of the entity to which the tag is assigned\n","type":"string"},"entityType":{"description":"(string) - The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentProviderConfig:getWorkspaceEntityTagAssignmentProviderConfig"},"tagKey":{"description":"(string) - The key of the tag. The characters , . 
: / - = and leading/trailing spaces are not allowed\n","type":"string"},"tagValue":{"description":"(string) - The value of the tag\n","type":"string"}},"required":["entityId","entityType","tagKey","tagValue","id"],"type":"object"}},"databricks:index/getWorkspaceEntityTagAssignments:getWorkspaceEntityTagAssignments":{"description":"[![Public Beta](https://img.shields.io/badge/Release_Stage-Public_Beta-orange)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source allows you to retrieve tag assignments that have been applied to a particular workspace scoped entity.\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst appTags = databricks.getWorkspaceEntityTagAssignments({\n    entityType: \"apps\",\n    entityId: \"2807324866692453\",\n});\nconst dashboardTags = databricks.getWorkspaceEntityTagAssignments({\n    entityType: \"dashboards\",\n    entityId: \"2807324866692453\",\n});\nconst geniespaceTags = databricks.getWorkspaceEntityTagAssignments({\n    entityType: \"geniespaces\",\n    entityId: \"2807324866692453\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\napp_tags = databricks.get_workspace_entity_tag_assignments(entity_type=\"apps\",\n    entity_id=\"2807324866692453\")\ndashboard_tags = databricks.get_workspace_entity_tag_assignments(entity_type=\"dashboards\",\n    entity_id=\"2807324866692453\")\ngeniespace_tags = databricks.get_workspace_entity_tag_assignments(entity_type=\"geniespaces\",\n    entity_id=\"2807324866692453\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var appTags = Databricks.GetWorkspaceEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"apps\",\n        EntityId = \"2807324866692453\",\n    });\n\n    var dashboardTags = Databricks.GetWorkspaceEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"dashboards\",\n        EntityId = \"2807324866692453\",\n    });\n\n    var geniespaceTags = Databricks.GetWorkspaceEntityTagAssignments.Invoke(new()\n    {\n        EntityType = \"geniespaces\",\n        EntityId = \"2807324866692453\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetWorkspaceEntityTagAssignments(ctx, \u0026databricks.GetWorkspaceEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"apps\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetWorkspaceEntityTagAssignments(ctx, \u0026databricks.GetWorkspaceEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"dashboards\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\t_, err = databricks.GetWorkspaceEntityTagAssignments(ctx, \u0026databricks.GetWorkspaceEntityTagAssignmentsArgs{\n\t\t\tEntityType: \"geniespaces\",\n\t\t\tEntityId:   \"2807324866692453\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport 
com.pulumi.databricks.inputs.GetWorkspaceEntityTagAssignmentsArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var appTags = DatabricksFunctions.getWorkspaceEntityTagAssignments(GetWorkspaceEntityTagAssignmentsArgs.builder()\n            .entityType(\"apps\")\n            .entityId(\"2807324866692453\")\n            .build());\n\n        final var dashboardTags = DatabricksFunctions.getWorkspaceEntityTagAssignments(GetWorkspaceEntityTagAssignmentsArgs.builder()\n            .entityType(\"dashboards\")\n            .entityId(\"2807324866692453\")\n            .build());\n\n        final var geniespaceTags = DatabricksFunctions.getWorkspaceEntityTagAssignments(GetWorkspaceEntityTagAssignmentsArgs.builder()\n            .entityType(\"geniespaces\")\n            .entityId(\"2807324866692453\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  appTags:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignments\n      arguments:\n        entityType: apps\n        entityId: '2807324866692453'\n  dashboardTags:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignments\n      arguments:\n        entityType: dashboards\n        entityId: '2807324866692453'\n  geniespaceTags:\n    fn::invoke:\n      function: databricks:getWorkspaceEntityTagAssignments\n      arguments:\n        entityType: geniespaces\n        entityId: '2807324866692453'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getWorkspaceEntityTagAssignments.\n","properties":{"entityId":{"type":"string","description":"The identifier of the entity to which the tag is assigned\n"},"entityType":{"type":"string","description":"The type of entity to which the tag is assigned. Allowed values are apps, dashboards, geniespaces\n"},"pageSize":{"type":"integer","description":"Optional. Maximum number of tag assignments to return in a single page\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentsProviderConfig:getWorkspaceEntityTagAssignmentsProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["entityId","entityType"]},"outputs":{"description":"A collection of values returned by getWorkspaceEntityTagAssignments.\n","properties":{"entityId":{"description":"(string) - The identifier of the entity to which the tag is assigned\n","type":"string"},"entityType":{"description":"(string) - The type of entity to which the tag is assigned. 
Allowed values are apps, dashboards, geniespaces\n","type":"string"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"pageSize":{"type":"integer"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentsProviderConfig:getWorkspaceEntityTagAssignmentsProviderConfig"},"tagAssignments":{"items":{"$ref":"#/types/databricks:index/getWorkspaceEntityTagAssignmentsTagAssignment:getWorkspaceEntityTagAssignmentsTagAssignment"},"type":"array"}},"required":["entityId","entityType","tagAssignments","id"],"type":"object"}},"databricks:index/getWorkspaceNetworkOption:getWorkspaceNetworkOption":{"description":"[![GA](https://img.shields.io/badge/Release_Stage-GA-green)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single workspace network option.\n\n\u003e **Note** This data source can only be used with an account-level provider!\n\n## Example Usage\n\nReferring to a network policy by id:\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst _this = databricks.getWorkspaceNetworkOption({\n    workspaceId: \"9999999999999999\",\n});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nthis = databricks.get_workspace_network_option(workspace_id=\"9999999999999999\")\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var @this = Databricks.GetWorkspaceNetworkOption.Invoke(new()\n    {\n        WorkspaceId = \"9999999999999999\",\n    });\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.LookupWorkspaceNetworkOption(ctx, \u0026databricks.LookupWorkspaceNetworkOptionArgs{\n\t\t\tWorkspaceId: \"9999999999999999\",\n\t\t}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetWorkspaceNetworkOptionArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var this_ = DatabricksFunctions.getWorkspaceNetworkOption(GetWorkspaceNetworkOptionArgs.builder()\n            .workspaceId(\"9999999999999999\")\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getWorkspaceNetworkOption\n      arguments:\n        workspaceId: '9999999999999999'\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getWorkspaceNetworkOption.\n","properties":{"workspaceId":{"type":"string","description":"The workspace ID\n"}},"type":"object","required":["workspaceId"]},"outputs":{"description":"A collection of values returned by getWorkspaceNetworkOption.\n","properties":{"id":{"description":"The provider-assigned unique ID for 
this managed resource.","type":"string"},"networkPolicyId":{"description":"(string) - The network policy ID to apply to the workspace. This controls the network access rules\nfor all serverless compute resources in the workspace. Each workspace can only be\nlinked to one policy at a time. If no policy is explicitly assigned,\nthe workspace will use 'default-policy'\n","type":"string"},"workspaceId":{"description":"(integer) - The workspace ID\n","type":"string"}},"required":["networkPolicyId","workspaceId","id"],"type":"object"}},"databricks:index/getWorkspaceSettingV2:getWorkspaceSettingV2":{"description":"[![Public Preview](https://img.shields.io/badge/Release_Stage-Public_Preview-yellowgreen)](https://docs.databricks.com/aws/en/release-notes/release-types)\n\nThis data source can be used to get a single account setting. \n\n## Example Usage\n\nReferring to a setting by id\n\u003c!--Start PulumiCodeChooser --\u003e\n```yaml\nvariables:\n  this:\n    fn::invoke:\n      function: databricks:getWorkspaceSettingV2\n      arguments:\n        name: llm_proxy_partner_powered\n        booleanVal:\n          value: false\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getWorkspaceSettingV2.\n","properties":{"name":{"type":"string","description":"Name of the setting\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2ProviderConfig:getWorkspaceSettingV2ProviderConfig","description":"Configure the provider for management through account provider.\n"}},"type":"object","required":["name"]},"outputs":{"description":"A collection of values returned by getWorkspaceSettingV2.\n","properties":{"aibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy:getWorkspaceSettingV2AibiDashboardEmbeddingAccessPolicy","description":"(AibiDashboardEmbeddingAccessPolicy) - Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingAccessPolicy \"\u003e effective_aibi_dashboard_embedding_access_policy \u003c/span\u003efor final setting value\n"},"aibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains:getWorkspaceSettingV2AibiDashboardEmbeddingApprovedDomains","description":"(AibiDashboardEmbeddingApprovedDomains) - Setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" EffectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" effective_aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" effectiveAibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" effectiveAibiDashboardEmbeddingApprovedDomains \"\u003e effective_aibi_dashboard_embedding_approved_domains \u003c/span\u003efor final setting value\n"},"automaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2AutomaticClusterUpdateWorkspace:getWorkspaceSettingV2AutomaticClusterUpdateWorkspace","description":"(ClusterAutoRestartMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" EffectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-python=\" effective_automatic_cluster_update_workspace \" pulumi-lang-yaml=\" effectiveAutomaticClusterUpdateWorkspace \" pulumi-lang-java=\" effectiveAutomaticClusterUpdateWorkspace \"\u003e effective_automatic_cluster_update_workspace \u003c/span\u003efor final setting value\n"},"booleanVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2BooleanVal:getWorkspaceSettingV2BooleanVal","description":"(BooleanMessage) - Setting value for boolean type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveBooleanVal \" pulumi-lang-dotnet=\" EffectiveBooleanVal \" pulumi-lang-go=\" effectiveBooleanVal \" pulumi-lang-python=\" effective_boolean_val \" pulumi-lang-yaml=\" effectiveBooleanVal \" pulumi-lang-java=\" effectiveBooleanVal \"\u003e effective_boolean_val \u003c/span\u003efor final setting value\n"},"effectiveAibiDashboardEmbeddingAccessPolicy":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy:getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingAccessPolicy","description":"(AibiDashboardEmbeddingAccessPolicy) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingAccessPolicy \" pulumi-lang-go=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-python=\" aibi_dashboard_embedding_access_policy \" pulumi-lang-yaml=\" aibiDashboardEmbeddingAccessPolicy \" pulumi-lang-java=\" aibiDashboardEmbeddingAccessPolicy \"\u003e aibi_dashboard_embedding_access_policy \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_access_policy\n"},"effectiveAibiDashboardEmbeddingApprovedDomains":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains:getWorkspaceSettingV2EffectiveAibiDashboardEmbeddingApprovedDomains","description":"(AibiDashboardEmbeddingApprovedDomains) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-dotnet=\" AibiDashboardEmbeddingApprovedDomains \" pulumi-lang-go=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-python=\" aibi_dashboard_embedding_approved_domains \" pulumi-lang-yaml=\" aibiDashboardEmbeddingApprovedDomains \" pulumi-lang-java=\" aibiDashboardEmbeddingApprovedDomains \"\u003e aibi_dashboard_embedding_approved_domains \u003c/span\u003esetting. This is the final effective value of setting. To set a value use aibi_dashboard_embedding_approved_domains\n"},"effectiveAutomaticClusterUpdateWorkspace":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace:getWorkspaceSettingV2EffectiveAutomaticClusterUpdateWorkspace","description":"(ClusterAutoRestartMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" automaticClusterUpdateWorkspace \" pulumi-lang-dotnet=\" AutomaticClusterUpdateWorkspace \" pulumi-lang-go=\" automaticClusterUpdateWorkspace \" pulumi-lang-python=\" automatic_cluster_update_workspace \" pulumi-lang-yaml=\" automaticClusterUpdateWorkspace \" pulumi-lang-java=\" automaticClusterUpdateWorkspace \"\u003e automatic_cluster_update_workspace \u003c/span\u003esetting. This is the final effective value of setting. To set a value use automatic_cluster_update_workspace\n"},"effectiveBooleanVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveBooleanVal:getWorkspaceSettingV2EffectiveBooleanVal","description":"(BooleanMessage) - Effective setting value for boolean type setting. This is the final effective value of setting. To set a value use boolean_val\n"},"effectiveIntegerVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveIntegerVal:getWorkspaceSettingV2EffectiveIntegerVal","description":"(IntegerMessage) - Effective setting value for integer type setting. This is the final effective value of setting. 
To set a value use integer_val\n"},"effectivePersonalCompute":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectivePersonalCompute:getWorkspaceSettingV2EffectivePersonalCompute","description":"(PersonalComputeMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. This is the final effective value of setting. To set a value use personal_compute\n"},"effectiveRestrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveRestrictWorkspaceAdmins:getWorkspaceSettingV2EffectiveRestrictWorkspaceAdmins","description":"(RestrictWorkspaceAdminsMessage) - Effective setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the final effective value of setting. To set a value use restrict_workspace_admins\n"},"effectiveStringVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2EffectiveStringVal:getWorkspaceSettingV2EffectiveStringVal","description":"(StringMessage) - Effective setting value for string type setting. This is the final effective value of setting. To set a value use string_val\n"},"id":{"description":"The provider-assigned unique ID for this managed resource.","type":"string"},"integerVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2IntegerVal:getWorkspaceSettingV2IntegerVal","description":"(IntegerMessage) - Setting value for integer type setting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveIntegerVal \" pulumi-lang-dotnet=\" EffectiveIntegerVal \" pulumi-lang-go=\" effectiveIntegerVal \" pulumi-lang-python=\" effective_integer_val \" pulumi-lang-yaml=\" effectiveIntegerVal \" pulumi-lang-java=\" effectiveIntegerVal \"\u003e effective_integer_val \u003c/span\u003efor final setting value\n"},"name":{"description":"(string) - Name of the setting\n","type":"string"},"personalCompute":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2PersonalCompute:getWorkspaceSettingV2PersonalCompute","description":"(PersonalComputeMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" personalCompute \" pulumi-lang-dotnet=\" PersonalCompute \" pulumi-lang-go=\" personalCompute \" pulumi-lang-python=\" personal_compute \" pulumi-lang-yaml=\" personalCompute \" pulumi-lang-java=\" personalCompute \"\u003e personal_compute \u003c/span\u003esetting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectivePersonalCompute \" pulumi-lang-dotnet=\" EffectivePersonalCompute \" pulumi-lang-go=\" effectivePersonalCompute \" pulumi-lang-python=\" effective_personal_compute \" pulumi-lang-yaml=\" effectivePersonalCompute \" pulumi-lang-java=\" effectivePersonalCompute \"\u003e effective_personal_compute \u003c/span\u003efor final setting value\n"},"providerConfig":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2ProviderConfig:getWorkspaceSettingV2ProviderConfig"},"restrictWorkspaceAdmins":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2RestrictWorkspaceAdmins:getWorkspaceSettingV2RestrictWorkspaceAdmins","description":"(RestrictWorkspaceAdminsMessage) - Setting value for\u003cspan pulumi-lang-nodejs=\" restrictWorkspaceAdmins \" pulumi-lang-dotnet=\" RestrictWorkspaceAdmins \" pulumi-lang-go=\" restrictWorkspaceAdmins \" pulumi-lang-python=\" restrict_workspace_admins \" pulumi-lang-yaml=\" restrictWorkspaceAdmins \" pulumi-lang-java=\" restrictWorkspaceAdmins \"\u003e restrict_workspace_admins \u003c/span\u003esetting. This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-dotnet=\" EffectiveRestrictWorkspaceAdmins \" pulumi-lang-go=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-python=\" effective_restrict_workspace_admins \" pulumi-lang-yaml=\" effectiveRestrictWorkspaceAdmins \" pulumi-lang-java=\" effectiveRestrictWorkspaceAdmins \"\u003e effective_restrict_workspace_admins \u003c/span\u003efor final setting value\n"},"stringVal":{"$ref":"#/types/databricks:index/getWorkspaceSettingV2StringVal:getWorkspaceSettingV2StringVal","description":"(StringMessage) - Setting value for string type setting. 
This is the setting value set by consumers, check\u003cspan pulumi-lang-nodejs=\" effectiveStringVal \" pulumi-lang-dotnet=\" EffectiveStringVal \" pulumi-lang-go=\" effectiveStringVal \" pulumi-lang-python=\" effective_string_val \" pulumi-lang-yaml=\" effectiveStringVal \" pulumi-lang-java=\" effectiveStringVal \"\u003e effective_string_val \u003c/span\u003efor final setting value\n"}},"required":["aibiDashboardEmbeddingAccessPolicy","aibiDashboardEmbeddingApprovedDomains","automaticClusterUpdateWorkspace","booleanVal","effectiveAibiDashboardEmbeddingAccessPolicy","effectiveAibiDashboardEmbeddingApprovedDomains","effectiveAutomaticClusterUpdateWorkspace","effectiveBooleanVal","effectiveIntegerVal","effectivePersonalCompute","effectiveRestrictWorkspaceAdmins","effectiveStringVal","integerVal","name","personalCompute","restrictWorkspaceAdmins","stringVal","id"],"type":"object"}},"databricks:index/getZones:getZones":{"description":"This data source allows you to fetch all available AWS availability zones on your workspace on AWS.\n\n\u003e This data source can only be used with a workspace-level provider!\n\n## Example Usage\n\n\u003c!--Start PulumiCodeChooser --\u003e\n```typescript\nimport * as pulumi from \"@pulumi/pulumi\";\nimport * as databricks from \"@pulumi/databricks\";\n\nconst zones = databricks.getZones({});\n```\n```python\nimport pulumi\nimport pulumi_databricks as databricks\n\nzones = databricks.get_zones()\n```\n```csharp\nusing System.Collections.Generic;\nusing System.Linq;\nusing Pulumi;\nusing Databricks = Pulumi.Databricks;\n\nreturn await Deployment.RunAsync(() =\u003e \n{\n    var zones = Databricks.GetZones.Invoke();\n\n});\n```\n```go\npackage main\n\nimport (\n\t\"github.com/pulumi/pulumi-databricks/sdk/go/databricks\"\n\t\"github.com/pulumi/pulumi/sdk/v3/go/pulumi\"\n)\n\nfunc main() {\n\tpulumi.Run(func(ctx *pulumi.Context) error {\n\t\t_, err := databricks.GetZones(ctx, \u0026databricks.GetZonesArgs{}, nil)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\treturn nil\n\t})\n}\n```\n```java\npackage generated_program;\n\nimport com.pulumi.Context;\nimport com.pulumi.Pulumi;\nimport com.pulumi.core.Output;\nimport com.pulumi.databricks.DatabricksFunctions;\nimport com.pulumi.databricks.inputs.GetZonesArgs;\nimport java.util.List;\nimport java.util.ArrayList;\nimport java.util.Map;\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\n\npublic class App {\n    public static void main(String[] args) {\n        Pulumi.run(App::stack);\n    }\n\n    public static void stack(Context ctx) {\n        final var zones = DatabricksFunctions.getZones(GetZonesArgs.builder()\n            .build());\n\n    }\n}\n```\n```yaml\nvariables:\n  zones:\n    fn::invoke:\n      function: databricks:getZones\n      arguments: {}\n```\n\u003c!--End PulumiCodeChooser --\u003e\n","inputs":{"description":"A collection of arguments for invoking getZones.\n","properties":{"defaultZone":{"type":"string","description":"This is the default zone that gets assigned to your workspace. This is the zone used by default for clusters and instance pools.\n"},"id":{"type":"string","description":"The id for the zone object.\n"},"providerConfig":{"$ref":"#/types/databricks:index/getZonesProviderConfig:getZonesProviderConfig","description":"Configure the provider for management through account provider. 
This block consists of the following fields:\n","willReplaceOnChanges":true},"zones":{"type":"array","items":{"type":"string"},"description":"This is a list of all the zones available for your subnets in your Databricks workspace.\n"}},"type":"object"},"outputs":{"description":"A collection of values returned by getZones.\n","properties":{"defaultZone":{"description":"This is the default zone that gets assigned to your workspace. This is the zone used by default for clusters and instance pools.\n","type":"string"},"id":{"description":"The id for the zone object.\n","type":"string"},"providerConfig":{"$ref":"#/types/databricks:index/getZonesProviderConfig:getZonesProviderConfig"},"zones":{"description":"This is a list of all the zones available for your subnets in your Databricks workspace.\n","items":{"type":"string"},"type":"array"}},"required":["defaultZone","id","zones"],"type":"object"}},"pulumi:providers:databricks/terraformConfig":{"description":"This function returns a Terraform config object with terraform-namecased keys, to be used with the Terraform Module Provider.","inputs":{"properties":{"__self__":{"type":"ref","$ref":"#/provider"}},"type":"pulumi:providers:databricks/terraformConfig","required":["__self__"]},"outputs":{"properties":{"result":{"additionalProperties":{"$ref":"pulumi.json#/Any"},"type":"object"}},"required":["result"],"type":"object"}}}}