diff --git a/examples/ibm-event-streams/README.md b/examples/ibm-event-streams/README.md
index 7a1aa0679e..2e3dab13b1 100644
--- a/examples/ibm-event-streams/README.md
+++ b/examples/ibm-event-streams/README.md
@@ -1,70 +1,101 @@
 # IBM Event Streams examples
 
-This example shows 3 usage scenarios.
+This example shows several Event Streams usage scenarios.
 
-#### Scenario 1: Create an Event Streams service instance and topic.
+## Creating Event Streams instances
+
+Event Streams service instances are created with the `"ibm_resource_instance"` resource type.
+
+The following `"ibm_resource_instance"` arguments are required:
+
+- `name`: The service instance name, as it will appear in the Event Streams UI and CLI.
+
+- `service`: Use `"messagehub"` for an Event Streams instance.
+
+- `plan`: One of `"lite"`, `"standard"`, or `"enterprise-3nodes-2tb"`. For more information about the plans, see [Choosing your plan](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-plan_choose). Note: `"enterprise-3nodes-2tb"` selects the Enterprise plan.
+
+- `location`: The region where the service instance will be provisioned. For a list of regions, see [Region and data center locations](https://cloud.ibm.com/docs/overview?topic=overview-locations).
+
+- `resource_group_id`: The ID of the resource group in which the instance will be provisioned. For more information about resource groups, see [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs).
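+
+A minimal declaration using only the required arguments might look like the following sketch (the instance name is illustrative; Scenario 1 below is a complete example):
+
+```terraform
+resource "ibm_resource_instance" "es_instance" {
+  name              = "my-event-streams" # illustrative name
+  service           = "messagehub"
+  plan              = "standard"
+  location          = "us-south"
+  resource_group_id = data.ibm_resource_group.group.id
+}
+```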
+
+The `parameters` argument is optional and provides additional provision or update options. Supported parameters are:
+
+- `throughput`: One of `"150"` (the default), `"300"`, or `"450"`. The maximum capacity in MB/s at which messages can be produced or consumed. For enterprise instances only. For more information, see [Scaling Enterprise plan capacity](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity). *Note:* See [Scaling combinations](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity#ES_scaling_combinations) for the allowed combinations of `throughput` and `storage_size`.
+  - Example: `throughput = "300"`
+
+- `storage_size`: One of `"2048"` (the default), `"4096"`, `"6144"`, `"8192"`, `"10240"`, or `"12288"`. The amount of storage capacity in GB. For enterprise instances only. For more information, see [Scaling Enterprise plan capacity](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity). *Note:* See [Scaling combinations](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity#ES_scaling_combinations) for the allowed combinations of `throughput` and `storage_size`.
+  - Example: `storage_size = "4096"`
+
+- `service-endpoints`: One of `"public"` (the default), `"private"`, or `"public-and-private"`. For enterprise instances only. For more information, see [Restricting network access](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-restrict_access).
+  - Example: `service-endpoints = "private"`
+
+- `private_ip_allowlist`: **Deprecated.** An array of CIDRs specifying a private IP allowlist. For enterprise instances only. For more information, see [Specifying an IP allowlist](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-restrict_access#specify_allowlist). This feature has been deprecated in favor of context-based restrictions.
+  - Example: `private_ip_allowlist = "[10.0.0.0/32,10.0.0.1/32]"`
+
+- `metrics`: An array of strings; allowed values are `"topic"`, `"partition"`, and `"consumers"`. Enables additional enhanced metrics for the instance. For enterprise instances only. For more information on enhanced metrics, see [Enabling enhanced Event Streams metrics](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-metrics#opt_in_enhanced_metrics).
+  - Example: `metrics = "[topic,partition]"`
+
+- `kms_key_crn`: The CRN (as a string) of a customer-managed root key provisioned with IBM Cloud Key Protect or Hyper Protect Crypto Services. If provided, this key is used to encrypt all data at rest. For enterprise instances only. For more information on customer-managed encryption, see [Managing encryption in Event Streams](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-managing_encryption).
+  - Example: `kms_key_crn = "crn:v1:prod:public:kms:us-south:a/6db1b0d0b5c54ee5c201552547febcd8:20adf7eb-e095-4dec-08cf-0b7d81e32db6:key:3fa9d921-d3b6-3516-a1ec-d54e27e7638b"`
+
+The `timeouts` argument specifies how long the IBM Cloud Terraform provider waits for the provision, update, or deprovision of the service instance. Values of 15 minutes are sufficient for lite and standard plans. For enterprise plans:
+- Use "3h" for create. Add an additional 1 hour for each level of non-default throughput, and an additional 30 minutes for each level of non-default storage size. For example, with `throughput = "300"` (one level over the default) and `storage_size = "8192"` (three levels over the default), use 3 hours + 1 * 1 hour + 3 * 30 minutes = 5.5 hours.
+- Use "1h" for update. If increasing the throughput or storage size, add an additional 1 hour for each level of non-default throughput, and an additional 30 minutes for each level of non-default storage size.
+- Use "1h" for delete.
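+
+For example, an enterprise instance provisioned with `throughput = "300"` (one level over the default) and `storage_size = "4096"` (one level over the default) needs create = 3h + 1h + 30m = 4.5h and update = 1h + 1h + 30m = 2.5h, as in the following sketch (the instance name is illustrative; Scenario 2 below is a complete example):
+
+```terraform
+resource "ibm_resource_instance" "es_enterprise_example" {
+  name              = "my-enterprise-instance" # illustrative name
+  service           = "messagehub"
+  plan              = "enterprise-3nodes-2tb"
+  location          = "us-south"
+  resource_group_id = data.ibm_resource_group.group.id
+
+  parameters = {
+    throughput   = "300"
+    storage_size = "4096"
+  }
+
+  timeouts {
+    create = "270m" # 3h + 1h + 30m = 4.5h
+    update = "150m" # 1h + 1h + 30m = 2.5h
+    delete = "1h"
+  }
+}
+```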
+
+## Scenarios
+
+#### Scenario 1: Create an Event Streams standard-plan service instance.
+
+This creates a standard-plan instance in us-south.
 
 ```terraform
 resource "ibm_resource_instance" "es_instance_1" {
   name              = "terraform-integration-1"
   service           = "messagehub"
-  plan              = "standard" # "lite", "enterprise-3nodes-2tb"
-  location          = "us-south" # "us-east", "eu-gb", "eu-de", "jp-tok", "au-syd"
+  plan              = "standard"
+  location          = "us-south"
   resource_group_id = data.ibm_resource_group.group.id
 
-  # parameters = {
-  #   service-endpoints    = "private" # for enterprise instance only, Options are: "public", "public-and-private", "private". Default is "public" when not specified.
-  #   private_ip_allowlist = "[10.0.0.0/32,10.0.0.1/32]" # for enterprise instance only. Specify 1 or more IP range in CIDR format
-  #   # document about using private service endpoint and IP allowlist to restrict access: https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-restrict_access
-
-  #   throughput   = "150"  # for enterprise instance only. Options are: "150", "300", "450". Default is "150" when not specified.
-  #   storage_size = "2048" # for enterprise instance only. Options are: "2048", "4096", "6144", "8192", "10240", "12288". Default is "2048" when not specified.
-  #   # Note: when throughput is "300", storage_size starts from "4096", when throughput is "450", storage_size starts from "6144"
-  #   # document about supported combinations of throughput and storage_size: https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity#ES_scaling_combinations
-  # }
-
-  # timeouts {
-  #   create = "15m" # use 3h when creating enterprise instance, add additional 1h for each level of non-default throughput, add additional 30m for each level of non-default storage_size
-  #   update = "15m" # use 1h when updating enterprise instance, add additional 1h for each level of non-default throughput, add additional 30m for each level of non-default storage_size
-  #   delete = "15m"
-  # }
-}
-
-resource "ibm_event_streams_topic" "es_topic_1" {
-  resource_instance_id = ibm_resource_instance.es_instance_1.id
-  name                 = "my-es-topic"
-  partitions           = 1
-  config = {
-    "cleanup.policy"  = "compact,delete"
-    "retention.ms"    = "86400000"
-    "retention.bytes" = "1073741824"
-    "segment.bytes"   = "536870912"
+  timeouts {
+    create = "15m"
+    update = "15m"
+    delete = "15m"
   }
 }
 ```
 
-#### Scenario 2: Create a topic on an existing Event Streams instance.
+#### Scenario 2: Create an Event Streams enterprise service instance with non-default attributes
+
+This creates an enterprise plan instance in us-east with 300 MB/s throughput, 4 TB storage, private endpoints with an allowlist, and enhanced metrics for topics and consumer groups. The timeouts are calculated as described above.
 
 ```terraform
-data "ibm_resource_instance" "es_instance_2" {
+resource "ibm_resource_instance" "es_instance_2" {
   name              = "terraform-integration-2"
+  service           = "messagehub"
+  plan              = "enterprise-3nodes-2tb"
+  location          = "us-east"
   resource_group_id = data.ibm_resource_group.group.id
-}
 
-resource "ibm_event_streams_topic" "es_topic_2" {
-  resource_instance_id = data.ibm_resource_instance.es_instance_2.id
-  name                 = "my-es-topic"
-  partitions           = 1
-  config = {
-    "cleanup.policy"  = "compact,delete"
-    "retention.ms"    = "86400000"
-    "retention.bytes" = "1073741824"
-    "segment.bytes"   = "536870912"
+  parameters = {
+    throughput           = "300"
+    storage_size         = "4096"
+    service-endpoints    = "private"
+    private_ip_allowlist = "[10.0.0.0/32,10.0.0.1/32]"
+    metrics              = "[topic,consumers]"
+  }
+
+  timeouts {
+    create = "270m" # 3h + 1h + 30m = 4.5h
+    update = "150m" # 1h + 1h + 30m = 2.5h
+    delete = "1h"
   }
 }
 ```
 
-#### Scenario 3: Create a kafka consumer application connecting to an existing Event Streams instance and its topics.
+#### Scenario 3: Create a topic on an existing Event Streams instance.
+
+For more information on topics and topic parameters, see [Topics and partitions](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-apache_kafka&interface=ui#kafka_topics_partitions) and [Using the administration Kafka Java client API](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-kafka_java_api).
 
 ```terraform
 data "ibm_resource_instance" "es_instance_3" {
@@ -72,20 +103,23 @@ data "ibm_resource_instance" "es_instance_3" {
   name              = "terraform-integration-3"
   resource_group_id = data.ibm_resource_group.group.id
 }
 
-data "ibm_event_streams_topic" "es_topic_3" {
+resource "ibm_event_streams_topic" "es_topic_3" {
   resource_instance_id = data.ibm_resource_instance.es_instance_3.id
   name                 = "my-es-topic"
-}
-
-resource "kafka_consumer_app" "es_kafka_app" {
-  bootstrap_server = lookup(data.ibm_resource_instance.es_instance_3.extensions, "kafka_brokers_sasl", [])
-  topics           = [data.ibm_event_streams_topic.es_topic_3.name]
-  apikey           = var.es_reader_api_key
+  partitions           = 1
+  config = {
+    "cleanup.policy"  = "compact,delete"
+    "retention.ms"    = "86400000"
+    "retention.bytes" = "1073741824"
+    "segment.bytes"   = "536870912"
+  }
 }
 ```
 
 #### Scenario 4: Create a schema on an existing Event Streams Enterprise instance
 
+For more information on the Event Streams schema registry, see [Using Event Streams Schema Registry](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_schema_registry).
+
 ```terraform
 data "ibm_resource_instance" "es_instance_4" {
   name              = "terraform-integration-4"
@@ -108,6 +142,60 @@ resource "ibm_event_streams_schema" "es_schema" {
 }
 ```
 
+#### Scenario 5: Apply access tags to an Event Streams service instance
+
+Tags are applied using the `"ibm_resource_tag"` terraform resource.
+For more information about tagging, see the documentation for the `"ibm_resource_tag"` resource and [Tagging](https://cloud.ibm.com/apidocs/tagging).
+
+```terraform
+data "ibm_resource_instance" "es_instance_5" {
+  name              = "terraform-integration-5"
+  resource_group_id = data.ibm_resource_group.group.id
+}
+
+resource "ibm_resource_tag" "tag_example_on_es" {
+  tags        = ["example:tag"]
+  tag_type    = "access"
+  resource_id = data.ibm_resource_instance.es_instance_5.id
+}
+```
+
+#### Scenario 6: Connect to an existing Event Streams instance and its topics.
+
+This scenario uses a fictitious `"kafka_consumer_app"` resource to demonstrate how a consumer application could be configured.
+The resource uses three configuration properties:
+
+1. The Kafka broker hostnames used to connect to the service instance.
+2. An API key for reading from the topics.
+3. The names of the topic(s) from which the consumer should read.
+
+The broker hostnames would be required by any consumer or producer application. After the Event Streams service instance has been created, they are available in the `extensions` attribute of the service instance, as an array named `"kafka_brokers_sasl"`. This is shown in the example.
+
+An API key would also be required by any application. This key would typically be created with reduced permissions to restrict the operations it can perform, for example only allowing it to read from certain topics. See [Managing authentication to your Event Streams instance](https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-security) for more information on creating keys. The example assumes the key is provided as a terraform variable.
+
+The topic names can be provided as strings, or can be taken from topic data sources as shown in the example.
+
+```terraform
+# Use an existing instance
+data "ibm_resource_instance" "es_instance_6" {
+  name              = "terraform-integration-6"
+  resource_group_id = data.ibm_resource_group.group.id
+}
+
+# Use an existing topic on that instance
+data "ibm_event_streams_topic" "es_topic_6" {
+  resource_instance_id = data.ibm_resource_instance.es_instance_6.id
+  name                 = "my-es-topic"
+}
+
+# The FICTITIOUS consumer application, configured with brokers, API key, and topics
+resource "kafka_consumer_app" "es_kafka_app" {
+  bootstrap_server = lookup(data.ibm_resource_instance.es_instance_6.extensions, "kafka_brokers_sasl", [])
+  apikey           = var.es_reader_api_key
+  topics           = [data.ibm_event_streams_topic.es_topic_6.name]
+}
+```
+
 ## Dependencies
 
 - The owner of the `ibmcloud_api_key` has permission to create Event Streams instance under specified resource group and has Manager role to the created instance in order to create topic.
 
@@ -116,9 +204,7 @@ resource "ibm_event_streams_schema" "es_schema" {
 
 ## Configuration
 
-- `ibmcloud_api_key` - An API key for IBM Cloud services. If you don't have one already, go to https://cloud.ibm.com/iam/#/apikeys and create a new key.
-
-- `es_reader_api_key` - An service ID API key with reduced permission in scenario 3 if user wish to scope the access to Event Streams instance and topics.
+- `ibmcloud_api_key` - An API key for IBM Cloud services. If you don't have one already, go to https://cloud.ibm.com/iam/apikeys and create a new key.
 
 ## Running the configuration
 
diff --git a/examples/ibm-event-streams/main.tf b/examples/ibm-event-streams/main.tf
index 44c3e91214..a16618467b 100644
--- a/examples/ibm-event-streams/main.tf
+++ b/examples/ibm-event-streams/main.tf
@@ -1,83 +1,72 @@
+# This is not functional Terraform code as-is. It is intended as a template: remove the
+# scenarios you don't need and edit the remaining sections.
+
+# Replace the resource group name with the one in which your resources should be created
 data "ibm_resource_group" "group" {
   name = "Default"
 }
 
-#### Scenario 1: Create Event Streams service instance and topic
+#### Scenario 1: Create an Event Streams standard-plan service instance.
 resource "ibm_resource_instance" "es_instance_1" {
   name              = "terraform-integration-1"
   service           = "messagehub"
-  plan              = "standard" # "lite", "enterprise-3nodes-2tb"
-  location          = "us-south" # "us-east", "eu-gb", "eu-de", "jp-tok", "au-syd"
+  plan              = "standard"
+  location          = "us-south"
   resource_group_id = data.ibm_resource_group.group.id
 
-  # parameters = {
-  #   service-endpoints    = "private" # for enterprise instance only, Options are: "public", "public-and-private", "private". Default is "public" when not specified.
-  #   private_ip_allowlist = "[10.0.0.0/32,10.0.0.1/32]" # for enterprise instance only. Specify 1 or more IP range in CIDR format
-  #   # document about using private service endpoint and IP allowlist to restrict access: https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-restrict_access
-
-  #   throughput   = "150"  # for enterprise instance only. Options are: "150", "300", "450". Default is "150" when not specified.
-  #   storage_size = "2048" # for enterprise instance only. Options are: "2048", "4096", "6144", "8192", "10240", "12288". Default is "2048" when not specified.
-  #   kms_key_crn  = "crn:v1:bluemix:public:kms:us-south:a/6db1b0d0b5c54ee5c201552547febcd8:0aa69b09-941b-41b2-bbf9-9f9f0f6a6f79:key:dd37a0b6-eff4-4708-8459-e29ae0a8f256" # for enterprise instance only. Specify the CRN of a root key from a Key Management Service instance used to encrypt disks.
-  #   # Note: when throughput is "300", storage_size starts from "4096", when throughput is "450", storage_size starts from "6144"
-  #   # document about supported combinations of throughput and storage_size: https://cloud.ibm.com/docs/EventStreams?topic=EventStreams-ES_scaling_capacity#ES_scaling_combinations
-  # }
-
-  # timeouts {
-  #   create = "15m" # use 3h when creating enterprise instance, add additional 1h for each level of non-default throughput, add additional 30m for each level of non-default storage_size
-  #   update = "15m" # use 1h when updating enterprise instance, add additional 1h for each level of non-default throughput, add additional 30m for each level of non-default storage_size
-  #   delete = "15m"
-  # }
-}
-
-resource "ibm_event_streams_topic" "es_topic_1" {
-  resource_instance_id = ibm_resource_instance.es_instance_1.id
-  name                 = "my-es-topic"
-  partitions           = 1
-  config = {
-    "cleanup.policy"  = "compact,delete"
-    "retention.ms"    = "86400000"
-    "retention.bytes" = "1073741824"
-    "segment.bytes"   = "536870912"
+  timeouts {
+    create = "15m"
+    update = "15m"
+    delete = "15m"
   }
 }
 
-#### Scenario 2: Create topic on an existing Event Streams instance
-data "ibm_resource_instance" "es_instance_2" {
+#### Scenario 2: Create an Event Streams enterprise service instance with non-default attributes
+resource "ibm_resource_instance" "es_instance_2" {
   name              = "terraform-integration-2"
+  service           = "messagehub"
+  plan              = "enterprise-3nodes-2tb"
+  location          = "us-east"
   resource_group_id = data.ibm_resource_group.group.id
-}
 
-resource "ibm_event_streams_topic" "es_topic_2" {
-  resource_instance_id = data.ibm_resource_instance.es_instance_2.id
-  name                 = "my-es-topic"
-  partitions           = 1
-  config = {
-    "cleanup.policy"  = "compact,delete"
-    "retention.ms"    = "86400000"
-    "retention.bytes" = "1073741824"
-    "segment.bytes"   = "536870912"
+  parameters = {
+    throughput           = "300"
+    storage_size         = "4096"
+    service-endpoints    = "private"
+    private_ip_allowlist = "[10.0.0.0/32,10.0.0.1/32]"
+    metrics              = "[topic,consumers]"
+  }
+
+  timeouts {
+    create = "270m" # 3h + 1h + 30m = 4.5h
+    update = "150m" # 1h + 1h + 30m = 2.5h
+    delete = "1h"
   }
 }
 
-#### Scenario 3: Create a kafka consumer application connecting to an existing Event Streams instance and its topics
+#### Scenario 3: Create a topic on an existing Event Streams instance.
+
+# The existing instance
 data "ibm_resource_instance" "es_instance_3" {
   name              = "terraform-integration-3"
   resource_group_id = data.ibm_resource_group.group.id
 }
 
-data "ibm_event_streams_topic" "es_topic_3" {
+resource "ibm_event_streams_topic" "es_topic_3" {
   resource_instance_id = data.ibm_resource_instance.es_instance_3.id
   name                 = "my-es-topic"
+  partitions           = 1
+  config = {
+    "cleanup.policy"  = "compact,delete"
+    "retention.ms"    = "86400000"
+    "retention.bytes" = "1073741824"
+    "segment.bytes"   = "536870912"
+  }
 }
 
-resource "kafka_consumer_app" "es_kafka_app" {
-  bootstrap_server = lookup(data.ibm_resource_instance.es_instance_3.extensions, "kafka_brokers_sasl", [])
-  topics           = [data.ibm_event_streams_topic.es_topic_3.name]
-  apikey           = var.es_reader_api_key
-}
+#### Scenario 4: Create a schema on an existing Event Streams Enterprise instance
 
-#### Scenario 4 Create a schema on an existing Event Streams Enterprise instance
 data "ibm_resource_instance" "es_instance_4" {
   name              = "terraform-integration-4"
   resource_group_id = data.ibm_resource_group.group.id
@@ -85,7 +74,7 @@ data "ibm_resource_instance" "es_instance_4" {
 
 resource "ibm_event_streams_schema" "es_schema" {
   resource_instance_id = data.ibm_resource_instance.es_instance_4.id
-  schema_id            = "my-es-schema"
+  schema_id            = "tf_schema"
   schema               = <
 > **NOTE:** An error is raised if the name is given with the prefix `ibm-`.
 - `subnet` - (Required, Forces new resource, String) The subnet ID for the reserved IP.
-- `target` - (Optional, string) The ID for the target endpoint gateway for the reserved IP.
-
+- `target` - (Optional, String) The target to bind this reserved IP to. The target must be in the same VPC. If unspecified, the reserved IP will be created unbound. The following targets are supported:
+  - An endpoint gateway not already bound to a reserved IP in the subnet's zone.
+  - A virtual network interface.
+
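+A minimal sketch of binding a new reserved IP to an endpoint gateway might look like this (the resource names are illustrative and assume an existing subnet and endpoint gateway in the same VPC):
+
+```terraform
+resource "ibm_is_subnet_reserved_ip" "example" {
+  subnet = ibm_is_subnet.example.id
+  name   = "example-reserved-ip"
+  target = ibm_is_virtual_endpoint_gateway.example.id # illustrative target; omit to create the IP unbound
+}
+```
+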