Terraform module to set up all resources needed for an AWS OpenSearch Service domain.

## Requirements
| Name | Version |
|---|---|
| terraform | >= 1.3.9 |
| aws | ~> 6.0 |
## Providers

| Name | Version |
|---|---|
| aws | ~> 6.0 |
## Modules

No modules.

## Resources
| Name | Type |
|---|---|
| aws_cloudwatch_log_group.cwl_application | resource |
| aws_cloudwatch_log_group.cwl_index | resource |
| aws_cloudwatch_log_group.cwl_search | resource |
| aws_cloudwatch_log_resource_policy.cwl_resource_policy | resource |
| aws_opensearch_domain.os | resource |
| aws_security_group.sg | resource |
| aws_iam_policy_document.cwl_policy | data source |
| aws_region.current | data source |
| aws_subnet.private | data source |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| instance_type | Instance type to use for the OpenSearch domain | string | n/a | yes |
| name | Name to use for the OpenSearch domain | string | n/a | yes |
| volume_size | EBS volume size (in GB) to use for the OpenSearch domain | number | n/a | yes |
| application_logging_enabled | Whether to enable OpenSearch application logs (error) in CloudWatch | bool | false | no |
| auto_software_update_enabled | Whether automatic service software updates are enabled for the domain | bool | true | no |
| availability_zone_count | Number of Availability Zones for the domain to use with zone_awareness_enabled. Valid values: 2 or 3. Automatically configured through the number of instances/subnets available if not set | number | null | no |
| cognito_enabled | Whether to enable Cognito for authentication in Kibana | bool | false | no |
| cognito_identity_pool_id | Required when cognito_enabled is enabled: ID of the Cognito Identity Pool to use | string | null | no |
| cognito_role_arn | Required when cognito_enabled is enabled: ARN of the IAM role that has the AmazonESCognitoAccess policy attached | string | null | no |
| cognito_user_pool_id | Required when cognito_enabled is enabled: ID of the Cognito User Pool to use | string | null | no |
| custom_endpoint | The domain name to use as custom endpoint for Elasticsearch | string | null | no |
| custom_endpoint_certificate_arn | ARN of the ACM certificate to use for the custom endpoint. Required when custom_endpoint is set and endpoint_enforce_https is enabled | string | null | no |
| dedicated_master_count | Number of dedicated master nodes in the domain (can be 3 or 5) | number | 3 | no |
| dedicated_master_enabled | Whether dedicated master nodes are enabled for the domain. Automatically enabled when warm_enabled = true | bool | false | no |
| dedicated_master_type | Instance type of the dedicated master nodes in the domain | string | "t3.small.search" | no |
| encrypt_at_rest | Whether to enable encryption at rest for the cluster. Changing this on an existing cluster will force a new resource! | bool | true | no |
| encrypt_at_rest_kms_key_id | The KMS key ID to encrypt the OpenSearch domain with. If not specified, defaults to the aws/es service KMS key | string | null | no |
| endpoint_enforce_https | Whether or not to require HTTPS | bool | true | no |
| endpoint_tls_security_policy | The name of the TLS security policy that needs to be applied to the HTTPS endpoint. Valid values: Policy-Min-TLS-1-0-2019-07 and Policy-Min-TLS-1-2-2019-07 | string | "Policy-Min-TLS-1-2-2019-07" | no |
| ephemeral_list | m3 and r3 are supported by AWS using ephemeral storage, but are legacy instance types: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/supported-instance-types.html | list(string) | [...] | no |
| instance_count | Size of the OpenSearch domain | number | 1 | no |
| logging_enabled | Whether to enable OpenSearch slow logs (index & search) in CloudWatch | bool | false | no |
| logging_retention | How many days to retain OpenSearch logs in CloudWatch | number | 30 | no |
| node_to_node_encryption | Whether to enable node-to-node encryption. Changing this on an existing cluster will force a new resource! | bool | true | no |
| options_indices_fielddata_cache_size | Sets the indices.fielddata.cache.size advanced option: the percentage of heap space allocated to fielddata | number | null | no |
| options_indices_query_bool_max_clause_count | Sets the indices.query.bool.max_clause_count advanced option: the maximum number of allowed boolean clauses in a query | number | 1024 | no |
| options_override_main_response_version | Whether to enable compatibility mode when creating an OpenSearch domain. Because certain Elasticsearch OSS clients and plugins check the cluster version before connecting, compatibility mode sets OpenSearch to report its version as 7.10 so these clients continue to work | bool | false | no |
| options_rest_action_multi_allow_explicit_index | Sets the rest.action.multi.allow_explicit_index advanced option. When set to false, OpenSearch will reject requests that have an explicit index specified in the request body | bool | true | no |
| search_version | Version of the OpenSearch domain | string | "OpenSearch_2.19" | no |
| security_group_ids | Extra security group IDs to attach to the OpenSearch domain. Note: a default SG is already created and exposed via outputs | list(string) | [] | no |
| snapshot_start_hour | Hour during which an automated daily snapshot is taken of the OpenSearch indices | number | 3 | no |
| subnet_ids | Required if vpc_id is specified: subnet IDs for the VPC-enabled OpenSearch domain endpoints to be created in | list(string) | [] | no |
| tags | Optional tags | map(string) | {} | no |
| volume_iops | Required if volume_type = "io1" or "gp3": amount of provisioned IOPS for the EBS volume | number | 0 | no |
| volume_throughput | Required if volume_type = "gp3": amount of throughput in MiB/s for the EBS volume. For more information, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html#current-general-purpose | number | 125 | no |
| volume_type | EBS volume type to use for the OpenSearch domain | string | "gp2" | no |
| vpc_id | VPC ID where to deploy the OpenSearch domain. If set, you also need to specify subnet_ids. If not set, the module creates a public domain | string | null | no |
| warm_count | Number of warm nodes (2 - 150) | number | 2 | no |
| warm_enabled | Whether to enable warm storage | bool | false | no |
| warm_type | Instance type of the warm nodes | string | "ultrawarm1.medium.search" | no |
| zone_awareness_enabled | Whether to enable zone awareness. If not set, multi-AZ is enabled by default and configured through the number of instances/subnets available | bool | null | no |
## Outputs

| Name | Description |
|---|---|
| arn | ARN of the OpenSearch domain |
| domain_id | ID of the OpenSearch domain |
| domain_name | Name of the OpenSearch domain |
| domain_region | Region of the OpenSearch domain |
| endpoint | DNS endpoint of the OpenSearch domain |
| kibana_endpoint | DNS endpoint of Kibana |
| sg_id | ID of the OpenSearch security group |
```hcl
module "opensearch" {
  source         = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=14.0.0"
  name           = "logs-${terraform.workspace}-es"
  instance_count = 3
  instance_type  = "m7g.large.search"
  volume_size    = 100
  vpc_id         = data.terraform_remote_state.networking.outputs.vpc_id
  subnet_ids     = data.terraform_remote_state.networking.outputs.private_db_subnets
}
```
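Beyond this minimal example, the optional inputs can be combined for a more highly-available setup. The following is an illustrative sketch only; all sizing values and the remote state data source names are assumptions, not recommendations:

```hcl
# Sketch: a 3-AZ domain with dedicated master nodes and gp3 storage.
# All values are example assumptions; adjust them to your workload.
module "opensearch_ha" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=14.0.0"

  name           = "search-production"
  instance_count = 6
  instance_type  = "m7g.large.search"

  # gp3 requires provisioned IOPS and throughput (see volume_iops / volume_throughput)
  volume_size       = 200
  volume_type       = "gp3"
  volume_iops       = 3000
  volume_throughput = 250

  dedicated_master_enabled = true
  dedicated_master_count   = 3
  dedicated_master_type    = "m7g.large.search"

  zone_awareness_enabled  = true
  availability_zone_count = 3

  vpc_id     = data.terraform_remote_state.networking.outputs.vpc_id
  subnet_ids = data.terraform_remote_state.networking.outputs.private_db_subnets
}
```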
```hcl
data "aws_iam_policy_document" "opensearch" {
  statement {
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = [aws_iam_user.es_user.arn]
    }

    actions   = ["es:*"]
    resources = ["${module.opensearch.arn}/*"]
  }
}

resource "aws_opensearch_domain_policy" "opensearch" {
  domain_name     = module.opensearch.domain_name
  access_policies = data.aws_iam_policy_document.opensearch.json
}
```

By default this module creates CloudWatch Log Groups & IAM permissions for OpenSearch slow logging (search & index), but it doesn't enable these logs by default. You can control the logging behavior via the `logging_enabled` and `logging_retention` parameters. When enabling this, make sure you also enable it on the OpenSearch side, following the AWS documentation.
You can also enable OpenSearch error logs via `application_logging_enabled = true`.
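Putting the logging inputs together, a minimal sketch (the module arguments other than the logging ones are illustrative):

```hcl
# Sketch: enabling slow logs and error logs; sizing values are assumptions
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=14.0.0"

  name           = "logs-example"
  instance_count = 3
  instance_type  = "m7g.large.search"
  volume_size    = 100

  logging_enabled             = true # index & search slow logs
  application_logging_enabled = true # error logs
  logging_retention           = 14   # days to keep logs in CloudWatch
}
```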
This module will not work without the default service-linked role `AWSServiceRoleForAmazonOpenSearchService`. This role needs to be created per account, so you will need to add it if it's not present yet (just once per AWS account).

Here is a code sample you can use:

```hcl
resource "aws_iam_service_linked_role" "opensearch" {
  aws_service_name = "opensearchservice.amazonaws.com"
}
```

This module can be used to create your own snapshots of OpenSearch to S3, using Snapshot Management. It can also deploy a PrometheusRule for monitoring snapshot success.
> **Important**: This requires OpenSearch >= 2.5!
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.3.9 |
| aws | ~> 6.0 |
| helm | ~> 3.0 |
| opensearch | ~> 2.2 |
## Providers

| Name | Version |
|---|---|
| aws | ~> 6.0 |
| helm | ~> 3.0 |
| opensearch | ~> 2.2 |
## Modules

| Name | Source | Version |
|---|---|---|
| s3_snapshot | terraform-aws-modules/s3-bucket/aws | ~> 5.0 |
## Resources

| Name | Type |
|---|---|
| aws_iam_role.snapshot_create | resource |
| aws_iam_role_policy.snapshot_create | resource |
| helm_release.elasticsearch_exporter | resource |
| opensearch_sm_policy.snapshot | resource |
| opensearch_snapshot_repository.repo | resource |
| aws_iam_policy_document.s3_snapshot_bucket | data source |
| aws_iam_policy_document.snapshot_create | data source |
| aws_iam_policy_document.snapshot_create_assume | data source |
| aws_opensearch_domain.os | data source |
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| name | Name for the snapshot system, S3 bucket, etc. | string | n/a | yes |
| aws_kms_key_arn | ARN of the CMK used for S3 Server Side Encryption. When specified, we'll use the aws:kms SSE algorithm. When not specified, falls back to using AES256 | string | null | no |
| bucket_key_enabled | Whether to use Amazon S3 Bucket Keys for encryption, which reduces API costs | bool | false | no |
| create_cron_expression | The cron schedule used to create snapshots | string | "0 0 * * *" | no |
| create_time_limit | Sets the maximum time to wait for snapshot creation to finish. If time_limit is longer than the scheduled time interval for taking snapshots, no scheduled snapshots are taken until time_limit elapses. For example, if time_limit is set to 35 minutes and snapshots are taken every 30 minutes starting at midnight, the snapshots at 00:00 and 01:00 are taken, but the snapshot at 00:30 is skipped | string | "1h" | no |
| custom_sm_policy | Set this variable when you want to override the generated SM policy JSON with your own. Make sure to correctly set snapshot_config.repository to the same value as var.name (the bucket name) | string | null | no |
| delete_cron_expression | The cron schedule used to delete snapshots | string | "0 2 * * *" | no |
| delete_time_limit | Sets the maximum time to wait for snapshot deletion to finish | string | "1h" | no |
| domain_name | Name / ID of the OpenSearch domain. Required when monitoring_enabled is true | string | null | no |
| extra_bucket_policy | Extra bucket policy to attach to the S3 bucket (JSON string formatted) | string | null | no |
| indices | The names of the indices in the snapshot. Multiple index names are separated by `,`. Supports wildcards (*) | string | "*" | no |
| max_age | The maximum time a snapshot is retained in S3 | string | "14d" | no |
| max_count | The maximum number of snapshots retained in S3 | number | 400 | no |
| min_count | The minimum number of snapshots retained in S3 | number | 1 | no |
| monitoring_elasticsearch_exporter_nodeSelector | nodeSelector to add to the Kubernetes pods. Set to null to disable | map(map(string)) | {...} | no |
| monitoring_elasticsearch_exporter_tolerations | Tolerations to add to the Kubernetes pods. Set to null to disable | any | {...} | no |
| monitoring_elasticsearch_exporter_version | Version of the prometheus-elasticsearch-exporter Helm chart to deploy | string | "7.0.0" | no |
| monitoring_enabled | Whether to deploy a small elasticsearch-exporter with PrometheusRule for monitoring the snapshots. Requires the prometheus-operator to be deployed | bool | true | no |
| monitoring_namespace | Namespace where to deploy the PrometheusRule | string | "infrastructure" | no |
| monitoring_prometheus_labels | Additional K8s labels to add to the ServiceMonitor and PrometheusRule | map(string) | {...} | no |
| monitoring_prometheusrule_alert_labels | Additional labels to add to the PrometheusRule alert | map(string) | {} | no |
| monitoring_prometheusrule_query_period | Period to apply to the PrometheusRule queries. Make sure this is bigger than the create_cron_expression interval | string | "32h" | no |
| monitoring_prometheusrule_severity | Severity of the PrometheusRule alert. Usual values are: info, warning and critical | string | "warning" | no |
| s3_force_destroy | Whether to force-destroy and empty the S3 bucket when destroying this Terraform module. WARNING: Not recommended! | bool | false | no |
| s3_replication_configuration | Replication configuration block for the S3 bucket. See https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/v3.15.1/examples/s3-replication for an example | any | {} | no |
## Outputs

No outputs.
```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    opensearch = {
      source = "opensearch-project/opensearch"
    }
  }
}

provider "opensearch" {
  url                 = "https://${module.opensearch.endpoint}"
  sign_aws_requests   = true
  aws_region          = var._aws_provider_region
  aws_profile         = var._aws_provider_profile
  aws_assume_role_arn = "arn:aws:iam::${var._aws_provider_account_id}:role/${var._aws_provider_assume_role}"
  healthcheck         = false
}
```

```hcl
provider "opensearch" {
  url                 = module.opensearch.endpoint
  aws_region          = var._aws_provider_region
  aws_profile         = var._aws_provider_profile
  aws_assume_role_arn = "arn:aws:iam::${var._aws_provider_account_id}:role/${var._aws_provider_assume_role}"
}
```
```hcl
module "opensearch_snapshots" {
  source              = "github.com/skyscrapers/terraform-opensearch//opensearch-backup?ref=14.0.0"
  name                = "${module.opensearch.domain_name}-snapshots"
  opensearch_endpoint = "https://${module.opensearch.endpoint}"
}
```

- Removes the unmaintained modules `elasticsearch_k8s_monitoring`, `kibana_k8s_auth_ingress` and `kibana_k8s_auth_proxy`
- Deploys prometheus-elasticsearch-exporter for monitoring snapshots
- Replaces `aws_elasticsearch_domain` with `aws_opensearch_domain`, and removes other references to "Elasticsearch"
- Sets the default OpenSearch version to 2.19
Migration steps:

```shell
terraform state rm module.elasticsearch.aws_elasticsearch_domain.es
terraform state import module.elasticsearch.aws_opensearch_domain.os <my-opensearch-domain-name>
```

Or in your Terraform code:

```hcl
removed {
  from = module.opensearch.aws_elasticsearch_domain.es

  # A removed block requires a lifecycle block;
  # destroy = false keeps the domain in AWS when it leaves the state
  lifecycle {
    destroy = false
  }
}

import {
  to = module.opensearch.aws_opensearch_domain.os
  id = var.name
}
```

We removed the custom S3 backup mechanism (via Lambda) from the opensearch module. As an alternative we now offer a new opensearch-backup module, which relies on the OpenSearch Snapshot Management API to create snapshots to S3.
If you want to upgrade without destroying your old S3 snapshot bucket, we recommend removing the bucket from Terraform's state and re-importing it into the new backup module. For example, consider code like this:
```hcl
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=11.3.0"
  ...
}

module "opensearch_backup" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch-backup?ref=11.3.0"
  name   = "${module.opensearch.domain_name}-snapshot"
}
```

Then you can migrate your snapshots S3 bucket like this:
```shell
terraform state rm module.opensearch.aws_s3_bucket.snapshot[0]
terraform import module.opensearch_backup.module.s3_snapshot.aws_s3_bucket.this[0] "<opensearch_domain_name>-snapshot"
```

Also make sure to set `var.name` of this module to `<opensearch_domain_name>-snapshot`!

Alternatively, you can just let the module create a new bucket.
In the elasticsearch_k8s_monitoring module, the variables `system_tolerations` and `system_nodeSelector` have been added to isolate the monitoring on a dedicated system node pool. If you don't want this, you can override these variables to `null` to disable.
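For example, to opt out of the dedicated system node pool, a sketch (the module source ref is a placeholder; other module arguments are elided):

```hcl
module "elasticsearch_k8s_monitoring" {
  source = "github.com/skyscrapers/terraform-opensearch//elasticsearch_k8s_monitoring?ref=<version>"

  # ... other settings ...

  # Disable scheduling on the dedicated system node pool
  system_tolerations  = null
  system_nodeSelector = null
}
```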
In the opensearch module, the `s3_snapshots_schedule_expression` variable has been replaced with `s3_snapshots_schedule_period`. Instead of a cron expression, you can now only specify a period in hours, which is used as a `rate(x hours)` expression.
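A sketch of the change (the old cron value shown is illustrative):

```hcl
# Before:
# s3_snapshots_schedule_expression = "cron(0 3 * * ? *)"

# After: a period in hours, translated to rate(6 hours)
s3_snapshots_schedule_period = 6
```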
This change migrates the elasticsearch module to opensearch. This is mostly a cosmetic change, but there are several breaking changes to note:
- The Security Group description is updated, which would normally trigger a destroy/recreate. However, existing setups won't be affected due to an ignore lifecycle
- Variables `project` and `environment` have been removed. Only the `name` variable is now used. For existing setups, you can set `name = "<myproject>-<myenvironment>-<oldname>"` to retain the original "name"
- CloudWatch Log Groups will be destroyed and recreated using the new name. If you wish to keep your older logs, it's best to remove the existing Log Groups from the TF state:

  ```shell
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_index
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_search
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_application
  ```

- Variable `elasticsearch_version` has been renamed to `search_version`, with default value `OpenSearch_1.1`
- We no longer merge the `tags` variable with our own hardcoded defaults (`Environment`, `Project`, `Name`); all tags need to be passed through the `tags` variable and/or through the `default_tags` provider setting
- Updated list of instance types with NVMe SSD storage
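If you relied on the previously hardcoded tags, they can be reinstated account-wide via the AWS provider's `default_tags` setting; a sketch with assumed tag values:

```hcl
provider "aws" {
  # Applied to all resources this provider manages,
  # replacing the tags the module used to hardcode
  default_tags {
    tags = {
      Environment = "production"
      Project     = "myproject"
    }
  }
}
```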
The backup behavior of this module has changed considerably between versions 6.0.0 and 7.0.0:
- Replace the `snapshot_bucket_enabled` variable with `s3_snapshots_enabled`
  - Note: This will also enable the Lambda for automated backups
- If you just want to keep the bucket, you can remove it from the Terraform state and manage it outside the module:

  ```shell
  terraform state rm aws_s3_bucket.snapshot[0]
  ```

- The IAM role for taking snapshots has been renamed. If you want to keep the old role too, you should remove it from the Terraform state:

  ```shell
  terraform state rm module.registrations.aws_iam_role.role[0]
  ```

  - Otherwise just let it destroy the old role and it will create a new one
Also note that some default values for variables have been changed, mostly related to encryption. If this triggers an unwanted change, you can override it by explicitly setting the variable to its old value.
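For example, if the encryption defaults are among those that changed for your setup, pinning the old values explicitly avoids a forced replacement (a sketch; check your actual previous values before applying):

```hcl
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=7.0.0"

  # Pin pre-7.0.0 values explicitly if your existing domain
  # was created without encryption (illustrative assumption):
  encrypt_at_rest         = false
  node_to_node_encryption = false

  # ... other settings ...
}
```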