
Use version variables for Cloud versions #2263


Merged 2 commits on Jul 25, 2025
2 changes: 1 addition & 1 deletion deploy-manage/autoscaling/autoscaling-in-eck.md
@@ -177,7 +177,7 @@ spec:
max: 512Gi
```

You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{eck_release_branch}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md).
You can find [a complete example in the ECK GitHub repository](https://github.com/elastic/cloud-on-k8s/blob/{{version.eck | M.M}}/config/recipes/autoscaling/elasticsearch.yaml) which will also show you how to fine-tune the [autoscaling deciders](/deploy-manage/autoscaling/autoscaling-deciders.md).
@kilfoyle (Contributor) · Jul 24, 2025

@colleenmcginnis Just checking: Is it expected that this link currently 404s with {{version.eck | M.M}} resolving to 3.2?
https://github.com/elastic/cloud-on-k8s/blob/3.2/config/recipes/autoscaling/elasticsearch.yaml

Contributor Author

Is 3.2 the latest ECK version? If not, we should update https://github.com/elastic/docs-builder/blob/main/config/versions.yml#L19.

@kilfoyle (Contributor) · Jul 24, 2025

Checking the release schedule, ECK 3.1 is due for release next week; 3.2 is planned for a few months later.
The ECK docs are (correctly) showing 3.0.

Here's a PR that I think should be merged when ECK 3.1 goes live next week:
elastic/docs-builder#1608

Contributor Author

@bmorelli25 updated the versions.yml config in elastic/docs-builder#1611 to use 3.0 as current, which is where the `version.eck` variable comes from, so the link now works. 🎉
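The thread above hinges on how `{{version.eck | M.M}}` resolves: the `M.M` mutation apparently truncates the configured ECK version to its major.minor part, which then names a release branch in the GitHub URL. A minimal sketch of that behavior in Python (the function name and the example version are illustrative assumptions, not the docs-builder implementation):

```python
def m_dot_m(version: str) -> str:
    """Truncate a version string to major.minor, e.g. "3.0.1" -> "3.0".

    Assumed behavior of the `M.M` mutation discussed in this thread.
    """
    major, minor = version.split(".")[:2]
    return f"{major}.{minor}"


# Hypothetical substitution mirroring {{version.eck | M.M}} in the link:
eck_version = "3.0.1"  # example value; config/versions.yml is the real source
url = (
    "https://github.com/elastic/cloud-on-k8s/blob/"
    f"{m_dot_m(eck_version)}/config/recipes/autoscaling/elasticsearch.yaml"
)
print(url)
```

If the configured version were already major.minor only (as the `3.2` case above suggests), the mutation would pass it through unchanged.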



#### Change the polling interval [k8s-autoscaling-polling-interval]
@@ -14,9 +14,9 @@ Versions of the {{stack}}, containing {{es}}, {{kib}}, and other products, are a

The first table contains the stack versions that shipped with the 4.0 version of {{ece}}. You can also check the [most recent stack packs and Docker images](#ece-recent-download-list), which might have been released after the 4.0 version of ECE, as well as the [full list of available stack packs and Docker images](#ece-full-download-list).

| Docker images included with {{ece}} {{ece_version}} |
| Docker images included with {{ece}} {{version.ece}} |
| --- |
| docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} |
| docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} |
| docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 |
| docker.elastic.co/cloud-release/kibana-cloud:8.18.0 |
| docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 |
@@ -16,7 +16,7 @@ To perform an offline installation without a private Docker registry, you have t
1. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md). Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images (the version 8.x images are required for system deployments).

```sh subs=true
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}}
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0
docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.0
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0
@@ -26,15 +26,15 @@ To perform an offline installation without a private Docker registry, you have t
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0
```

For example, for {{ece}} {{ece_version}} and the {{stack}} versions it shipped with, you need:
For example, for {{ece}} {{version.ece}} and the {{stack}} versions it shipped with, you need:

* {{ece}} {{ece_version}}
* {{ece}} {{version.ece}}
* {{es}} 9.0.0, {{kib}} 9.0.0, and APM 9.0.0

2. Create .tar files of the images:

```sh subs=true
docker save -o ece.{{ece_version}}.tar docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}}
docker save -o ece.{{version.ece}}.tar docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
docker save -o es.8.18.0.tar docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0
docker save -o kibana.8.18.0.tar docker.elastic.co/cloud-release/kibana-cloud:8.18.0
docker save -o apm.8.18.0.tar docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0
@@ -48,7 +48,7 @@ To perform an offline installation without a private Docker registry, you have t
4. On each host, load the images into Docker, replacing `FILE_PATH` with the correct path to the .tar files:

```sh subs=true
docker load < FILE_PATH/ece.{{ece_version}}.tar
docker load < FILE_PATH/ece.{{version.ece}}.tar
docker load < FILE_PATH/es.8.18.0.tar
docker load < FILE_PATH/kibana.8.18.0.tar
docker load < FILE_PATH/apm.8.18.0.tar
@@ -22,7 +22,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau
2. On an internet-connected host that has Docker installed, download the [Available Docker Images](ece-install-offline-images.md) and push them to your private Docker registry. Note that for ECE version 4.0, if you want to use {{stack}} version 9.0 in your deployments, you need to download and make available both the version 8.x and version 9.x Docker images.

```sh subs=true
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}}
docker pull docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
docker pull docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0
docker pull docker.elastic.co/cloud-release/kibana-cloud:8.18.0
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0
@@ -32,9 +32,9 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau
docker pull docker.elastic.co/cloud-release/elastic-agent-cloud:9.0.0
```

For example, for {{ece}} {{ece_version}} and the {{stack}} versions it shipped with, you need:
For example, for {{ece}} {{version.ece}} and the {{stack}} versions it shipped with, you need:

* {{ece}} {{ece_version}}
* {{ece}} {{version.ece}}
* {{es}} 9.0.0, {{kib}} 9.0.0, APM 9.0.0

:::{important}
@@ -44,7 +44,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau
3. Tag the Docker images with your private registry URL by replacing `REGISTRY` with your actual registry address, for example `my.private.repo:5000`:

```sh subs=true
docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}} REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}}
docker tag docker.elastic.co/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}} REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
docker tag docker.elastic.co/cloud-release/elasticsearch-cloud-ess:8.18.0 REGISTRY/cloud-release/elasticsearch-cloud-ess:8.18.0
docker tag docker.elastic.co/cloud-release/kibana-cloud:8.18.0 REGISTRY/cloud-release/kibana-cloud:8.18.0
docker tag docker.elastic.co/cloud-release/elastic-agent-cloud:8.18.0 REGISTRY/cloud-release/elastic-agent-cloud:8.18.0
@@ -57,7 +57,7 @@ Installing ECE on multiple hosts with your own registry server is simpler, becau
4. Push the Docker images to your private Docker registry, using the same tags from the previous step. Replace `REGISTRY` with your actual registry URL, for example `my.private.repo:5000`:

```sh subs=true
docker push REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{ece_version}}
docker push REGISTRY/cloud-enterprise/elastic-cloud-enterprise:{{version.ece}}
docker push REGISTRY/cloud-release/elasticsearch-cloud-ess:8.18.0
docker push REGISTRY/cloud-release/kibana-cloud:8.18.0
docker push REGISTRY/cloud-release/elastic-agent-cloud:8.18.0
2 changes: 1 addition & 1 deletion deploy-manage/deploy/cloud-on-k8s.md
@@ -59,7 +59,7 @@ Afterwards, you can:

* Learn how to [update your deployment](./cloud-on-k8s/update-deployments.md)
* Check out [our recipes](./cloud-on-k8s/recipes.md) for multiple use cases
* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples)
* Find further sample resources [in the project repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/samples)

## Supported versions [k8s-supported]

@@ -208,7 +208,7 @@ Starting with ECK 2.0 the operator can make Kubernetes Node labels available as
2. On the {{es}} resources, set the `eck.k8s.elastic.co/downward-node-labels` annotations with the list of the Kubernetes node labels that should be copied as Pod annotations.
3. Use the [Kubernetes downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) in the `podTemplate` to make those annotations available as environment variables in {{es}} Pods.

Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{eck_release_branch}}/config/samples/elasticsearch/elasticsearch.yaml) for a complete example.
Refer to the next section or to the [{{es}} sample resource in the ECK source repository](https://github.com/elastic/cloud-on-k8s/tree/{{version.eck | M.M}}/config/samples/elasticsearch/elasticsearch.yaml) for a complete example.


### Using node topology labels, Kubernetes topology spread constraints, and {{es}} shard allocation awareness [k8s-availability-zone-awareness-example]
2 changes: 1 addition & 1 deletion deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md
@@ -44,7 +44,7 @@ ECK will automatically set the correct container image for each application. Whe

To deploy the ECK operator in an air-gapped environment, you first have to mirror the operator image itself from `docker.elastic.co` to a private container registry, for example `my.registry`.

Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:{{eck_version}}` with the private name of the image, for example `my.registry/eck/eck-operator:{{eck_version}}`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`.
Once the ECK operator image is copied internally, replace the original image name `docker.elastic.co/eck/eck-operator:{{version.eck}}` with the private name of the image, for example `my.registry/eck/eck-operator:{{version.eck}}`, in the [operator manifests](../../../deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md). When using [Helm charts](../../../deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md), replace the `image.repository` Helm value with, for example, `my.registry/eck/eck-operator`.


## Override the default container registry [k8s-container-registry-override]
@@ -20,7 +20,7 @@ The examples in this section are purely descriptive and should not be considered
## Metricbeat for Kubernetes monitoring [k8s_metricbeat_for_kubernetes_monitoring]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/metricbeat_hosts.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/metricbeat_hosts.yaml
```

Deploys Metricbeat as a DaemonSet that monitors the usage of the following resources:
@@ -32,7 +32,7 @@ Deploys Metricbeat as a DaemonSet that monitors the usage of the following resou
## Filebeat with autodiscover [k8s_filebeat_with_autodiscover]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_autodiscover.yaml
```

Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collects logs from Pods in every namespace and loads them to the connected {{es}} cluster.
@@ -41,7 +41,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. It collec
## Filebeat with autodiscover for metadata [k8s_filebeat_with_autodiscover_for_metadata]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_autodiscover_by_metadata.yaml
```

Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from Pods that match the following criteria are shipped to the connected {{es}} cluster:
@@ -53,7 +53,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature enabled. Logs from
## Filebeat without autodiscover [k8s_filebeat_without_autodiscover]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/filebeat_no_autodiscover.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/filebeat_no_autodiscover.yaml
```

Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the entire logs directory on the host as the input source. This configuration does not require any RBAC resources as no Kubernetes APIs are used.
@@ -62,7 +62,7 @@ Deploys Filebeat as a DaemonSet with the autodiscover feature disabled. Uses the
## {{es}} and {{kib}} Stack Monitoring [k8s_elasticsearch_and_kibana_stack_monitoring]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/stack_monitoring.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/stack_monitoring.yaml
```

Deploys Metricbeat configured for {{es}} and {{kib}} [Stack Monitoring](/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) and Filebeat using autodiscover. Deploys one monitored {{es}} cluster and one monitoring {{es}} cluster. You can access the Stack Monitoring app in the monitoring cluster’s {{kib}}.
@@ -76,7 +76,7 @@ In this example, TLS verification is disabled when Metricbeat communicates with
## Heartbeat monitoring {{es}} and {{kib}} health [k8s_heartbeat_monitoring_elasticsearch_and_kibana_health]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/heartbeat_es_kb_health.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/heartbeat_es_kb_health.yaml
```

Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}} and {{kib}} by TCP probing their Service endpoints. Heartbeat expects that {{es}} and {{kib}} are deployed in the `default` namespace.
@@ -85,7 +85,7 @@ Deploys Heartbeat as a single Pod deployment that monitors the health of {{es}}
## Auditbeat [k8s_auditbeat]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/auditbeat_hosts.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/auditbeat_hosts.yaml
```

Deploys Auditbeat as a DaemonSet that checks file integrity and audits file operations on the host system.
@@ -94,7 +94,7 @@ Deploys Auditbeat as a DaemonSet that checks file integrity and audits file oper
## Packetbeat monitoring DNS and HTTP traffic [k8s_packetbeat_monitoring_dns_and_http_traffic]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/packetbeat_dns_http.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/packetbeat_dns_http.yaml
```

Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) traffic on ports `80`, `8000`, `8080` and `9200`.
@@ -103,7 +103,7 @@ Deploys Packetbeat as a DaemonSet that monitors DNS on port `53` and HTTP(S) tra
## OpenShift monitoring [k8s_openshift_monitoring]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/beats/openshift_monitoring.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/beats/openshift_monitoring.yaml
```

Deploys Metricbeat as a DaemonSet that monitors the host resource usage (CPU, memory, network, filesystem), OpenShift resources (Nodes, Pods, Containers, Volumes), API Server and Filebeat using autodiscover. Deploys an {{es}} cluster and {{kib}} to centralize data collection.
@@ -20,7 +20,7 @@ The examples in this section are for illustration purposes only and should not b
## System and {{k8s}} {{integrations}} [k8s_system_and_k8s_integrations]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-kubernetes-integration.yaml
```

Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{integrations}} enabled. System integration collects syslog logs, auth logs and system metrics (for CPU, I/O, filesystem, memory, network, process and others). {{k8s}} {{integrations}} collects API server, Container, Event, Node, Pod, Volume and system metrics.
@@ -29,7 +29,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with System and {{k8s}} {{int
## System and {{k8s}} {{integrations}} running as non-root [k8s_system_and_k8s_integrations_running_as_non_root]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-kubernetes-integration-nonroot.yaml
```

The provided example is functionally identical to the previous section but runs the {{agent}} processes (both the {{agent}} running as the {{fleet}} server and the {{agent}} connected to {{fleet}}) as a non-root user by utilizing a DaemonSet to ensure directory and file permissions.
@@ -43,7 +43,7 @@ The DaemonSet itself must run as root to set up permissions and ECK >= 2.10.0 is
## Custom logs integration with autodiscover [k8s_custom_logs_integration_with_autodiscover]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-custom-logs-integration.yaml
```

Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration enabled. Collects logs from all Pods in the `default` namespace using autodiscover feature.
@@ -52,7 +52,7 @@ Deploys {{agent}} as a DaemonSet in {{fleet}} mode with Custom Logs integration
## APM integration [k8s_apm_integration]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/fleet-apm-integration.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/fleet-apm-integration.yaml
```

Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integration enabled.
@@ -61,7 +61,7 @@ Deploys single instance {{agent}} Deployment in {{fleet}} mode with APM integrat
## Synthetic monitoring [k8s_synthetic_monitoring]

```sh subs=true
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{eck_release_branch}}/config/recipes/elastic-agent/synthetic-monitoring.yaml
kubectl apply -f https://raw.github.com/elastic/cloud-on-k8s/{{version.eck | M.M}}/config/recipes/elastic-agent/synthetic-monitoring.yaml
```

Deploys a {{fleet}}-enrolled {{agent}} that can be used for [Synthetic monitoring](/solutions/observability/synthetics/index.md). This {{agent}} uses the `elastic-agent-complete` image. The agent policy still needs to be [registered as a private location](/solutions/observability/synthetics/monitor-resources-on-private-networks.md#synthetics-private-location-add) in {{kib}}.