Kubernetes Setup
Overview
Cloud ex Machina integrates with AWS EKS clusters to collect metadata about cluster resources such as pods, services, deployments, and other workloads. This integration provides valuable insights for cost optimization and resource management.
Architecture
The integration uses automated Terraform modules that configure the appropriate authentication and authorization mechanisms for your EKS clusters. The modules automatically detect whether your clusters support modern EKS access entries or require the legacy aws-auth ConfigMap approach.
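As an illustration of what this detection keys off, recent versions of the AWS provider expose the cluster's authentication mode through the aws_eks_cluster data source (attribute availability depends on your provider version; the data source name here is illustrative):

```hcl
# Inspect how an existing cluster authenticates (illustrative sketch)
data "aws_eks_cluster" "inspect" {
  name = "my-production-cluster"
}

# "API", "API_AND_CONFIG_MAP", or "CONFIG_MAP" - access entries require one
# of the first two modes; "CONFIG_MAP" implies the legacy aws-auth path
output "authentication_mode" {
  value = data.aws_eks_cluster.inspect.access_config[0].authentication_mode
}
```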
Deployment Options:
- Single Account: Deploy CxM access for EKS clusters within a single AWS account
- Organization-Wide: Deploy CxM access across multiple AWS accounts within an AWS Organization for enterprise-scale deployments
Prerequisites
Single Account Setup
- Existing EKS cluster(s)
- AWS CLI configured with appropriate permissions
- kubectl configured to access your EKS cluster(s)
- Terraform >= 1.0 installed
- CxM IAM role created by the terraform-aws-account-enablement module
Organization Setup
- AWS Organizations set up with management account access
- Existing EKS cluster(s) across one or more member accounts
- AWS CLI configured with cross-account permissions
- kubectl configured to access your EKS cluster(s)
- Terraform >= 1.0 installed
- Proper IAM permissions for cross-account role assumption
- CxM enablement using both the terraform-aws-organization-enablement and terraform-aws-full-organization-enablement modules
Setup Using Terraform Module
The Cloud ex Machina integration modules are published on the official Terraform Registry. Always use the latest version for the most recent features and security updates.
Choose Your Deployment Approach:
- Use Single Account Setup if the rest of your CxM setup targets a single account.
- Use Organization-Wide Setup if you have EKS clusters across multiple AWS accounts in your AWS Organization and your CxM instance targets some or all of those accounts.
Basic Single Cluster Setup
For a single EKS cluster in your account with cluster-wide read-only access (default and recommended):
- main.tf
- variables.tf
- outputs.tf
# Configure providers
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.20"
}
}
}
provider "aws" {
region = "us-west-2" # Your AWS region
}
# Configure Kubernetes provider
data "aws_eks_cluster" "cluster" {
name = var.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
# Enable CxM on your AWS account (if not already done)
module "cxm_account_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-account-enablement"
version = "~> 1.0"
cxm_aws_account_id = "123456789012" # CxM will provide this
cxm_external_id = "your-external-id" # CxM will provide this
iam_role_name = "asset-crawler"
}
# Enable CxM access to your EKS cluster
module "cxm_eks_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
# Required variables
cluster_name = "my-production-cluster" # Your EKS cluster name
iam_role_arn = module.cxm_account_enablement.iam_role_arn
# Optional variables - using defaults (cluster-wide access recommended)
# access_scope_type = "cluster" # DEFAULT: cluster-wide read-only access
# kubernetes_groups = [] # DEFAULT: empty list for view-only access
# access_scope_namespaces = [] # DEFAULT: empty (not used with cluster scope)
# tags = {} # DEFAULT: no tags
# Module automatically detects and uses appropriate access method (access entries vs aws-auth)
}
# Variables for the calling configuration (not the module itself)
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
default = "my-production-cluster"
}
variable "cxm_aws_account_id" {
description = "CxM AWS Account ID (provided by Cloud ex Machina)"
type = string
}
variable "cxm_external_id" {
description = "External ID for role assumption (provided by Cloud ex Machina)"
type = string
}
# Note: EKS cluster enablement module variables (defaults provide cluster-wide read-only access):
# REQUIRED:
# - cluster_name: Name of the EKS cluster to configure access for
# - iam_role_arn: ARN or name of the IAM role from CxM account enablement
# OPTIONAL (with recommended defaults):
# - access_scope_type: "cluster" (DEFAULT - recommended) or "namespace" (advanced)
# - kubernetes_groups: [] (DEFAULT - view-only permissions)
# - access_scope_namespaces: [] (DEFAULT - only used with namespace scope)
# - tags: {} (DEFAULT - no tags)
# These outputs must be shared with Cloud ex Machina
output "cluster_name" {
value = module.cxm_eks_enablement.cluster_name
description = "Name of the EKS cluster that was configured"
}
output "cluster_endpoint" {
value = module.cxm_eks_enablement.cluster_endpoint
description = "Endpoint URL of the EKS cluster"
}
output "cluster_account_id" {
value = module.cxm_eks_enablement.cluster_account_id
description = "AWS Account ID where the EKS cluster is located"
}
output "iam_role_arn" {
value = module.cxm_eks_enablement.iam_role_arn
description = "ARN of the IAM role that was granted access to the cluster"
}
output "access_method" {
value = module.cxm_eks_enablement.access_method
description = "Method used to grant access to the cluster"
}
output "cluster_supports_access_entries" {
value = module.cxm_eks_enablement.cluster_supports_access_entries
description = "Whether the cluster supports modern access entries"
}
Advanced Configuration Options
Namespace-Scoped Access (Advanced)
Cluster-wide access is recommended for most use cases as it provides comprehensive cost optimization insights. Only use namespace-scoped access if you have specific security requirements that prevent cluster-wide read-only access.
To limit CxM access to specific namespaces instead of the default cluster-wide access:
module "cxm_eks_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
# Required variables
cluster_name = "my-cluster"
iam_role_arn = module.cxm_account_enablement.iam_role_arn
# Optional variables - scope access to specific namespaces
access_scope_type = "namespace"
access_scope_namespaces = ["monitoring", "logging", "kube-system"]
# Optional variables - custom username and groups
kubernetes_groups = [] # Default empty list for view-only access
# Optional tags
tags = {
Environment = "staging"
}
}
Multiple Clusters
For multiple EKS clusters in the same account: the production cluster below uses the default cluster-wide access, while the staging cluster demonstrates advanced namespace-scoped access:
# Production cluster with cluster-wide access (using defaults - recommended)
module "cxm_eks_production" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
# Required variables only - defaults provide cluster-wide read-only access
cluster_name = "production-cluster"
iam_role_arn = module.cxm_account_enablement.iam_role_arn
# Using all default values:
# access_scope_type = "cluster" # DEFAULT: cluster-wide access
# kubernetes_groups = [] # DEFAULT: view-only permissions
tags = {
Environment = "production"
}
}
# Staging cluster with namespace-scoped access (advanced configuration)
module "cxm_eks_staging" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
# Required variables
cluster_name = "staging-cluster"
iam_role_arn = module.cxm_account_enablement.iam_role_arn
# Advanced configuration - namespace-scoped access
access_scope_type = "namespace" # Override default "cluster"
access_scope_namespaces = ["monitoring", "observability"] # Required when type is "namespace"
kubernetes_groups = [] # Default empty list for view-only access
tags = {
Environment = "staging"
}
}
Organization-Wide Setup
For enterprise deployments across multiple AWS accounts within an AWS Organization, use the organization-wide enablement modules. This approach provides centralized management and consistent access patterns across all accounts.
Full Organization Setup
- main.tf
- variables.tf
- outputs.tf
# Configure providers
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.20"
}
}
}
# Configure provider for management account
provider "aws" {
alias = "management"
region = var.aws_region
# Configure for your AWS Organizations management account
}
# Configure provider for target account with EKS cluster
provider "aws" {
alias = "target"
region = var.aws_region
# Configure for the account containing your EKS cluster
}
# Configure Kubernetes provider for EKS cluster in target account
data "aws_eks_cluster" "cluster" {
provider = aws.target
name = var.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
provider = aws.target
name = var.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
# Enable CxM across the entire organization
module "cxm_organization_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-full-organization-enablement"
version = "~> 1.0"
providers = {
aws = aws.management
}
cxm_aws_account_id = var.cxm_aws_account_id
cxm_external_id = var.cxm_external_id
tags = {
Environment = "organization"
Team = "platform"
}
}
# Enable CxM access to specific EKS cluster in target account
module "cxm_eks_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
providers = {
aws = aws.target
kubernetes = kubernetes
}
# Required variables
cluster_name = module.kubernetes.cluster_name # Reference your EKS module
iam_role_arn = "arn:aws:iam::${var.target_account_id}:role/cxm-asset-crawler-cava-cfn"
# Optional variables - using recommended defaults
# access_scope_type = "cluster" # DEFAULT: cluster-wide read-only access (recommended)
# kubernetes_groups = [] # DEFAULT: view-only permissions
depends_on = [module.cxm_organization_enablement]
}
# Variables for organization-wide deployment
variable "aws_region" {
description = "AWS region for resources"
type = string
default = "us-west-2"
}
variable "cxm_aws_account_id" {
description = "CxM AWS Account ID (provided by Cloud ex Machina)"
type = string
}
variable "cxm_external_id" {
description = "External ID for role assumption (provided by Cloud ex Machina)"
type = string
}
variable "target_account_id" {
description = "AWS Account ID containing the EKS cluster"
type = string
}
variable "production_account_id" {
description = "AWS Account ID for production environment (for multi-account setup)"
type = string
}
variable "staging_account_id" {
description = "AWS Account ID for staging environment (for multi-account setup)"
type = string
}
# Note: The EKS cluster enablement module variables (defaults provide cluster-wide read-only access):
# REQUIRED:
# - cluster_name: Name of the EKS cluster to configure
# - iam_role_arn: ARN of the IAM role for CxM access
# OPTIONAL (with recommended defaults):
# - access_scope_type: "cluster" (DEFAULT - recommended) or "namespace" (advanced)
# - kubernetes_groups: [] (DEFAULT - view-only permissions)
# - access_scope_namespaces: [] (DEFAULT - only used with namespace scope)
# - tags: {} (DEFAULT - no tags)
# Organization-level outputs (for reference)
output "organization_role_arn" {
value = module.cxm_organization_enablement.cross_account_role_arn
description = "ARN of the cross-account role for organization access"
}
output "enabled_accounts" {
value = module.cxm_organization_enablement.enabled_accounts
description = "List of accounts enabled for CxM access"
}
# Individual cluster outputs (REQUIRED - share all of these with CxM)
output "target_cluster_outputs" {
value = {
cluster_name = module.cxm_eks_enablement.cluster_name
cluster_endpoint = module.cxm_eks_enablement.cluster_endpoint
cluster_account_id = module.cxm_eks_enablement.cluster_account_id
cluster_supports_access_entries = module.cxm_eks_enablement.cluster_supports_access_entries
access_entry_created = module.cxm_eks_enablement.access_entry_created
policy_association_created = module.cxm_eks_enablement.policy_association_created
aws_auth_configmap_updated = module.cxm_eks_enablement.aws_auth_configmap_updated
iam_role_arn = module.cxm_eks_enablement.iam_role_arn
access_method = module.cxm_eks_enablement.access_method
}
description = "Complete outputs from target EKS cluster enablement - share this entire object with CxM"
sensitive = true
}
Multi-Cluster Organization Setup
For organizations with EKS clusters across multiple accounts:
# Enable CxM across the organization (run once from management account)
module "cxm_organization_enablement" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-full-organization-enablement"
version = "~> 1.0"
cxm_aws_account_id = var.cxm_aws_account_id
cxm_external_id = var.cxm_external_id
tags = {
Environment = "organization"
}
}
# Enable multiple EKS clusters across different accounts
module "cxm_eks_production" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
providers = {
aws = aws.production_account
kubernetes = kubernetes.production
}
# Required variables
cluster_name = module.production_kubernetes.cluster_name
iam_role_arn = "arn:aws:iam::${var.production_account_id}:role/cxm-asset-crawler-cava-cfn"
# Using defaults - cluster-wide read-only access (recommended)
# access_scope_type = "cluster" # DEFAULT: cluster-wide access
# kubernetes_groups = [] # DEFAULT: view-only permissions
}
module "cxm_eks_staging" {
source = "cxmlabs/cxm-integration/aws//terraform-aws-eks-cluster-enablement"
version = "~> 1.0"
providers = {
aws = aws.staging_account
kubernetes = kubernetes.staging
}
# Required variables
cluster_name = module.staging_kubernetes.cluster_name
iam_role_arn = "arn:aws:iam::${var.staging_account_id}:role/cxm-asset-crawler-cava-cfn"
# Optional variables - namespace-scoped access for staging
access_scope_type = "namespace"
access_scope_namespaces = ["monitoring", "observability"]
kubernetes_groups = [] # Default empty list
}
# REQUIRED OUTPUTS - Each cluster must be output separately for CxM integration
output "production_cluster_outputs" {
value = {
cluster_name = module.cxm_eks_production.cluster_name
cluster_endpoint = module.cxm_eks_production.cluster_endpoint
cluster_account_id = module.cxm_eks_production.cluster_account_id
cluster_supports_access_entries = module.cxm_eks_production.cluster_supports_access_entries
access_entry_created = module.cxm_eks_production.access_entry_created
policy_association_created = module.cxm_eks_production.policy_association_created
aws_auth_configmap_updated = module.cxm_eks_production.aws_auth_configmap_updated
iam_role_arn = module.cxm_eks_production.iam_role_arn
access_method = module.cxm_eks_production.access_method
}
description = "Production cluster outputs - share this entire object with CxM"
sensitive = true
}
output "staging_cluster_outputs" {
value = {
cluster_name = module.cxm_eks_staging.cluster_name
cluster_endpoint = module.cxm_eks_staging.cluster_endpoint
cluster_account_id = module.cxm_eks_staging.cluster_account_id
cluster_supports_access_entries = module.cxm_eks_staging.cluster_supports_access_entries
access_entry_created = module.cxm_eks_staging.access_entry_created
policy_association_created = module.cxm_eks_staging.policy_association_created
aws_auth_configmap_updated = module.cxm_eks_staging.aws_auth_configmap_updated
iam_role_arn = module.cxm_eks_staging.iam_role_arn
access_method = module.cxm_eks_staging.access_method
}
description = "Staging cluster outputs - share this entire object with CxM"
sensitive = true
}
# Organization-level outputs (for reference)
output "organization_role_arn" {
value = module.cxm_organization_enablement.cross_account_role_arn
description = "ARN of the cross-account role for organization access"
}
Organization Setup Benefits
Centralized Management:
- Single configuration for entire AWS Organization
- Consistent IAM roles across all accounts
- Automated account discovery and enablement
Security Best Practices:
- Cross-account roles with proper trust relationships
- Centralized audit trail
- Consistent permissions across the organization
Operational Efficiency:
- Reduced configuration overhead
- Standardized deployment patterns
- Easy addition of new accounts and clusters
Organization Deployment Steps
- Deploy from Management Account: Run the organization enablement from your AWS Organizations management account
- Configure Cross-Account Access: Ensure proper IAM permissions for cross-account role assumption
- Enable Individual Clusters: Deploy EKS enablement modules for each cluster in their respective accounts
- Validate Access: Verify CxM can access all configured clusters across accounts
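The cross-account access in these steps is typically wired at the provider level through role assumption. A minimal sketch, assuming the default OrganizationAccountAccessRole that AWS Organizations creates in member accounts (substitute whatever cross-account role your organization actually uses):

```hcl
# Hypothetical cross-account provider: Terraform runs in the management
# account and assumes a role in the member account that owns the cluster
provider "aws" {
  alias  = "target"
  region = var.aws_region

  assume_role {
    # Role name is an assumption - AWS Organizations creates
    # OrganizationAccountAccessRole by default in member accounts
    role_arn = "arn:aws:iam::${var.target_account_id}:role/OrganizationAccountAccessRole"
  }
}
```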
Important Notes for Organization Setup
IAM Role Naming Pattern: When using organization enablement, the CxM crawler role follows a specific naming pattern:
- Pattern: cxm-asset-crawler-SUFFIX
- The suffix (cava-cfn in the examples) is determined by your organization enablement configuration
- Each target account will have this role created automatically by the organization enablement module
- Always reference the role using the full ARN: arn:aws:iam::ACCOUNT_ID:role/cxm-asset-crawler-SUFFIX
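One way to keep this pattern in a single place is a local value; the suffix below is taken from the examples in this guide and may differ in your configuration:

```hcl
locals {
  # "cava-cfn" matches the examples above; your organization enablement
  # configuration determines the actual suffix
  cxm_role_suffix = "cava-cfn"
  cxm_role_arn    = "arn:aws:iam::${var.target_account_id}:role/cxm-asset-crawler-${local.cxm_role_suffix}"
}
```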
Module References:
- Update module.kubernetes.cluster_name to match your actual EKS module name
- Ensure your provider aliases match your AWS account structure
- Use separate Kubernetes providers for each account if managing multiple clusters
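For the last point, a sketch of per-cluster Kubernetes providers (the aliases and data source names here are illustrative):

```hcl
# Separate Kubernetes provider per cluster, each fed by its own
# account-scoped EKS data sources
provider "kubernetes" {
  alias                  = "production"
  host                   = data.aws_eks_cluster.production.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.production.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.production.token
}

provider "kubernetes" {
  alias                  = "staging"
  host                   = data.aws_eks_cluster.staging.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.staging.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.staging.token
}
```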
Module Variables:
The terraform-aws-eks-cluster-enablement module has a simple interface:
- Required: Only cluster_name and iam_role_arn
- Default Access: Cluster-wide read-only access (recommended for comprehensive cost optimization)
- Optional: All other variables have sensible defaults - no additional configuration needed for most use cases
- Automatic Detection: The module automatically detects cluster capabilities and chooses the appropriate access method
- Advanced Options: Namespace-scoped access available when specific security requirements exist
Deployment Instructions
- Create your Terraform configuration using one of the examples above
- Version Pinning: Always specify a version constraint (e.g., version = "~> 1.0") to ensure compatibility and predictable deployments
- Set your variables in a terraform.tfvars file:

For single account setup:

cluster_name = "my-production-cluster"
cxm_aws_account_id = "123456789012" # Provided by CxM
cxm_external_id = "your-unique-external-id" # Provided by CxM

For organization setup:

aws_region = "us-west-2"
cxm_aws_account_id = "123456789012" # Provided by CxM
cxm_external_id = "your-unique-external-id" # Provided by CxM
target_account_id = "987654321098" # Account with EKS cluster
production_account_id = "987654321098" # Production account ID
staging_account_id = "876543210987" # Staging account ID

- Deploy the configuration:

terraform init
terraform plan
terraform apply

- Get outputs for each cluster separately (organization setup):

# Get all cluster outputs
terraform output
# Or get specific cluster outputs for multi-cluster deployments
terraform output production_cluster_outputs
terraform output staging_cluster_outputs

- Share the outputs with Cloud ex Machina (see next section)
Required Information to Share with Cloud ex Machina
After successful deployment, you must share the following Terraform outputs with Cloud ex Machina:
- Single Account Setup
- Organization Setup
# Get all required outputs
terraform output
# Get all outputs (includes both organization and individual cluster outputs)
terraform output
# Each cluster's complete output object - share these individually with CxM
terraform output production_cluster_outputs
terraform output staging_cluster_outputs
# Add more cluster outputs as needed for your setup
Required outputs for single account setup:
- cluster_name: Name of your EKS cluster
- cluster_endpoint: EKS cluster API endpoint
- cluster_account_id: AWS Account ID where the cluster resides
- iam_role_arn: ARN of the IAM role created for CxM access
- access_method: Access method used (access_entries or aws_auth_configmap)
- cluster_supports_access_entries: Whether the cluster supports modern access entries
Additional required outputs for organization setup:
- Individual cluster outputs: Each EKS cluster module produces a complete set of outputs that must be shared separately with CxM
- production_cluster_outputs: Complete outputs object for the production cluster
- staging_cluster_outputs: Complete outputs object for the staging cluster
- (One output object per cluster configured)
- organization_role_arn: ARN of the cross-account role for organization access (for reference)
- enabled_accounts: List of accounts enabled for CxM access (for reference)
For organization setups: CxM requires the complete outputs from each individual cluster module, not just organization-level information. Each cluster integration is set up separately using its specific cluster outputs.
Legacy Cluster Support
Upgrading Legacy Clusters
If you have an older EKS cluster that doesn't support access entries, you can upgrade it to use the modern access entries method:
# Enable access entries support on legacy cluster
aws eks update-cluster-config \
--name your-cluster-name \
--access-config authenticationMode=API_AND_CONFIG_MAP
After running this command, re-run terraform apply and the module will automatically detect the new capability and switch to access entries.
Security and Permissions
The Terraform module automatically configures the minimum required permissions for CxM:
- Cluster-wide read-only access (default) - Comprehensive monitoring without security risk
- No write permissions - CxM cannot modify your cluster resources
- View-only operations - Only get, list, and watch permissions
- AmazonEKSViewPolicy used for modern clusters (read-only EKS access policy)
- Standard RBAC used for legacy clusters (read-only permissions)
- Namespace-scoped access available for advanced security requirements
Cluster-wide access is secure: The default cluster-wide access only provides read-only permissions. CxM cannot create, modify, or delete any resources in your cluster. This comprehensive read access enables better cost optimization insights while maintaining security.
The integration is designed with security best practices and follows the principle of least privilege.
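To illustrate what view-only access amounts to in RBAC terms, here is a hypothetical ClusterRole granting only the verbs listed above. The module manages the actual permissions; this resource, its name, and the resource list are purely illustrative:

```hcl
# Illustrative only - a cluster-wide read-only role limited to
# get/list/watch, similar in spirit to what CxM is granted
resource "kubernetes_cluster_role" "cxm_viewer_sketch" {
  metadata {
    name = "cxm-viewer-sketch" # hypothetical name
  }

  rule {
    api_groups = ["", "apps", "batch"]
    resources  = ["pods", "services", "nodes", "deployments", "jobs"]
    verbs      = ["get", "list", "watch"] # no create/update/delete
  }
}
```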
Troubleshooting
Common Issues
- Terraform provider authentication: Ensure your AWS CLI and kubectl are properly configured
- Cluster access: Verify you have admin access to the EKS cluster
- Legacy cluster upgrade: Some clusters may need manual upgrade to support access entries
Support
For technical issues with the integration, contact Cloud ex Machina support with:
- Your Terraform outputs
- Cluster configuration details
- Any error messages encountered during deployment
For the latest module version and documentation, visit the official Terraform Registry page.