
This site contains Angeles Broullon’s coding notes.

They mostly help me keep track of my current work and clear my memory after intense projects. Most of what is stored here relates to Java, vanilla NodeJS and Python, but there is always room to learn more. Also check the date of each note, as it may not be bleeding edge anymore: these are mostly a development diary.

This site is made with Hexo, using the Hexo Next theme. The Mermaid.js graphics are inserted using hexo-filter-mermaid-diagrams.

You are welcome to wander around and I hope you find something useful.

Types of tests

  • Unit tests: ensure the logic of your code works as expected
  • Integration/service tests: ensure that your DAO (database access) and BO (business object) layers work as expected
  • UI tests: automate checks on your user interface

Behavior driven development (BDD)

  • Story: should have a clear, explicit title.
  • Scenario: acceptance criteria or scenario description of each specific case of the narrative. Its structure is:
    • Given: it starts by specifying the initial condition that is assumed to be true at the beginning of the scenario. This may consist of a single clause, or several.
    • When: it then states which event triggers the start of the scenario.
    • Then: it states the expected outcome, in one or more clauses.
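The Given/When/Then structure maps directly onto plain test code; a minimal Python sketch (the CoffeeMachine class is hypothetical, only there to illustrate the mapping):

```python
import unittest


class CoffeeMachine:
    """Hypothetical system under test, only here to illustrate the mapping."""

    def __init__(self, coffees_left):
        self.coffees_left = coffees_left
        self.deposited = 0

    def deposit(self, dollars):
        self.deposited += dollars

    def press_coffee_button(self):
        # serve a coffee only if stock and payment allow it
        if self.coffees_left > 0 and self.deposited >= 1:
            self.coffees_left -= 1
            return "coffee"
        return None


class TestBuyLastCoffee(unittest.TestCase):
    def test_buy_last_coffee(self):
        # Given there is 1 coffee left in the machine
        machine = CoffeeMachine(coffees_left=1)
        # And I have deposited 1 dollar
        machine.deposit(1)
        # When I press the coffee button
        served = machine.press_coffee_button()
        # Then I should be served a coffee
        self.assertEqual(served, "coffee")
        self.assertEqual(machine.coffees_left, 0)
```

Keeping the Given/When/Then phases as comments makes the acceptance criterion readable directly from the test body.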

Unit and integration tests, with unittest

  • unittest is similar in structure to JUnit.

Project structure

  • Example project structure

    ```
    * scripts
        * process.py
    * tests
        * test_process.py
    * README.md
    * requirements.txt
    * .pylintrc
    ```
  • Naming: the test file for process.py should be named test_process.py.
  • Location: you may put your test files in a tests folder under the root folder of the project.
    • You should try to replicate under tests the same structure as your scripts folder.
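These conventions matter because unittest discovery only collects files matching its test*.py pattern. A small self-contained sketch (the throwaway folder and test file are created just for illustration):

```python
import os
import tempfile
import textwrap
import unittest

# unittest's discovery collects files matching "test*.py" by default,
# which is why the test file for process.py is named test_process.py.
# Build a throwaway tests/ folder to show discovery in action.
tmp = tempfile.mkdtemp()
tests_dir = os.path.join(tmp, "tests")
os.makedirs(tests_dir)
with open(os.path.join(tests_dir, "test_process.py"), "w") as handle:
    handle.write(textwrap.dedent("""\
        import unittest

        class TestProcess(unittest.TestCase):
            def test_smoke(self):
                self.assertTrue(True)
    """))

# Equivalent to: python -m unittest discover -s tests
suite = unittest.TestLoader().discover(start_dir=tests_dir, pattern="test*.py")
print(suite.countTestCases())
```

A file named process_test.py in the same folder would be silently skipped unless you pass a different -p/--pattern.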

File structure

  • Example, not real code

    ```python
    # imports
    import unittest
    import unittest.mock
    # if we are working on AWS
    import boto3
    # external mock library for AWS
    from moto import mock_ec2
    # file to test
    from scripts.process import Process


    # activate mock from moto
    @mock_ec2
    class TestProcess(unittest.TestCase):

        # VARIABLES
        # private constants
        __LOCATION = "eu-west-1"
        # dummy data returned by the Helper mock (placeholder value)
        __DUMMY_ROLE_DICT = {"role": "dummy"}

        # instance to test
        process = Process()

        # SET UP TEST INTEGRITY
        # test setup
        def setUp(self):
            # initialize, generic for tests, to keep integrity
            self.started_ec2_data = {}

        def tearDown(self):
            # clean up after each test, to keep integrity
            self.process = Process()

        # SET UP MOCKS
        def get_mock_helper(self):
            # mock code from another class
            helper = unittest.mock.Mock()
            helper.get_dict.return_value = self.__DUMMY_ROLE_DICT
            return helper

        # TESTS
        def test_report_ec2_has_instance(self):
            # set up input
            dict_profiles = {"demo-dev-1": "arn:demo_dev_1"}

            # patch mocks for external calls on the file under test
            with unittest.mock.patch(
                "scripts.process.Helper",
                return_value=self.get_mock_helper()
            ):
                # invoke method to test
                response = self.process.report_ec2(dict_profiles)
                # assertion to check results
                assert response["report"] != [] and response["errors"] == []
    ```

Run tests

  • You can run the tests manually via

    ```shell
    python -m unittest discover -s "./tests/"
    ```

  • You may also check the code coverage for those unit tests.

    ```shell
    # install
    python -m pip install coverage
    # run (options go before -m; everything after it belongs to unittest)
    python -m coverage run --source="scripts" -m unittest discover
    # get coverage report on CLI
    python -m coverage report
    # get html coverage
    python -m coverage html
    ```

E2E tests on the UI, with Gherkin and Python

Write Gherkin files

Project structure

  • Example project structure

    ```
    * scripts
        * process.py
    * tests
        * test_process.py
    * features
        * features
            * application.feature
        * images
            * application_running.png
        * steps
            * application.py
            * application_walkthrough.py
    * README.md
    * requirements.txt
    * .pylintrc
    ```

  • Naming: source files have the .feature extension.
  • Location: under the features folder, a single Gherkin source file contains the description of a single feature.
    • feature_tests: .feature files
    • images: screenshots, sorted by platform folder
    • steps: .py test steps

File structure

.feature files

  • Every .feature file must consist of a single feature.
  • A line starting with the keyword Feature, followed by a name and an optional description on the following indented lines, starts a feature.
  • A feature usually contains a list of scenarios, each starting with the Scenario keyword on a new line.
    • Every scenario starts with the Scenario: keyword (or a localized one), followed by an optional scenario title.
    • Every scenario has steps:
      • Given.
      • When.
      • Then.
    • If scenarios are repetitive, you may use scenario outlines, which define a template with placeholders.
  • You can use tags to group features and scenarios together, independently of your file and directory structure.
  • Background: like an untitled scenario, containing a number of steps. The background is run before each of your scenarios, but after your BeforeScenario hooks.

Examples

  • Basic example

    ```gherkin
    Feature: Serve coffee
      In order to earn money Customers should be able to buy coffee at all times

      Scenario: Buy last coffee
        Given there are 1 coffees left in the machine
        And I have deposited 1 dollar
        When I press the coffee button
        Then I should be served a coffee
    ```

  • Outline example

    ```gherkin
    Scenario Outline: Eating
      Given there are <start> cucumbers
      When I eat <eat> cucumbers
      Then I should have <left> cucumbers

      Examples:
        | start | eat | left |
        |    12 |   5 |    7 |
        |    20 |   5 |   15 |
    ```

  • Background example

    ```gherkin
    Feature: Multiple site support

      Background:
        Given a global administrator named "Greg"
        And a blog named "Greg's anti-tax rants"
        And a customer named "Wilson"
        And a blog named "Expensive Therapy" owned by "Wilson"

      Scenario: Wilson posts to his own blog
        Given I am logged in as Wilson
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."

      Scenario: Greg posts to a client's blog
        Given I am logged in as Greg
        When I try to post to "Expensive Therapy"
        Then I should see "Your article was published."
    ```

Get Python files

Quick how-to

  • Gherkin files are run with behave, which maps each step to a Python function; pyautogui can automate the UI from within those steps

    ```shell
    # install
    python -m pip install behave
    python -m pip install pyautogui
    # use
    python -m behave
    # python -m behave features/
    ```

Code example

  • original features/features/example.feature file

    ```gherkin
    Feature: Showing off behave

      Scenario: Run a simple test
        Given we have behave installed
        When we implement 5 tests
        Then behave will test them for us!
    ```

  • resulting features/steps/example.py file

    ```python
    from behave import given, when, then, step


    @given('we have behave installed')
    def step_impl(context):
        pass


    @when('we implement {number:d} tests')
    def step_impl(context, number):
        # number is converted into an integer
        assert number > 1 or number == 0
        context.tests_count = number


    @then('behave will test them for us!')
    def step_impl(context):
        assert context.failed is False
        assert context.tests_count >= 0
    ```

Introduction

  • Terraform as a tool is cloud agnostic: it will support anything that exposes an API and has enough developer support to create a "provider" for it.
  • Terraform by itself will not natively abstract between clouds at all; consider carefully whether doing so is a good idea unless you have a really strong use case.
    • If you did need to do this, you would need to:
      • build a set of modules that abstract the cloud layer away from the module users.
      • allow them to specify the cloud provider as a variable (potentially controllable from some outside script).
  • You can check the Terraform Registry for existing providers and modules.

Example: abstract when there is resource parity

Original DNS modules

Google Cloud DNS

  • modules/google/dns/record/main.tf

    ```hcl
    variable "count" {}

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    resource "google_dns_record_set" "frontend" {
      count = "${var.count}"
      name  = "${var.domain_name_record}.${var.domain_name_zone}"
      type  = "CNAME"
      ttl   = 300

      managed_zone = "${var.domain_name_zone}"

      rrdatas = ["${var.domain_name_target}"]
    }
    ```

AWS

  • modules/aws/dns/record/main.tf

    ```hcl
    variable "count" {}

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    data "aws_route53_zone" "selected" {
      count = "${var.count}"
      name  = "${var.domain_name_zone}"
    }

    resource "aws_route53_record" "www" {
      count   = "${var.count}"
      zone_id = "${data.aws_route53_zone.selected.zone_id}"
      name    = "${var.domain_name_record}.${data.aws_route53_zone.selected.name}"
      type    = "CNAME"
      ttl     = "60"
      records = ["${var.domain_name_target}"]
    }
    ```

Generic module including both cloud modules

  • modules/generic/dns/record/main.tf

    ```hcl
    variable "cloud_provider" { default = "aws" }

    variable "domain_name_record" {}
    variable "domain_name_zone" {}
    variable "domain_name_target" {}

    module "aws_dns_record" {
      source             = "../../aws/dns/record"
      count              = "${var.cloud_provider == "aws" ? 1 : 0}"
      domain_name_record = "${var.domain_name_record}"
      domain_name_zone   = "${var.domain_name_zone}"
      domain_name_target = "${var.domain_name_target}"
    }

    module "google_dns_record" {
      source             = "../../google/dns/record"
      count              = "${var.cloud_provider == "google" ? 1 : 0}"
      domain_name_record = "${var.domain_name_record}"
      domain_name_zone   = "${var.domain_name_zone}"
      domain_name_target = "${var.domain_name_target}"
    }
    ```

Providers

AWS: Set up an Apache server

  • Code

    • main.tf

      ```hcl
      # Create and bootstrap webserver
      # create EC2 server, parameters come from setup.tf file
      resource "aws_instance" "webserver" {
        ami                         = data.aws_ssm_parameter.webserver-ami.value
        instance_type               = "t3.micro"
        key_name                    = aws_key_pair.webserver-key.key_name
        associate_public_ip_address = true
        vpc_security_group_ids      = [aws_security_group.sg.id]
        subnet_id                   = aws_subnet.subnet.id
        # remote-exec runs this code on the instance,
        # using the parameters embedded in the connection block
        provisioner "remote-exec" {
          inline = [
            "sudo yum -y install httpd && sudo systemctl start httpd",
            "echo '<h1><center>My Test Website With Help From Terraform Provisioner</center></h1>' > index.html",
            "sudo mv index.html /var/www/html/"
          ]
          connection {
            type        = "ssh"
            user        = "ec2-user"
            private_key = file("~/.ssh/id_rsa")
            host        = self.public_ip
          }
        }
        tags = {
          Name = "webserver"
        }
      }
      ```
    • setup.tf
      ```hcl
      provider "aws" {
        region = "us-east-1"
      }

      # Create key-pair for logging into EC2 in us-east-1
      resource "aws_key_pair" "webserver-key" {
        key_name   = "webserver-key"
        public_key = file("~/.ssh/id_rsa.pub")
      }

      # Get Linux AMI ID using SSM Parameter endpoint in us-east-1
      data "aws_ssm_parameter" "webserver-ami" {
        name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
      }

      # Create VPC in us-east-1
      resource "aws_vpc" "vpc" {
        cidr_block           = "10.0.0.0/16"
        enable_dns_support   = true
        enable_dns_hostnames = true
        tags = {
          Name = "terraform-vpc"
        }
      }

      # Create IGW in us-east-1
      resource "aws_internet_gateway" "igw" {
        vpc_id = aws_vpc.vpc.id
      }

      # Get main route table to modify
      data "aws_route_table" "main_route_table" {
        filter {
          name   = "association.main"
          values = ["true"]
        }
        filter {
          name   = "vpc-id"
          values = [aws_vpc.vpc.id]
        }
      }

      # Create route table in us-east-1
      resource "aws_default_route_table" "internet_route" {
        default_route_table_id = data.aws_route_table.main_route_table.id
        route {
          cidr_block = "0.0.0.0/0"
          gateway_id = aws_internet_gateway.igw.id
        }
        tags = {
          Name = "Terraform-RouteTable"
        }
      }

      # Get all available AZs in VPC for master region
      data "aws_availability_zones" "azs" {
        state = "available"
      }

      # Create subnet #1 in us-east-1
      resource "aws_subnet" "subnet" {
        availability_zone = element(data.aws_availability_zones.azs.names, 0)
        vpc_id            = aws_vpc.vpc.id
        cidr_block        = "10.0.1.0/24"
      }

      # Create SG for allowing TCP/80 & TCP/22
      resource "aws_security_group" "sg" {
        name        = "sg"
        description = "Allow TCP/80 & TCP/22"
        vpc_id      = aws_vpc.vpc.id
        ingress {
          description = "Allow SSH traffic"
          from_port   = 22
          to_port     = 22
          protocol    = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
        ingress {
          description = "allow traffic from TCP/80"
          from_port   = 80
          to_port     = 80
          protocol    = "tcp"
          cidr_blocks = ["0.0.0.0/0"]
        }
        egress {
          from_port   = 0
          to_port     = 0
          protocol    = "-1"
          cidr_blocks = ["0.0.0.0/0"]
        }
      }

      output "Webserver-Public-IP" {
        value = aws_instance.webserver.public_ip
      }
      ```
  • On CLI

    ```shell
    terraform init
    terraform validate
    terraform plan
    terraform apply
    # the terraform provisioner tries to connect to the EC2 instance,
    # then runs the bootstrapped code
    ```

Azure: Deploy WebApp

  • Code

    • Basic main.tf for Azure

      ```hcl
      # Configure the Azure provider
      terraform {
        required_providers {
          azurerm = {
            source  = "hashicorp/azurerm"
            version = ">= 2.26"
          }
        }

        required_version = ">= 0.14.9"
      }

      provider "azurerm" {
        features {}
        skip_provider_registration = true
      }

      # Create a virtual network
      resource "azurerm_virtual_network" "vnet" {
        name                = "BatmanInc"
        address_space       = ["10.0.0.0/16"]
        location            = "Central US"
        resource_group_name = "<ADD YOUR RESOURCE GROUP NAME>"
      }
      ```
    • main.tf for a webapp on Azure

      ```hcl
      provider "azurerm" {
        version = "1.38"
      }

      resource "azurerm_app_service_plan" "svcplan" {
        name                = "Enter App Service Plan name"
        location            = "eastus"
        resource_group_name = "Enter Resource Group Name"

        sku {
          tier = "Standard"
          size = "S1"
        }
      }

      resource "azurerm_app_service" "appsvc" {
        name                = "Enter Web App Service Name"
        location            = "eastus"
        resource_group_name = "Enter Resource Group Name"
        app_service_plan_id = azurerm_app_service_plan.svcplan.id

        site_config {
          dotnet_framework_version = "v4.0"
          scm_type                 = "LocalGit"
        }
      }
      ```
  • CLI

    ```shell
    terraform init
    terraform validate
    terraform plan
    ```

Kubernetes deployment

  • Code

    • kubernetes.tf
      ```hcl
      terraform {
        required_providers {
          kubernetes = {
            source = "hashicorp/kubernetes"
          }
        }
      }

      variable "host" {
        type = string
      }

      variable "client_certificate" {
        type = string
      }

      variable "client_key" {
        type = string
      }

      variable "cluster_ca_certificate" {
        type = string
      }

      provider "kubernetes" {
        host = var.host

        client_certificate     = base64decode(var.client_certificate)
        client_key             = base64decode(var.client_key)
        cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
      }
      ```
    • terraform.tfvars
      ```hcl
      host                   = "DUMMY VALUE"
      client_certificate     = "DUMMY VALUE"
      client_key             = "DUMMY VALUE"
      cluster_ca_certificate = "DUMMY VALUE"
      ```
    • kind-config.yaml

      ```yaml
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        extraPortMappings:
        - containerPort: 30201
          hostPort: 30201
          listenAddress: "0.0.0.0"
      ```
  • CLI

    ```shell
    # PREPARE KUBERNETES
    # create Kubernetes cluster, using kind-cli
    kind create cluster --name lab-terraform-kubernetes --config kind-config.yaml
    kubectl cluster-info --context kind-lab-terraform-kubernetes
    # verify
    kind get clusters
    # get the server data
    kubectl config view --minify --flatten --context=kind-lab-terraform-kubernetes
    # put the server address and client key data into terraform.tfvars

    # DEPLOY IT
    terraform init
    terraform validate
    terraform plan
    terraform apply
    # validate "long-live-the-bat" exists
    kubectl get deployments
    ```

AWS EKS deployment

  • Code

    • kubernetes.tf
      ```hcl
      provider "kubernetes" {
        host                   = data.aws_eks_cluster.cluster.endpoint
        token                  = data.aws_eks_cluster_auth.cluster.token
        cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
      }
      ```
    • eks-cluster.tf
      ```hcl
      module "eks" {
        source          = "terraform-aws-modules/eks/aws"
        version         = "17.24.0"
        cluster_name    = local.cluster_name
        cluster_version = "1.20"
        subnets         = module.vpc.private_subnets

        tags = {
          Environment = "training"
          GithubRepo  = "terraform-aws-eks"
          GithubOrg   = "terraform-aws-modules"
        }

        vpc_id = module.vpc.vpc_id

        workers_group_defaults = {
          root_volume_type = "gp2"
        }

        worker_groups = [
          {
            name                          = "worker-group-1"
            instance_type                 = "t2.small"
            additional_userdata           = "echo foo bar"
            asg_desired_capacity          = 2
            additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
          },
          {
            name                          = "worker-group-2"
            instance_type                 = "t2.medium"
            additional_userdata           = "echo foo bar"
            additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
            asg_desired_capacity          = 1
          },
        ]
      }

      data "aws_eks_cluster" "cluster" {
        name = module.eks.cluster_id
      }

      data "aws_eks_cluster_auth" "cluster" {
        name = module.eks.cluster_id
      }
      ```
    • security-groups.tf
      ```hcl
      resource "aws_security_group" "worker_group_mgmt_one" {
        name_prefix = "worker_group_mgmt_one"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "10.0.0.0/8",
          ]
        }
      }

      resource "aws_security_group" "worker_group_mgmt_two" {
        name_prefix = "worker_group_mgmt_two"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "192.168.0.0/16",
          ]
        }
      }

      resource "aws_security_group" "all_worker_mgmt" {
        name_prefix = "all_worker_management"
        vpc_id      = module.vpc.vpc_id

        ingress {
          from_port = 22
          to_port   = 22
          protocol  = "tcp"

          cidr_blocks = [
            "10.0.0.0/8",
            "172.16.0.0/12",
            "192.168.0.0/16",
          ]
        }
      }
      ```
    • vpc.tf
      ```hcl
      variable "region" {
        default     = "us-east-1"
        description = "AWS region"
      }

      provider "aws" {
        region = "us-east-1"
      }

      data "aws_availability_zones" "available" {}

      locals {
        cluster_name = "education-eks-${random_string.suffix.result}"
      }

      resource "random_string" "suffix" {
        length  = 8
        special = false
      }

      module "vpc" {
        source  = "terraform-aws-modules/vpc/aws"
        version = "2.66.0"

        name                 = "education-vpc"
        cidr                 = "10.0.0.0/16"
        azs                  = data.aws_availability_zones.available.names
        private_subnets      = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
        public_subnets       = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
        enable_nat_gateway   = true
        single_nat_gateway   = true
        enable_dns_hostnames = true

        tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
        }

        public_subnet_tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
          "kubernetes.io/role/elb"                      = "1"
        }

        private_subnet_tags = {
          "kubernetes.io/cluster/${local.cluster_name}" = "shared"
          "kubernetes.io/role/internal-elb"             = "1"
        }
      }
      ```
    • versions.tf
      ```hcl
      terraform {
        required_providers {
          aws = {
            source  = "hashicorp/aws"
            version = ">= 3.20.0"
          }

          random = {
            source  = "hashicorp/random"
            version = "3.0.0"
          }

          local = {
            source  = "hashicorp/local"
            version = "2.0.0"
          }

          null = {
            source  = "hashicorp/null"
            version = "3.0.0"
          }

          template = {
            source  = "hashicorp/template"
            version = "2.2.0"
          }

          kubernetes = {
            source  = "hashicorp/kubernetes"
            version = ">= 2.0.1"
          }
        }

        required_version = "> 0.14"
      }
      ```
    • outputs.tf

      ```hcl
      output "cluster_id" {
        description = "EKS cluster ID."
        value       = module.eks.cluster_id
      }

      output "cluster_endpoint" {
        description = "Endpoint for EKS control plane."
        value       = module.eks.cluster_endpoint
      }

      output "cluster_security_group_id" {
        description = "Security group ids attached to the cluster control plane."
        value       = module.eks.cluster_security_group_id
      }

      output "kubectl_config" {
        description = "kubectl config as generated by the module."
        value       = module.eks.kubeconfig
      }

      output "config_map_aws_auth" {
        description = "A kubernetes configuration to authenticate to this EKS cluster."
        value       = module.eks.config_map_aws_auth
      }

      output "region" {
        description = "AWS region"
        value       = var.region
      }

      output "cluster_name" {
        description = "Kubernetes Cluster Name"
        value       = local.cluster_name
      }
      ```
    • lab_kubernetes_resources.tf (for nginx, added to the directory when required by the CLI steps)

      ```hcl
      resource "kubernetes_deployment" "nginx" {
        metadata {
          name = "long-live-the-bat"
          labels = {
            App = "longlivethebat"
          }
        }

        spec {
          replicas = 2
          selector {
            match_labels = {
              App = "longlivethebat"
            }
          }
          template {
            metadata {
              labels = {
                App = "longlivethebat"
              }
            }
            spec {
              container {
                image = "nginx:1.7.8"
                name  = "batman"

                port {
                  container_port = 80
                }

                resources {
                  limits = {
                    cpu    = "0.5"
                    memory = "512Mi"
                  }
                  requests = {
                    cpu    = "250m"
                    memory = "50Mi"
                  }
                }
              }
            }
          }
        }
      }
      ```
  • CLI

    ```shell
    # DEPLOY THE EKS CLUSTER
    terraform init
    terraform plan
    terraform apply
    # Configure kubectl to interact with the cluster
    aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
    # verify deployment
    kubectl get cs

    # DEPLOY NGINX PODS
    # add lab_kubernetes_resources.tf file
    terraform plan
    terraform apply
    # verify 2 "long-live-the-bat" pods are up and running
    kubectl get deployments

    # CLEAN UP
    terraform destroy
    # verify
    terraform show
    ```

Troubleshooting

```mermaid
graph LR

subgraph Primary UI
A[configuration language]
end

subgraph Metadata
B[state]
end

subgraph Resource graph comms
C[TF core application]
end

subgraph Auth mapping
D[Cloud Provider]
end

A --> B
B --> C
C --> D
```
  • Detect syntax errors in files

    ```shell
    terraform fmt
    ```

  • Detect errors in dependencies (e.g. cycles, invalid references, unsupported group vars)

    ```shell
    terraform validate
    ```

  • Detect version mismatches

    ```shell
    terraform version
    ```

  • Get a TF trace, setting environment variables

    ```shell
    export TF_LOG_CORE=TRACE
    export TF_LOG_PROVIDER=TRACE
    export TF_LOG_PATH=logs.txt
    terraform refresh
    ```

  • Detect state discrepancies

    ```shell
    terraform apply
    ```

Migrating Local Terraform

CLI Migration

  1. Check the version: the cloud block is available from Terraform v1.1; you also need a Terraform Cloud account
  2. Add the cloud block to your backend configuration

    ```hcl
    terraform {
      cloud {
        organization = "my-org"
        workspaces {
          tags = ["networking"]
        }
      }
    }
    ```

  3. Specify one or more Terraform Cloud workspaces for the state files
  4. Run terraform init

API migration

Integrations

VCS supported

  • GitHub
    • GitHub.com
    • GitHub.com (OAuth)
    • GitHub Enterprise
  • GitLab
    • GitLab.com
    • GitLab EE and CE
  • Bitbucket
    • Bitbucket Cloud
    • Bitbucket Server
  • Azure
    • Azure DevOps Server
    • Azure DevOps Services

How it works

  • Terraform Cloud needs to:

    • access a list of repositories, to let you search for repos when creating new workspaces.
    • register webhooks with your VCS provider, to get notified of new commits to a chosen branch.
    • download the contents of a repository at a specific commit in order to run Terraform with that code.
  • Webhooks: monitor new commits and pull requests.

    • When someone adds new commits to a branch, any Terraform Cloud workspaces based on that branch will begin a Terraform run. Usually a user must inspect the plan output and approve an apply, but you can also enable automatic applies on a per-workspace basis. You can prevent automatic runs by locking a workspace.
    • When someone submits a pull request/merge request to a branch, any Terraform Cloud workspaces based on that branch will perform a speculative plan with the contents of the request and link to the results on the PR's page. This helps you avoid merging PRs that cause plan failures.
  • SSH keys: required for Azure DevOps and Bitbucket, and only for cloning; all other operations can be done over HTTPS.

Workspaces in Terraform Cloud

| Component | Local Terraform | Terraform Cloud |
|---|---|---|
| Terraform configuration | On disk | In linked version control repository, or periodically uploaded via API/CLI |
| Variable values | As .tfvars files, as CLI arguments, or in shell environment | In workspace |
| State | On disk or in remote backend | In workspace |
| Credentials and secrets | In shell environment or entered at prompts | In workspace, stored as sensitive variables |
  • It uses workspaces instead of directories
    • State versions: Each workspace retains backups of its previous state files. Although only the current state is necessary for managing resources, the state history can be useful for tracking changes over time or recovering from problems.
    • Run history: When Terraform Cloud manages a workspace’s Terraform runs, it retains a record of all run activity, including summaries, logs, a reference to the changes that caused the run, and user comments.
  • After creating a workspace
    • Allows you to edit variables
      • terraform.tfvars or -var-file=terraform.tfvars
      • environment variables with export on shell
      • special environment variables (e.g. TFE_PARALLELISM, which sets the -parallelism=<N> flag for terraform plan and terraform apply)
    • Creates a webhook with VCS
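As an illustration of the variable options above, a minimal shell sketch (the variable name region and its value are hypothetical):

```shell
# Hypothetical input variable "region" passed via the environment (TF_VAR_* prefix)
export TF_VAR_region="eu-west-1"
# Pass variables explicitly from a file
terraform plan -var-file=terraform.tfvars
# The -parallelism flag that TFE_PARALLELISM controls on Terraform Cloud workers
terraform apply -parallelism=5
```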

Runs in Terraform Cloud

  • Each workspace lists its runs and links to their details.
  • Runs can also be triggered via the CLI or API.

Run workflows

  • UI/VCS-driven: primary mode of operation
  • API-driven: more flexible but requires you to create some tooling
  • CLI-driven: uses Terraform’s standard CLI tools to execute runs in Terraform Cloud
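The CLI-driven workflow can be sketched as follows, assuming the working directory's configuration already points at a Terraform Cloud workspace via its backend block:

```shell
# Authenticate against app.terraform.io and store the API token locally
terraform login
# Initialise against the Terraform Cloud workspace
terraform init
# Plan and apply are queued and executed remotely, with output streamed to the CLI
terraform plan
terraform apply
```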

Environment

  • Terraform worker VMs: single-use Linux VMs
  • Network access to VCS and Infra providers: needs access to all resources managed
  • Concurrency and run queueing: uses multiple workers
  • State access and authentication: stores state for its workspaces

Environment variables

Variable Name | Description | Example
TFC_RUN_ID | A unique identifier for this run | run-CKuwsxMGgMd4W7Ui
TFC_WORKSPACE_NAME | The name of the workspace used in this run | prod-load-balancers
TFC_WORKSPACE_SLUG | The full slug of the configuration used in this run; the organization name and workspace name, joined with a slash | acme-corp/prod-load-balancers
TFC_CONFIGURATION_VERSION_GIT_BRANCH | The name of the branch that the associated Terraform configuration version was ingressed from | main
TFC_CONFIGURATION_VERSION_GIT_COMMIT_SHA | The full commit hash of the commit that the associated Terraform configuration version was ingressed from | abcd1234...
TFC_CONFIGURATION_VERSION_GIT_TAG | The name of the tag that the associated Terraform configuration version was ingressed from | v0.1.0
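These are plain environment variables inside the worker VM, so a wrapper script can read them. A small sketch (the value is the table's own example) splitting TFC_WORKSPACE_SLUG into organization and workspace:

```shell
# Example value from the table above; on a real worker this is already set
TFC_WORKSPACE_SLUG="acme-corp/prod-load-balancers"
# The slug is "<organization>/<workspace>", so split on the slash
ORG="${TFC_WORKSPACE_SLUG%%/*}"
WORKSPACE="${TFC_WORKSPACE_SLUG#*/}"
echo "$ORG"        # acme-corp
echo "$WORKSPACE"  # prod-load-balancers
```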

Run States and Stages

graph TD

subgraph PENDING
A[pending]
end

subgraph PLAN
B[fetching]
C[planning]
D[confirmation]
end

subgraph POST-PLAN
E[running tasks]
end

subgraph COST-ESTIMATION
F[cost estimating]
G[cost estimated]
end

subgraph POLICY-CHECK
H[policy check]
I[policy override]
J[policy checked]
end

subgraph APPLY
K[applying]
end

subgraph COMPLETED
T[applied]
U[apply errored]
V[plan errored]
W[planned and finished]
X[cancelled]
Y[plan errored]
Z[discarded]
end

A -- user discards before start --> Z
A -- 1st in queue --> B

B -- cannot connect to VCS --> Y
B -- success --> C
C -- cancel --> X
C -- terraform plan failed --> Y
C -- success and no further policy stages required --> W
C -- success --> D
D -- success, run tasks enabled --> E
D -- success, cost estimation enabled --> F
D -- success, no cost estimation, sentinel policies --> H
D -- success, no cost estimation, no policies, auto-apply --> K
D -- success, no cost estimation, no policies, no auto-apply, user authorizes manually --> K
D -- success, no cost estimation, no policies, no auto-apply, user rejects manually --> Z

E -- mandatory tasks failed --> V
E -- advised tasks failed --> K
E -- user cancels --> X

F -- success --> G
G -- no policy checks nor applies --> W
G -- success --> H

H -- hard policy fails --> V
H -- soft policy fails --> I
H -- success --> J
I -- user overrides --> J
I -- user discards --> Z
J -- auto-applied or manual apply --> K
J -- user discards --> Z

K -- success --> T
K -- fails --> U
K -- user cancels --> X
  • Pending Stage
    • Pending state: Terraform Cloud hasn’t started action on a run yet. Terraform Cloud processes each workspace’s runs in the order they were queued, and a run remains pending until every run before it has completed.
  • The Plan Stage: A run goes through different steps during the plan stage depending on whether or not Terraform Cloud needs to fetch the configuration from VCS. Terraform Cloud automatically archives configuration versions created through VCS when all runs are complete and then re-fetches the files for subsequent runs.
    • Fetching: If Terraform Cloud has not yet fetched the configuration from VCS, the run will go into this state until the configuration is available.
    • Planning: Terraform Cloud is currently running terraform plan.
    • Needs Confirmation: terraform plan has finished. Runs sometimes pause in this state, depending on the workspace and organization settings.
  • Post-Plan Stage: The post-plan phase executes any configured run tasks after the plan is complete, so it only occurs if there are run tasks enabled for the workspace. All runs can enter this phase, including speculative plans. During this phase, Terraform Cloud sends information about the run to the configured external system and waits for a passed or failed response to determine whether the run can continue.
    • Running tasks: Terraform Cloud is waiting for a response from the configured external system(s).
      • External systems must respond initially with a 200 OK acknowledging the request is in progress. After that, they have 10 minutes to return a status of passed, running, or failed, or the timeout will expire and the task will be assumed to be in the failed status.
  • Cost Estimation Stage: only occurs if cost estimation is enabled. After a successful terraform plan, Terraform Cloud uses plan data to estimate costs for each resource found in the plan.
    • Cost Estimating: Terraform Cloud is currently estimating the resources in the plan.
    • Cost Estimated: The cost estimate completed.
  • Policy Check Stage: This stage only occurs if Sentinel policies are enabled. After a successful terraform plan, Terraform Cloud checks whether the plan obeys policy to determine whether it can be applied.
    • Policy Check: Terraform Cloud is currently checking the plan against the organization’s policies.
    • Policy Override: The policy check finished, but a soft-mandatory policy failed, so an apply cannot proceed without approval from a user with permission to manage policy overrides for the organization. The run pauses in this state.
    • Policy Checked: The policy check succeeded, and Sentinel will allow an apply to proceed. The run sometimes pauses in this state, depending on workspace settings.
      Leaving this stage, the run either proceeds to the apply stage (auto-apply or manual confirmation) or is discarded by the user.
  • Apply Stage
    • Applying: Terraform Cloud is currently running terraform apply.
  • Completion
    • Applied: The run was successfully applied.
    • Planned and Finished: terraform plan’s output already matches the current infrastructure state, so terraform apply doesn’t need to do anything.
    • Apply Errored: The terraform apply command failed, possibly due to a missing or misconfigured provider or an illegal operation on a provider.
    • Plan Errored: The terraform plan command failed (usually requiring fixes to variables or code), or a hard-mandatory Sentinel policy failed. The run cannot be applied.
    • Discarded: A user chose not to continue this run.
    • Canceled: A user interrupted the terraform plan or terraform apply command with the “Cancel Run” button.

Users, Teams, and Organizations

  • Users: individual members
  • Teams: group of users
    • Owner team: can manage an organization
  • Organizations: shared space of teams to collaborate on workspaces

Migration example

  1. Check terraform infra locally
  2. Generate your access keys and API keys on Terraform Cloud
  3. Set up your workspace session on https://app.terraform.io/session
  4. Create Your API Token for Terraform CLI Login
  5. Prepare your backend configuration on main.tf
    terraform {
      backend "remote" {
        organization = "<YOUR ORG NAME>"
        workspaces {
          name = "lab-migrate-state"
        }
      }

      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.27"
        }
      }
    }
  6. On CLI
    # INIT
    # login
    terraform login
    # format files
    terraform fmt
    terraform init
    # verify terraform.tfstate.backup exists
    ls
    # clean up
    rm -rf terraform.tfstate

    # APPLY
    terraform apply
    # confirm state on cloud via browser
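The terraform login step above stores the API token in the CLI credentials file (~/.terraform.d/credentials.tfrc.json on Linux/macOS). A sketch of its shape, with a placeholder token:

```json
{
  "credentials": {
    "app.terraform.io": {
      "token": "<YOUR API TOKEN>"
    }
  }
}
```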