Infrastructure as Code repository for the TerraHarbor project
This repository provides the configurations and scripts used to deploy and manage the infrastructure of the TerraHarbor demo instance, available here.
The repository is organized in the following structure:
This folder contains the Terraform code for creating the cloud infrastructure on Azure, where we deploy our TerraHarbor demo instance.
Note
This Terraform code can be run locally, using the Terraform CLI, but it is usually applied with the GitHub Actions workflows in this repository.
In summary, this code does the following:
- Creates a new resource group in Azure.
- Creates a new virtual network and private subnet, along with a static public IP that allows access to the demo instance.
- Creates a new network security group allowing inbound SSH, HTTP, and HTTPS traffic, as well as a network interface attached to it.
- Creates a new Ubuntu virtual machine in the specified subnet, attached to said network interface and public IP. This virtual machine is configured to be accessed via SSH using a public key.
Important
The Azure resources are deployed in the Azure for Students subscription of the user @lentidas.
Authentication against the Azure API is done using a Service Principal with the necessary permissions. Its credentials are stored as GitHub Secrets on this repository.
This folder contains the Ansible playbooks and roles for configuring the VM and deploying the application using Docker Compose.
The azure_vms_config.yaml playbook is used to configure the VM created by Terraform. It updates the system packages and installs Docker, Docker Compose, and any other necessary dependencies.
Note
The azure_vms_config.yaml playbook is called automatically by the respective GitHub Actions workflows whenever a Terraform apply is performed. It can also be triggered manually in the Actions tab.
The docker_apply.yaml playbook is used to deploy the application services defined in the Docker Compose files. It clones the necessary repositories to get the Docker Compose files and scripts, then runs Docker Compose inside the VM.
Note
The docker_apply.yaml playbook is called automatically by the respective GitHub Actions workflow whenever a change in the docker-compose.yaml file is detected or when a new release of the terraharbor/backend or terraharbor/frontend repositories is published. It can also be triggered manually in the Actions tab.
This folder contains the Docker Compose files for deploying the application along with its dependencies:
docker-compose.yaml: the canonical Docker Compose file used to deploy our demonstration instance:
- Defines an internal network shared between all the services and the required volumes (for the SQL database data, the Terraform state files, and the Let's Encrypt certificates).
- Deploys a PostgreSQL database, an instance of the backend container, an instance of the frontend container, and a reverse proxy (Traefik) to manage the incoming traffic and provide SSL termination.
- The traffic directed to the /state endpoint is routed directly to the backend container, because that is the endpoint the Terraform client talks to.
docker-compose-local.yaml: a Docker Compose file used for local development and testing; it is similar to the canonical file except for a few differences:
- Instead of downloading the prebuilt images of the backend and frontend containers, it builds them from the local Dockerfiles.
- There is no SSL termination on the reverse proxy.
- The PostgreSQL database, the backend container, and Traefik all have their local ports exposed to the host for debugging and testing purposes.
Note
This section is aimed at contributors who want to run Docker Compose locally for development and testing purposes.
- First, ensure you have Docker and Docker Compose installed on your machine.
- Clone the project repositories onto your machine (at a minimum terraharbor/backend, terraharbor/frontend, and terraharbor/infrastructure). You should find yourself with a folder tree similar to the following (this is important, because the docker-compose-local.yaml file uses relative paths):

```
.
├── backend
|   ├── ...
|   └── Dockerfile
├── frontend
|   ├── ...
|   └── Dockerfile
└── infrastructure
    ├── ...
    └── docker-compose
```

- Get into the infrastructure/docker-compose folder.
- Create a .env.local file with the following content (adjust the passwords as needed):

```
PG_ROOT_PASSWORD=rootpass
TERRAHARBOR_DB_PASSWORD=terraharborpass
```

- Then start the services using Docker Compose:
```shell
# Start the services attached to the logs.
docker compose --file docker-compose-local.yaml --env-file .env --env-file .env.local up --build

# Alternatively, add the detached flag `-d` to run in the background.
docker compose --file docker-compose-local.yaml --env-file .env --env-file .env.local up --build -d

# View the logs of the running services.
docker compose --file docker-compose-local.yaml --env-file .env --env-file .env.local logs -f
```
Important
The --build flag ensures that the Docker images are rebuilt from the local Dockerfiles on every run, rather than using any cached images.
- When done testing, you can destroy the containers and networks created by Docker Compose:
```shell
docker compose --file docker-compose-local.yaml --env-file .env --env-file .env.local down
```
Tip
If you desire a truly clean environment, you can remove the persistent volumes when bringing down the containers:
```shell
docker compose --file docker-compose-local.yaml --env-file .env --env-file .env.local down --volumes
```

Be warned that any users/projects/tokens in the SQL database will be lost, along with any Terraform state files.
- Clone the terraharbor/demo repository to access the initialization scripts and Terraform code for demonstration purposes.
- Run the initialization scripts to populate the instance with data (take note of the credentials printed at the end of the script):

```shell
# Get into the demo init-scripts folder.
cd ./demo/init-scripts

# Make the script executable.
chmod +x init-data.sh

# Run the script.
./init-data.sh
```
- You can now access the frontend at http://localhost, and Terraform can use the backend at http://localhost/state.
- You can use the example Terraform code to bootstrap a state in a project:

```shell
# Get into the demo terraform folder.
cd ./demo/terraform-examples/null-resource

# Initialize the Terraform working directory.
export TERRAHARBOR_USER=administrator
export TERRAHARBOR_PASSWORD=<the-admin-password-printed-by-the-init-script>
export TF_PROJECT_ID=1
export TF_STATE_NAME=main
terraform init \
  -backend-config="address=http://localhost/state/$TF_PROJECT_ID/$TF_STATE_NAME" \
  -backend-config="lock_address=http://localhost/state/$TF_PROJECT_ID/$TF_STATE_NAME" \
  -backend-config="unlock_address=http://localhost/state/$TF_PROJECT_ID/$TF_STATE_NAME" \
  -backend-config="username=$TERRAHARBOR_USER" \
  -backend-config="password=$TERRAHARBOR_PASSWORD" \
  -backend-config="lock_method=LOCK" \
  -backend-config="unlock_method=UNLOCK" \
  -backend-config="retry_wait_min=5"

# Apply the example configuration.
terraform apply
```
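Since the backend speaks Terraform's HTTP state backend protocol, you can also fetch the current state document directly; the URL shape mirrors the `address` used in `terraform init` above. This is just an illustration of the protocol, not an official client:

```shell
# Fetch the latest state document for the project/state pair used above.
curl --fail --silent \
  --user "$TERRAHARBOR_USER:$TERRAHARBOR_PASSWORD" \
  "http://localhost/state/$TF_PROJECT_ID/$TF_STATE_NAME"
```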
- You can run `terraform apply` as many times as you want, and you will see new versions appear in the Terraform state history in the frontend.