Terraform, docker, Debian 8 - docker

I am a beginner with Terraform and I am looking for help; I tried Google, but I could not find a solution that works for me.
I have a Debian 8 server on which I installed Docker and Terraform successfully. Now I need to create a Docker container running Ubuntu and set up SSH access to it with Terraform. My Terraform config creates a Docker container, sets the image, and configures the Docker provider, but I cannot find how to set up SSH access to the container or install additional software in it.
Terraform config:
# Configure the Docker provider
provider "docker" {
  host = "tcp://127.0.0.1:2376/"
}

# Definition of ubuntu image
resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}

# Create a container
resource "docker_container" "Ubn_Con" {
  image = "${docker_image.ubuntu.latest}"
  name  = "Ubn_Con"
}
Thank you for any help.

The docker_container resource has an attribute called network_data, which contains ip_address. That is the IP address of your container, so you could use it for SSH.
However, Jan Mesarc is correct: you do not need to SSH into a container to set it up with software (or ever, actually, but that's a longer story). Instead, you create an image for the container to be brought up from, using a Dockerfile.
For example:
FROM ubuntu:latest

RUN apt-get update && \
    apt-get install -y curl
Then you build that image with docker build . -t ubuntu-curl:0.0.1 and upload it to Docker Hub. If you want to use another registry, just change the value of -t to include the registry's full URL.
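As a rough sketch of those commands (your-dockerhub-user is a placeholder; if you push under a Docker Hub user namespace, use that full name in the docker_image resource below as well):
# Build the image from the Dockerfile in the current directory
docker build . -t ubuntu-curl:0.0.1

# To push it to Docker Hub, tag it under your account and push
# (your-dockerhub-user is a placeholder)
docker tag ubuntu-curl:0.0.1 your-dockerhub-user/ubuntu-curl:0.0.1
docker push your-dockerhub-user/ubuntu-curl:0.0.1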
Then you can use that image in your docker_image resource:
resource "docker_image" "ubuntu" {
name = "ubuntu-curl:0.0.1"
}
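A rough sketch of how the container could then use this image, with the container's IP from network_data exposed as an output (assuming a reasonably recent Terraform and Docker provider version; the output name is just an example, and SSH will only work if the image actually runs an SSH daemon):
resource "docker_container" "Ubn_Con" {
  image = docker_image.ubuntu.latest
  name  = "Ubn_Con"
}

# The container's IP address, usable for SSH if sshd is running in the image
output "container_ip" {
  value = docker_container.Ubn_Con.network_data[0].ip_address
}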

Related

Permission denied on docker pull from Google Cloud Compute

So I set up a minimal Google Cloud Compute instance with terraform and want to use a docker image on it. The desired image is pushed to an artifact repository in the same project.
Error
The issue is that whatever I do, when I try to pull with the pull command specified in the artifact repository, I get:
sudo docker pull europe-west3-docker.pkg.dev/[project]/[repo]/[image]:latest
Error response from daemon:
Head "https://europe-west3-docker.pkg.dev/v2/[project]/[repo]/api/manifests/latest": denied:
Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/[project]/locations/europe-west3/repositories/[repo]" (or it may not exist)
Debugging attempts
What I've tried:
The default service account should have access without any additional setup. To debug and make sure nothing goes wrong, I tried creating a service account with the necessary role myself (a rough Terraform sketch of such a role binding appears after the code below).
Tried to debug access with the policy troubleshooting tool, access should be possible
Made sure to enable docker auth with gcloud auth configure-docker europe-west3-docker.pkg.dev and debugged the used account with gcloud auth list.
Tried the pull command on my local machine, works flawlessly
Tried to access the repo via gcloud artifacts docker images list [repo], works fine as well
Tried to run gcloud init
Tried pulling images from the official docker repo, also works flawlessly
Terraform code for reference
#
# INSTANCE
#
resource "google_compute_instance" "mono" {
  name                      = "mono"
  machine_type              = "n1-standard-1"
  allow_stopping_for_update = true # allows hard updating

  service_account {
    scopes = ["cloud-platform"]
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {
    }
  }
}

#
# REPO
#
resource "google_artifact_registry_repository" "repo" {
  location      = var.location
  repository_id = var.project_name
  format        = "DOCKER"
}
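For reference, a role binding like the one mentioned in the debugging attempts could look roughly like this in Terraform (this is a sketch, not part of the original code; the resource name is an example, PROJECT_NUMBER is a placeholder for the project number, and roles/artifactregistry.reader is the standard pull role):
# Hedged sketch: grant pull access on the repository to the instance's service account.
# PROJECT_NUMBER is a placeholder for your project number.
resource "google_artifact_registry_repository_iam_member" "reader" {
  location   = var.location
  repository = google_artifact_registry_repository.repo.repository_id
  role       = "roles/artifactregistry.reader"
  member     = "serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com"
}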
Solution
After wasting too many hours on this, I found that after adding my user to the docker group, and thus running docker without sudo, it suddenly worked.
So as laid out here: https://docs.docker.com/engine/install/linux-postinstall/
sudo groupadd docker
sudo usermod -aG docker $USER
# exit and login again via ssh
Explanation
As per the docs:
Note: If you normally run Docker commands on Linux with sudo, Docker looks for Artifact Registry credentials in /root/.docker/config.json instead of $HOME/.docker/config.json. If you want to use sudo with docker commands instead of using the Docker security group, configure credentials with sudo gcloud auth configure-docker instead.
So if you use docker with sudo, you also need to run
sudo gcloud auth configure-docker
I thought I had tried this, but it is what it is...

Creating Docker containers using Terraform - Error pinging Docker server

I want to create an nginx-based Docker container using Terraform.
HCL:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = "false"
}

resource "docker_container" "nserver" {
  image = docker_image.nginx.latest
  name  = "nginx_server"

  ports {
    internal = 80
    external = 9090
  }
}
But I'm getting an error:
Error pinging Docker server: Cannot connect to the Docker daemon at
unix:///var/run/docker.sock. Is the docker daemon running?
If the same error occurred with Docker itself, I would just start/enable Docker using the "sudo systemctl start/enable docker" command.
But how should I deal with this error in Terraform?
Please Help!
I'll share the solution for my case (I'm on Ubuntu 22.04 with Docker Desktop and Terraform).
Check your DOCKER ENDPOINT by opening a terminal and typing:
docker context ls
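The output will look roughly like the following; the context names and endpoint paths are illustrative and vary per machine:
NAME              DESCRIPTION                               DOCKER ENDPOINT
default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
desktop-linux *   Docker Desktop                            unix:///home/user/.docker/desktop/docker.sock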
Look at the output and copy your DOCKER ENDPOINT.
Open your main.tf and change the provider block, e.g.:
provider "docker" {
host ="unix:///home/user/.docker/desktop/docker.sock"
}
Save it and run again:
terraform init
terraform apply
Docs that helped me resolve this issue:
https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
https://developer.hashicorp.com/terraform/language/syntax
https://docs.docker.com/desktop/faqs/linuxfaqs/#what-is-the-difference-between-docker-desktop-for-linux-and-docker-engine

How to create an Airflow task where I start a Docker container with GPU support

How could I create an Airflow task that starts a Docker container using the GPU? When running from the terminal I would just use the --gpus all flag. I can't do that with DockerOperator, because it does not support the device_requests parameter, which is what is used underneath when docker run is called with the --gpus all flag.
Okay, for anyone in the future - I figured it out. First, you need to mount the Docker daemon socket inside the Airflow Docker container. Do this by adding the following to the volumes section of the Airflow services in the docker-compose file:
- /var/run/docker.sock:/var/run/docker.sock
Then you need to create a new Docker image based on the Airflow image and install the Docker Python SDK, e.g.:
# syntax=docker/dockerfile:1
FROM apache/airflow:2.2.0-python3.7
RUN pip install docker
Then you can create tasks based on PythonOperator, where you use the docker library to create new containers. Example task (the output is not pretty):
import docker


def start_gpu_container(**kwargs):
    # Connect to the Docker daemon through the mounted socket
    client = docker.from_env()
    # Equivalent of `docker run --gpus all`: request all available GPUs
    response = client.containers.run(
        'tensorflow/tensorflow:latest-gpu',
        'nvidia-smi',
        device_requests=[
            docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])
        ]
    )
    return str(response)
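As a rough sketch (not from the original post), the callable could be wired into a DAG with PythonOperator like this; the DAG id, schedule, and dates are illustrative assumptions:
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="gpu_container_example",  # assumed name, purely illustrative
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    start_gpu = PythonOperator(
        task_id="start_gpu_container",
        python_callable=start_gpu_container,  # the function defined above
    )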

Docker Build CMD fail yum not able to install the requirements

My docker build command is failing to create an image from the Dockerfile: yum is not able to install the requirements. (The error was shown in a screenshot that is not reproduced here.)
Check if you can access the site on the host machine.
Check your docker networking, for a docker VM, it is usually a bridge network by default.
Check if you need to add the repository to YUM.
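Rough shell examples of those checks; the repository URL and image are placeholders, not taken from the question:
# 1. Can the host itself reach the repository?
curl -I https://mirror.example.com/centos/7/os/x86_64/

# 2. Which networks does Docker have? The default is usually a bridge network.
docker network ls

# 3. Can a container on that network reach the repository?
docker run --rm centos:7 curl -I https://mirror.example.com/centos/7/os/x86_64/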

Push\Pull docker images to Artifactory

I'm trying to push Docker images to Artifactory as part of a CI Jenkins job.
I have an Artifactory installation with the URL art:8080.
I installed Docker on Windows Server 2016 and built my Dockerfile.
Now I'm stuck on how to push the image built from the Dockerfile.
I tried:
docker tag microsoft/windowsservercore art:8080/imageID:latest
docker push art:8080/docker-local:latest
but I get an error stating:
Get https://art:8080/v2/: dial tcp: lookup artifactory: getaddrinfow: No such host is known.
Where is the https coming from?
How do I push to the correct local Docker repository in my Artifactory?
Docker requires you to use https. What I do (I use Nexus, not Artifactory) is set up a reverse proxy using nginx. Here is the doc for that: https://www.jfrog.com/confluence/display/RTF/Configuring+a+Reverse+Proxy
Alternatively, you can configure Docker not to require https (though this is not recommended).
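A rough sketch of that insecure-registry option: add the registry address from the question to the Docker daemon configuration (C:\ProgramData\docker\config\daemon.json on Windows Server, /etc/docker/daemon.json on Linux) and restart the Docker service:
{
  "insecure-registries": ["art:8080"]
}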
Since you're also asking how to pull, these steps worked for an enterprise Artifactory where the CA certificates are not trusted outside the organization:
$ sudo mkdir -p /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo cp /tmp/ca.crt /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo chown root:docker /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
$ sudo chmod 740 /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
Where ca.crt is the base-64 chain of trusted CA certificates and <artifactory-resolverhost> is the resolver hostname of the repository, for example repo.jfrog.org if you were using the public repository. To confirm, you can ping <artifactory-resolverhost> to make sure it is reachable from your network.
Then you should be able to pull an image with a user belonging to the docker group, for example:
docker pull docker-<artifactory-resolverhost>/<repository-name>/rhel7-tomcat:8.0.18_4
You can then view the downloaded image with the following command:
docker images
