Creating Docker containers using Terraform - Error pinging Docker server - docker

I want to create an nginx-based Docker container using Terraform.
HCL:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nserver" {
  image = docker_image.nginx.latest
  name  = "nginx_server"

  ports {
    internal = 80
    external = 9090
  }
}
But I'm getting an error:
Error pinging Docker server: Cannot connect to the Docker daemon at
unix:///var/run/docker.sock. Is the docker daemon running?
If the same error occurred with Docker itself, I would just start/enable Docker using the "sudo systemctl start/enable docker" command.
But how should I deal with this error in Terraform?
Please help!

I'll share the solution for my case (I'm on Ubuntu 22.04 with Docker Desktop and Terraform).
Check your DOCKER ENDPOINT by opening a terminal and typing:
docker context ls
Look at the output and copy your DOCKER ENDPOINT.
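On Docker Desktop for Linux the output looks roughly like this (the user name and paths here are illustrative; yours will differ):
NAME              DESCRIPTION                               DOCKER ENDPOINT
default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
desktop-linux *   Docker Desktop                            unix:///home/user/.docker/desktop/docker.sock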
Open your main.tf and change the provider block, e.g.:
provider "docker" {
host ="unix:///home/user/.docker/desktop/docker.sock"
}
Save and run again:
terraform init
terraform apply
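For completeness, this is how the top of main.tf looks with the host override in place; the socket path is the one copied from docker context ls, and the image/container resources stay unchanged:
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# point the provider at the Docker Desktop socket instead of the
# default unix:///var/run/docker.sock (path copied from "docker context ls")
provider "docker" {
  host = "unix:///home/user/.docker/desktop/docker.sock"
}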
Docs that helped me resolve this issue:
https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli
https://developer.hashicorp.com/terraform/language/syntax
https://docs.docker.com/desktop/faqs/linuxfaqs/#what-is-the-difference-between-docker-desktop-for-linux-and-docker-engine

Related

Problems with Microk8s registry

I have two virtual machines: one with microk8s and another without microk8s. In order to build containers, I use the microk8s registry to save my Docker image. To achieve this, I execute these commands:
microk8s enable registry
docker build . -t dirIPoftheVM:32000/vnf-image:registry
echo '{"insecure-registries": ["dirIPoftheVM"]}' | sudo tee /etc/docker/daemon.json
sudo service docker restart
docker push :32000/vnf-image:registry
In the other machine, I execute:
docker run dirIPoftheMV:32000/vnf-image:registry
and it returns the following error:
docker: Error response from daemon: Get "https://dirIPoftheVM:32000/v2/": http: server gave HTTP response to HTTPS client
How can I solve this?
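Not from the thread itself, but that error normally means the Docker client on the second VM is speaking HTTPS to a registry that only serves plain HTTP. A likely fix is to repeat the daemon.json step on the machine that runs the image, including the registry port (the IP below is a placeholder):
echo '{"insecure-registries": ["dirIPoftheVM:32000"]}' | sudo tee /etc/docker/daemon.json
sudo service docker restart
docker run dirIPoftheVM:32000/vnf-image:registry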

docker service inside a LXC container: unable to apply RC_ULIMIT settings

I have a Debian hypervisor on which I run an LXC Alpine 3.14 container. In the Alpine container, I would like to install a Docker service. Alpine provides a docker package, but starting the docker service raises this error:
$ sudo service docker start
sh: error setting limit: Operation not permitted
* docker: unable to apply RC_ULIMIT settings
* Starting Docker Daemon ...
Is the problem on the hypervisor or the container? How can I solve this?
As the FAQ mentions, I had to enable container nesting:
lxc config set <container> security.nesting true
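For example, with a container named alpine-docker (the name is illustrative), something like:
lxc config set alpine-docker security.nesting true
lxc restart alpine-docker
# then, inside the container
sudo service docker start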

Cannot Pull Docker Images in Docker from Docker Hub

I am using Docker on Oracle Linux 7 (ol7). I have installed Docker successfully. But when I try to pull images from Docker Hub I get the error below.
[root@xxxxx ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
docker: error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/images/create?fromImage=hello-world&tag=latest: EOF.
See 'docker run --help'.
Docker Version I am using - Docker version 18.09.1-ol, build e32a1bd
Try passing the full official Docker registry URL; from the error it seems like it is looking at the host machine's Docker socket (docker.sock) or somewhere else, but not at the official registry.
docker run registry.hub.docker.com/library/hello-world
You can explore this and this to deal with the registry URL.

Packer Docker Builder with remote docker daemon

I'm using the Packer Docker builder with Ansible to create a Docker image (https://www.packer.io/docs/builders/docker.html)
I have a machine (client) which is meant to run build scripts. The Packer Docker builder is executed with Ansible from this machine. This machine has the Docker client, and it's connected to a remote Docker daemon. The environment variable DOCKER_HOST is set to point to the remote Docker host. I'm able to test the connectivity and things are working well.
Now the problem is, when I execute packer docker to build the image, it errors out saying:
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker612435850:/packer-files -d -i -t ubuntu:latest /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
==> docker: See 'docker run --help'.
It seems the Packer Docker builder is stuck looking at the local daemon.
Workaround: I renamed the docker binary and introduced a script called "docker" which sets DOCKER_HOST and invokes the original docker binary with the parameters passed on.
Is there a better way to deal with this?
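A minimal sketch of that wrapper, assuming the real binary was renamed to /usr/bin/docker-orig and with a placeholder remote address:
#!/bin/sh
# installed as "docker" on PATH; points the client at the remote daemon
# and forwards all arguments to the real binary
export DOCKER_HOST=tcp://remote-docker-host:2376
exec /usr/bin/docker-orig "$@"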
Packer's Docker builder doesn't work with remote hosts, since Packer uses the /packer-files volume mount to communicate with the container. This is vaguely expressed in the docs with:
The Docker builder must run on a machine that has Docker installed.
And explained in Overriding the host directory.

docker: not found when using docker command using Docker Jenkins container

Jenkins is running in a Docker container.
Docker is running on macOS, so I commented out these lines in jenkins.yml:
# mount docker sock and binary for docker in docker (only works on linux)
#- /var/run/docker.sock:/var/run/docker.sock
#- /usr/bin/docker:/usr/bin/docker
The Jenkinsfile, which is generated by JHipster, includes two tasks in the pipeline:
Perform the build in a Docker container
Analyze code with Sonar
node {
    stage('checkout') {
        checkout scm
    }
    docker.image('openjdk:8').inside('-u root -e MAVEN_OPTS="-Duser.home=./"') {
        stage('check java') {
            sh "java -version"
        }
    }
}
Checkout from Bitbucket was successful, but the pipeline stopped and got an error at docker pull openjdk:8. The console output is:
[AAAAApp] Running shell script
+ docker inspect -f . openjdk:8
/var/jenkins_home/workspace/GeneticsDB#tmp/durable-21459aca/script.sh:
2: /var/jenkins_home/workspace/GeneticsDB#tmp/durable-21459aca/script.sh: docker: not found
[Pipeline] sh
[AAAAApp] Running shell script
+ docker pull openjdk:8
/var/jenkins_home/workspace/GeneticsDB#tmp/durable-d5590370/script.sh:
2: /var/jenkins_home/workspace/GeneticsDB#tmp/durable-d5590370/script.sh: docker: not found
But this command can be run successfully from the command line, like below:
docker pull openjdk:8
8: Pulling from library/openjdk
Digest: sha256:18c9622a8dc67b608a2dd0178b4c5aebc0e2da9a656072c6e799cfc46cb96422
Status: Image is up to date for openjdk:8
I know there is a similar question: Docker not found when building docker image using Docker Jenkins container pipeline
But my Docker is running on macOS.
The problem actually is how to run Docker inside a container running on Docker for Mac. It is fixed by running
brew install docker
and updating jenkins.yml to
# mount docker sock and binary for docker in docker
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
Then I got an error:
Warning: failed to get default registry endpoint from daemon (Got
permission denied while trying to connect to the Docker daemon socket
at unix:///var/run/docker.sock: Get
http://%2Fvar%2Frun%2Fdocker.sock/v1.35/info: dial unix
/var/run/docker.sock: connect: permission denied). Using system
default: https://index.docker.io/v1/
Got permission denied while trying to connect to the Docker daemon
socket at unix:///var/run/docker.sock: Post
http://%2Fvar%2Frun%2Fdocker.sock/v1.35/images/create?
fromImage=openjdk&tag=8: dial unix /var/run/docker.sock: connect:
permission denied
Solution: update the access permissions of /var/run/docker.sock in the Docker container.
Find the Jenkins container: docker container ps -a
Log in to the container: docker exec -it -u root ec379335d599 /bin/bash
Update permissions: chmod 777 /var/run/docker.sock
If your Jenkins is running inside of a Docker container, then I'd recommend:
installing docker inside that container
mounting the docker socket so it can run docker commands from inside the container
dynamically adjusting group permissions of the jenkins user in an entrypoint.sh of the jenkins container, so you don't need to change permissions of the docker socket or try to match the host group to the container group
The last part I do with an entrypoint that runs as root, runs a groupmod to adjust the gid of the user's group, and then drops permissions to that user with an exec + gosu which replaces pid 1 with the jenkins server running as the jenkins user. All the code needed to do this is up in the following git repo: https://github.com/sudo-bmitch/jenkins-docker
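A rough sketch of such an entrypoint (the group/user names, the gid lookup, and the gosu call are assumptions based on the description above; the linked repo has the real implementation):
#!/bin/sh
set -e
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    # align the container's docker group with the gid that owns the mounted socket
    SOCK_GID=$(stat -c '%g' "$SOCK")
    groupmod -g "$SOCK_GID" docker
    usermod -aG docker jenkins
fi
# replace pid 1 with Jenkins running as the jenkins user
exec gosu jenkins "$@"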
