Docker Desktop install in container fails on /etc/hosts - what to do? - docker

I am trying to install Docker Desktop inside a container.
wget https://desktop.docker.com/linux/main/amd64/docker-desktop-4.16.2-amd64.deb
apt-get install -y ./docker-desktop-4.16.2-amd64.deb
Everything goes fine until the post-install phase, where it tries to update the /etc/hosts file for Kubernetes. Here it fails:
/var/lib/dpkg/info/docker-desktop.postinst: line 42: /etc/hosts: Read-only file system
This is expected behaviour for docker build, which does not allow modifying the container's /etc/hosts.
Is there a way to solve this? Install docker desktop without doing this step? Or any other way?

I solved this issue by adding this parameter to the build:
--add-host kubernetes.docker.internal:127.0.0.1
Example:
docker build --add-host kubernetes.docker.internal:127.0.0.1 -t stsjava2 .
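If the same host entry is also needed when the image is run, not just during the build, docker run accepts the same flag; using the stsjava2 image from the example above:
docker run --add-host kubernetes.docker.internal:127.0.0.1 stsjava2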

When the Docker Desktop installation fails with an error related to "/etc/hosts", it is usually due to a conflict with the host system's configuration. Here are some steps that you can try to resolve the issue:
Check the permissions of the "/etc/hosts" file on your host system to ensure that it is accessible to Docker.
Try to start the Docker container with elevated privileges (e.g., using "sudo") to see if that resolves the issue.
If the above steps do not resolve the issue, you can try modifying the Docker container's network configuration to use a different network driver that does not conflict with the host system's "/etc/hosts" file.
You can also try running the Docker container in a different environment (e.g., a virtual machine) that does not have the same conflicts with the host system.
If all else fails, you can try reinstalling Docker or using a different version of Docker to see if that resolves the issue.
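For context on why the build fails at that step: at container run time /etc/hosts is normally a writable bind-mounted file, whereas during docker build (at least with BuildKit) it is mounted read-only, which is why the --add-host build flag above is the usual way to get an entry into it. A quick illustration, assuming any small image such as ubuntu is available:
docker run --rm ubuntu sh -c 'echo "127.0.0.1 kubernetes.docker.internal" >> /etc/hosts && tail -n 1 /etc/hosts'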

Related

Jenkins Docker plugin volume/mount what syntax to use

I have a Linux VM on which I installed Docker. I have several Docker containers with the different programs I have to use. Here's my architecture:
Everything is working fine except for the red box.
What I am trying to do is to dynamically provision a Jenkins Docker-in-Docker agent with the cloud functionality in order to build my Docker images and push them to the Docker registry I set up.
I have been looking for documentation to create a docker in docker container and I found this:
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
This article states that in order to avoid problems with my main docker installation I have to create a volume:
-v /var/run/docker.sock:/var/run/docker.sock
I tested my image locally and I have no problem running:
docker run -d --name test -v /var/run/docker.sock:/var/run/docker.sock <my-image>
docker exec -it test /bin/bash
docker run hello-world
The container is using the Linux VM's Docker installation to build and run the Docker images, so everything is fine.
However, I face problems when it comes to the Jenkins Docker cloud configuration.
From what I gather, since the #826 build, the Jenkins Docker plugin has changed its syntax for volumes.
This is the configuration I tried:
And the error message I have when trying to launch the agent:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"create
/var/run/docker.sock: \"/var/run/docker.sock\" includes invalid characters for a local
volume name, only \"[a-zA-Z0-9][a-zA-Z0-9_.-]\" are allowed. If you intended to pass a
host directory, use absolute path"}
I also tried that configuration:
Reason: Template provisioning failed.
com.github.dockerjava.api.exception.BadRequestException: {"message":"invalid mount config for type \"volume\": invalid mount path: './var/run/docker.sock' mount path must be absolute"}
I do not get what that means, as on my Linux VM the docker.sock absolute path is /var/run/docker.sock, and it is the same path inside the Docker-in-Docker container I ran locally...
I tried to check the source code to find what I did wrong, but it's unclear to me what the code is doing (https://github.com/jenkinsci/docker-plugin/blob/master/src/main/java/com/nirima/jenkins/plugins/docker/DockerTemplateBase.java, from row 884 onward). I also tried with backslashes, etc. Nothing worked.
Has anyone any idea what is the expected syntax in that configuration panel for setting up a simple volume?
Change the configuration to this:
type=bind,source=/var/run/docker.sock,destination=/var/run/docker.sock
It is not a volume; it is a bind mount.
This worked for me:
type=bind,source=/sys/fs/cgroup,target=/sys/fs/cgroup,readonly
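These strings follow the same format as the Docker CLI's --mount option, so the docker.sock example can be sanity-checked outside Jenkins first; docker:cli below is just one example of an image that ships the docker client:
docker run --rm --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock docker:cli docker ps
If the socket is mounted correctly, docker ps inside the container lists the containers of the host daemon, which is what the Jenkins agent needs.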

Getting error while running local docker registry

I am getting an error while running a local Docker registry on a CentOS system. I am explaining the error below.
docker: Error response from daemon: lstat /var/lib/docker/overlay2/3202584ed599bad99c7896e0363ac9bb80a0385910844ce13e9c5e8849494d07: no such file or directory.
I am setting up the local registry like below.
vi /etc/docker/daemon.json:
{ "insecure-registries":["ip:5000"] }
I have the registry image installed on my system and I am running it using the below command.
docker run -dit -p 5000:5000 --name registry bundle/tools:registry_3.0.0-521
I have cleaned all volumes as per some suggestions from Google but still have the same issue. Can anybody help me resolve this error?
The error is not related to the registry and is happening on the client side because of local caching (or some other Docker-related issue) on your system.
I've seen this error a lot in the docker community and the most suggested approach to solve this error is to clean up the whole /var/lib/docker directory.
On your local client system, if you don't care about your current containers, images, and caches, try stopping the docker daemon, removing the whole /var/lib/docker directory, and starting it again:
Note that sometimes it gets fixed just by restarting the daemon, so it is worth trying that first:
sudo service docker restart
If a simple restart can't solve the problem, go ahead and destroy it:
sudo service docker stop
sudo rm -rf /var/lib/docker
sudo service docker start
(I'm not sure whether these service commands will work on your CentOS too; see the systemd equivalents below.)
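On CentOS 7 and later, Docker is normally managed by systemd, so the equivalent sequence would most likely be:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker
(or just sudo systemctl restart docker for the restart-only attempt)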

Docker build CMD fails: yum not able to install the requirements

My docker build command is failing to create an image using my Dockerfile. It shows this error; here is the screenshot of the error:
Check if you can access the site on the host machine.
Check your Docker networking; for a Docker VM it is usually a bridge network by default (see the DNS sketch below).
Check if you need to add the repository to YUM.
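If the host can reach the repository but a container on the default bridge network cannot resolve its hostname, one common workaround is to give the Docker daemon explicit DNS servers. The resolvers below are only the usual public ones and an assumption about your setup, since the error screenshot is not shown here:
vi /etc/docker/daemon.json:
{ "dns": ["8.8.8.8", "8.8.4.4"] }
sudo systemctl restart docker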

Permission denied for docker-compose Superset

I am trying to get Superset running on an Ubuntu server. I have referenced the steps from the Superset page as below:
git clone https://github.com/apache/incubator-superset/
cd incubator-superset/contrib/docker
# prefix with SUPERSET_LOAD_EXAMPLES=yes to load examples:
docker-compose run --rm superset ./docker-init.sh
# you can run this command every time you need to start superset now:
docker-compose up
I have fixed the initial issues around the right version of docker-compose and the Postgres address bind issue on port 5432. So after fixing those, my docker-compose run command
docker-compose run --rm superset ./docker-init.sh
works fine and it asks me to set up a username and password.
Finally, to get the containers running, I run the final command
docker-compose up.
On my Mac, it would run the Redis and Postgres containers and then give me localhost:8088 to access the Superset UI with the login info.
However, on Ubuntu, when I run that, I first get this:
So it looks like it is running the Redis and Postgres containers fine.
But then it gives me Permission denied errors when it tries to mkdir a directory.
Please note I am running it as the root user.
Also, my docker-compose version is fine at 1.23.2, and my docker along with docker-compose is installed under
/usr/bin/docker and not /usr/local/bin/docker.
But I think that shouldn't be an issue.
Any help where it is going wrong and how can I fix it?
Thanks
Edit:
OK, I looked at the same issue mentioned on GitHub and used a suggestion of configuring it only for production and not development in the docker-compose.yml file.
It seems to not throw the same error now when I do
docker-compose up.
However, when I open localhost:8088 it does not connect to the UI.
Try this:
mkdir ../../superset/assets
chmod -R 777 ../../superset/assets/
As set in docker-compose.yml#L64, it uses ../../superset as a volume when in development. However, the container does not have the necessary permission on the host, so the solution is to create the directory yourself and grant the necessary permissions on it.
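To confirm the permissions are in place before retrying (same paths as above, run from incubator-superset/contrib/docker as in the question):
ls -ld ../../superset/assets   # should now show drwxrwxrwx
docker-compose up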

How to "start over" with Docker?

I am trying to run Tomcat in a Docker container with limited success. After I tried various things, I wanted to "reset" without completely deleting everything. I did stop and remove the virtual machine from the Virtualbox console. I then tried docker-machine create and docker-machine restart. My question is, if things reach a state in which the application appears to be hanging, what is the best procedure for starting from scratch that does not involve, for example, actually rebuilding the Docker container?
EDIT: All I am now asking is: given that "docker version" returns the Client information but, when it reaches the Server information, I get the "An error occurred trying to connect" message, what now needs to be done? What is it not connecting to? I tried "docker-machine restart" with apparent success, but got no further with "docker version" after that.
First, don't delete the boot2docker VM itself (created by docker-machine).
If you want to reset, you might have to delete the container and image (quickly rebuilt with a docker build), but you can stay in the same docker-based boot2docker VM. No need for deletion.
Retrying a docker container session simply involves killing/removing the current container and doing a new docker run.
Then, don't forget to check what is not working: does docker ps -a show your container running? Can you access Tomcat from the boot2docker Linux host? From your actual OS host?
Based on that diagnostic and the exact content of your Dockerfile, you will be able to debug the issue.
The main issue might come from the fact that docker commands are executed from outside the VM.
That works only if the environment variables printed by docker-machine env <machine-name> are set.
See docker-machine env:
For cmd.exe:
$ docker-machine.exe env --shell cmd dev
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.99.101:2376
set DOCKER_CERT_PATH=C:\Users\captain\.docker\machine\machines\dev
set DOCKER_MACHINE_NAME=dev
# Run this command to configure your shell: copy and paste the above values into your command prompt.
(replace "dev" by the name of your docker machine here, probably "default")
But it is also perfectly fine to run all docker commands from within the VM. No "env" to set.
Everything is on the VM (images, and the Dockerfile, which can be on the Windows host as well, as long as it is under C:\Users\<yourLogin>, since that folder is automatically mounted as /c/Users/<yourLogin>).
