I am trying to connect a Docker container to a resource on the host through a Unix socket.
For this purpose I added the following command to my Dockerfile:
RUN -v /var/run/docker.sock:/var/run/postgresql/.s.PGSQL.5432
But at the docker build stage I get: /bin/sh: illegal option -
ERROR: Service 'webapp' failed to build: The command '/bin/sh -c -v /var/run/docker.sock:/var/run/postgresql/.s.PGSQL.5432' returned a non-zero code: 2
What am I doing wrong?
The RUN instruction in a Dockerfile is used to run a command inside the container during the build, not to run your container. Refer to its documentation.
To make a Unix socket available inside your container, specify a bind mount when you start the container:
docker run -v /var/run/postgresql/.s.PGSQL.5432:/var/run/postgresql/.s.PGSQL.5432 yourapp:latest
This will make the /var/run/postgresql/.s.PGSQL.5432 socket on your host available inside the container.
You may also specify different host and container paths:
docker run -v /var/run/postgresql/.s.PGSQL.5432:/tmp/postgresql yourapp:latest
Note: /var/run/docker.sock is used to communicate with the Docker daemon. Be careful with it, since access to the Docker daemon provides root access to your machine.
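If the application image happens to have a PostgreSQL client installed and lets you override its command (an assumption, it is not mentioned in the question), you could sanity-check the mounted socket like this. Mounting the whole socket directory rather than the single socket file is often more robust, because PostgreSQL recreates the socket file when it restarts:
docker run --rm -it \
  -v /var/run/postgresql:/var/run/postgresql \
  yourapp:latest \
  psql -h /var/run/postgresql -U postgres -c 'SELECT 1'
# psql treats a directory given to -h as the Unix-socket directory; the user name here is an assumption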
I'm quite new to Docker. I'm running on Windows 10 Enterprise and am trying to containerize an existing app that runs on Windows (so it's a Windows container). I don't know if this matters, but the container is rather large (8 GB).
I need to share a config file (that lives on the host) with the container that the app will use when starting. I was thinking that a bind volume was simplest.
Problem: On running the image I get docker: Error response from daemon: invalid volume specification: '<source path>:<target path>'
Container was built with this command:
docker build -t my_image .
Here is the Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY . .
ENTRYPOINT .\application.exe ..\Resources
Here is what I've tried
docker run -it -v c:/Users/my_user:/app my_image
I've tried every combination of C:/, C:\, C:\\, /c/, //c/, \c\, \\c\, etc.
I've tried multiple combinations of /app, //app, \app, \\app, C:\app, etc.
I've also tried with and without :rw appended to the end
I've tried the --mount syntax, which consistently outputs: docker: Error response from daemon: invalid mount config for type "bind": invalid mount path: '/app'. (I tried a bunch of variations of /app here too.)
I've tried every possible combination (except the right one). Please help!
Since you are using a Windows container, your file path will be different. Try the command below, from the docs on Persistent Storage in Windows Containers:
docker run -it -v c:\Users\my_user:c:\app my_image
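One more thing to keep in mind: bind-mounting a single file is generally not supported for Windows containers, so the usual pattern is to mount the folder that holds the config file. A sketch under that assumption (the host folder and container target below are hypothetical names, not taken from the question):
docker run -it -v C:\Users\my_user\config:C:\config my_image
The app would then read its config file from C:\config inside the container.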
If you are using PowerShell and trying to run Docker with the docker run command, you can try this approach. It worked for me in Windows PowerShell (VS Code PowerShell):
docker run -v ${pwd}\src:/app/src -d -p 3000:3000 --name react-app-c2 react-app-image
Here react-app-c2 is the container name and react-app-image is the image name.
-v is for the volume and ${pwd} expands to the current working directory.
/app/src is the container directory.
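If the -v string becomes hard to read, the same bind mount can be expressed with the --mount syntax (same container and image names as above; quoting the value helps PowerShell pass it through unchanged):
docker run -d -p 3000:3000 --name react-app-c2 --mount "type=bind,source=${pwd}\src,target=/app/src" react-app-image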
For 2 days I have been trying to run Docker inside an Ubuntu container:
docker run -it ubuntu bash
I installed Docker following the instructions at https://docs.docker.com/engine/install/ubuntu/ and/or https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
Finally I got Docker installed:
root@e65411d2b70a:/# docker -v
Docker version 19.03.6, build 369ce74a3c
But when I try to run docker run hello-world, I have a problem:
root@5ac21097b6f6:/# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
Docker is not in the service list:
root@5ac21097b6f6:/# service docker start
docker: unrecognized service
root@5ac21097b6f6:/# service --status-all
[ - ] apparmor
[ + ] cgroupfs-mount
[ - ] dbus
[ ? ] hwclock.sh
[ - ] procps
[ ? ] ubuntu-fan
When I try to run dockerd:
root@5ac21097b6f6:/# dockerd
INFO[2020-04-23T07:01:11.622627006Z] Starting up
INFO[2020-04-23T07:01:11.624389266Z] libcontainerd: started new containerd process pid=154
INFO[2020-04-23T07:01:11.624460438Z] parsed scheme: "unix" module=grpc
INFO[2020-04-23T07:01:11.624477203Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-04-23T07:01:11.624532871Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2020-04-23T07:01:11.624560679Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-04-23T07:01:11.664827037Z] starting containerd revision= version="1.3.3-0ubuntu1~18.04.2"
ERRO[2020-04-23T07:01:11.664943052Z] failed to change OOM score to -500 error="write /proc/154/oom_score_adj: permission denied"
...
INFO[2020-04-23T07:01:11.816951247Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
I don't understand why I get Permission denied if the user is root.
I installed sudo and added root to the sudo group, but it does not help:
apt-get install sudo
usermod -a -G sudo root
sudo dockerd has the same problem.
How can I make Docker work inside an Ubuntu container? Do you have any ideas?
P.S. I know about docker-in-docker; I need Docker specifically inside an Ubuntu container.
P.P.S. I know about -v /var/run/docker.sock:/var/run/docker.sock, but I need an independent Docker service inside the Ubuntu container.
When running docker in docker, the container must use the docker engine on your host.
Here is a simple working setup:
1) Create a Dockerfile with the docker CLI installed. I am using the official Compose image, so you also get docker-compose:
FROM docker/compose:1.25.5
WORKDIR /app
ENTRYPOINT ["/bin/sh"]
2) When running it, mount the Docker socket:
$ docker build -t dind .
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock dind
From within the container, you now have docker. Try running docker ps.
If you want to do docker in docker without -v /var/run/docker.sock:/var/run/docker.sock then I am afraid that there is no good way to do this.
Sharing the docker socket from host is the classic way to make docker containers run within another docker container.
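One caveat worth remembering with this setup: containers you start from inside are not nested, they are siblings running directly on the host engine, and any -v paths you pass refer to the host filesystem, not to the dind container's filesystem. A quick check from inside the container:
docker ps                                   # lists the host's containers, including this one
docker run --rm alpine echo "running as a sibling on the host engine"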
I was trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far most people have advised me to do things like using Docker's DinD image, which is not applicable in my case since I need the main container to be Ubuntu OS, or to run some privileged command and map the daemon socket into the container, like -v /var/run/docker.sock:/var/run/docker.sock
(which never worked for me on any Ubuntu OS I tried, the reason being that the main container based on Ubuntu OS does not come with systemd, which is important for running Docker containers conveniently like on a usual local machine).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It is also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for such applications. It also has the most flexible options.
The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers, and they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally SSH into your Ubuntu main container without being able to access anything on the main machine. From your main container you can create all kinds of containers, just like a normal local system does. That systemd is very important for setting up Docker conveniently inside the container.
One simple common command to run a container with the sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple sysbox runtime environment container:
https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
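For completeness, a minimal sketch of the Nestybox workflow, assuming sysbox is already installed on the host; the image name below is one of Nestybox's published Ubuntu-with-systemd-and-Docker images and is an assumption, not something taken from the question:
# start a system container that runs systemd and its own, independent dockerd
docker run --runtime=sysbox-runc -d --name ubuntu-dind nestybox/ubuntu-focal-systemd-docker
# open a shell inside it; the inner Docker engine is local to the container, no host socket is shared
docker exec -it ubuntu-dind bash
# then, inside that shell:
docker run hello-world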
I am looking for a reliable method for detecting whether the Docker socket inside a container, such as /var/run/docker.sock, is injected from the Docker host into the container using the -v parameter (docker run -v /var/run/docker.sock:/var/run/docker.sock image-name:image-tag) or is created by a Docker daemon running inside the container (e.g. running dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 in the container's entrypoint).
I want to underline that I want to check this inside the container not on the host machine.
My preferred solution would be a shell script that can be executed inside the container. The output might be a string such as "docker-in-docker" or "injected-socket" (depending on the detected mode).
I found the following solution for checking whether a volume is mounted inside a container: Docker - check within the container if a directory is mounted from the host or not
The following script works for alpine and debian containers:
#!/bin/sh
# Detect whether /var/run/docker.sock was bind-mounted from the host (it then
# appears in the mount table) or was created by a daemon inside this container.
v=$(mount | grep "/run/docker.sock")
if [ -n "$v" ]; then
  echo "injected-socket"
elif [ -S /var/run/docker.sock ]; then
  echo "local-socket"
else
  echo "no-socket"
  exit 1
fi
Additional references:
Check for empty command output https://stackoverflow.com/a/37618542/4202031
Recognize the existence of a socket file https://stackoverflow.com/a/12137503/4202031
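A hypothetical way to exercise the script from the host (the file name check-socket.sh and the alpine image are assumptions): bind-mount both the Docker socket and the script into a throwaway container and run it.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD/check-socket.sh:/check-socket.sh:ro" \
  alpine sh /check-socket.sh
# prints "injected-socket", because the bind-mounted socket shows up in the mount table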
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine and the entrypoint command executes with user node (non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a docker container, a permission denied error is thrown: the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run a docker container.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide me with a workaround to run Docker from a Docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command to the start-up script of the host machine solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure if this would be a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
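For reference, a less permissive sketch along the lines of the linked answer: instead of opening the socket to everyone with chmod 666, add the container's node user to the socket's owning group by GID. The 412 below is the GID reported in the question; on your own host you could check it with something like stat -c '%g' /var/run/docker.sock.
docker run \
  -v /tmp/box:/tmp/box \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add 412 \
  ifaisalalam/ide-taskmaster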
I'm using the Packer Docker builder with Ansible to create a Docker image (https://www.packer.io/docs/builders/docker.html).
I have a machine (client) which is meant to run build scripts. The Packer Docker builder is executed with Ansible from this machine. This machine has the Docker client and is connected to a remote Docker daemon; the environment variable DOCKER_HOST is set to point to the remote Docker host. I'm able to test the connectivity and things are working well.
Now the problem is, when I execute the Packer Docker build to create the image, it errors out saying:
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker612435850:/packer-files -d -i -t ubuntu:latest /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
==> docker: See 'docker run --help'.
It seems the Packer Docker builder is stuck looking at the local daemon.
Workaround: I renamed the docker binary and introduced a script called "docker" which sets DOCKER_HOST and invokes the original docker binary with the parameters passed on.
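A minimal sketch of that wrapper (the install path, the renamed binary name, and the daemon address are all assumptions):
#!/bin/sh
# hypothetical wrapper saved as /usr/local/bin/docker; the real binary was renamed to docker-real
export DOCKER_HOST=tcp://remote-docker-host:2375   # assumed address of the remote daemon
exec /usr/local/bin/docker-real "$@"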
Is there a better way to deal with this?
Packer's Docker builder doesn't work with remote hosts, since Packer uses the /packer-files volume mount to communicate with the container. This is vaguely expressed in the docs with:
The Docker builder must run on a machine that has Docker installed.
And explained in Overriding the host directory.