Docker: How to use the "docker/compose" container?

I want to run Docker Compose inside a Docker container using the official docker/compose container.
My Dockerfile looks like this:
FROM docker/compose:latest
WORKDIR /
COPY ./docker-compose.yml .
COPY ./.env .
CMD [ "docker-compose", "up"]
Running docker build -t my-container . works. But running docker run --privileged my-container fails with:
> Couldn't connect to Docker daemon at http+docker://localhost - is it running?
>
> If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
What am I doing wrong? Do I have to specify DOCKER_HOST, and if yes, to what?

I believe the docker/compose image only packages the docker-compose binary; it does not ship a Docker daemon, so it cannot launch containers on its own.
It is mainly a convenient way to run docker-compose without installing it, pointed at an existing daemon.
Alternatively, you could use the dind image (Docker-in-Docker), which does include a daemon.

Try installing docker-compose within dind (docker-in-docker), like so:
FROM docker:20.10-dind
RUN apk add docker-compose
WORKDIR /
COPY ./docker-compose.yml .
COPY ./.env .
CMD [ "docker-compose", "up"]
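If you go the dind route, note that Docker-in-Docker needs extended privileges to start its own daemon, so the run command from the question (with --privileged) is the right shape. A minimal sketch, reusing the image name from the question:

```shell
# Build the dind-based image, then run it with extended privileges
# so the inner Docker daemon can start.
docker build -t my-container .
docker run --privileged my-container
```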

Alternatively, share the host's Docker socket with the container:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it container_name
This way docker-compose inside the container talks to the Docker daemon on the host.
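Putting it together with the Dockerfile from the question (the image name my-container is taken from there), a sketch of the full workflow:

```shell
# Build the image from the question's Dockerfile.
docker build -t my-container .

# Mount the host's Docker socket so docker-compose inside the
# container drives the host daemon; --privileged is not needed here.
docker run -v /var/run/docker.sock:/var/run/docker.sock my-container
```

Note that containers started this way are siblings of my-container on the host, not children running inside it.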

Related

Folder created in Dockerfile not visible after running container

I am using Docker and have a docker-compose.yml and a Dockerfile. In my Dockerfile, I create a folder. When I build the container, I can see that the folder is created, but when I run the container, I can see all the files, but the folder that I created during the container build is not visible.
Both of these files are in the same location.
Here is the docker-compose.yml:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: tail -f /dev/null # command to keep the container running
Here is my Dockerfile
FROM alpine
WORKDIR /app
COPY . /app
RUN mkdir "test"
RUN ls
To build the container I use: docker-compose build --progress=plain --no-cache. The RUN ls step in the Dockerfile shows three entries: Dockerfile, docker-compose.yml, and the test directory created in the Dockerfile.
But when the container is running and I enter it to check which files are there, the test directory is missing.
I probably have volumes mounted incorrectly: when I remove the volumes section, I do see the test folder after entering the container. Unfortunately,
I want the files in the container and on the host to stay in sync.
This is a simplified example; the same thing happens with dependencies installed for Node.js. I think the question written this way will be more helpful to others.
When you mount ./ as a volume in docker-compose, the host folder is mounted over /app in the running container, so the test folder created at build time is hidden by the mounted folder.
One workaround is to create the folder at container start from the compose file (though I'm not sure it fits your use case):
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    # sh -c is needed so that && is interpreted by a shell
    command: sh -c "mkdir -p /app/test && tail -f /dev/null"
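The shadowing behaviour can be seen without compose at all: a bind mount hides whatever the image already has at that path. A quick illustration (image and paths here are only for demonstration):

```shell
# Whatever the alpine image had at /app is hidden once a host
# directory is bind-mounted over it; ls shows the host's files.
docker run --rm -v "$PWD":/app alpine ls /app
```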
Based on your comment, below is an example of how I would use a Dockerfile to build the Node packages and copy node_modules back to the host:
Dockerfile
FROM node:latest
WORKDIR /app
COPY . /app
RUN npm install
Shell script:
#!/bin/bash
docker build -t my-image .
container_id=$(docker run -d my-image)
# Copy the node_modules built inside the image back to the host
docker cp "$container_id":/app/node_modules .
docker stop "$container_id"
docker rm "$container_id"
Or, a simpler way I often use: run the image interactively with the project directory mounted, and work inside it:
docker run --rm -it -p 80:80 -v $(pwd):/home/app node:14.19-alpine
Then open a shell in the running container (e.g. with docker exec), run the npm commands, and exit; the results land in the mounted directory on the host.

How to access docker volume files from code in a docker container

I have created a docker volume with this command:
docker run -ti --rm -v TestVolume1:/testvolume1 ubuntu
Then I created a file there called TestFile.txt and added text to it.
I also have a simple "Hello world" .NET Core app with this Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ ShareFileTestInstance1/
WORKDIR /ShareFileTestInstance1
ENTRYPOINT ["dotnet", "ShareFileTestInstance1.dll"]
I published it using
dotnet publish -c Release
then ran
docker build -t counter-image -f Dockerfile .
And finally executed
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
to run my app with a docker volume.
What I want to achieve is to access a file that is in a volume (TestFile.txt in my case) from code in the container,
for example:
Console.WriteLine(File.Exists("WHAT FILE PATH HAS TO BE HERE") ? "File exists." : "File does not exist.");
Is it also possible to combine all this stuff in a Dockerfile? I want to add one more container next and connect to the volume to save data there.
The parameters for docker run can be either for docker or for the program running in the docker container. Parameters for docker go before the image name and parameters for the program in the container go after the image name.
The volume mapping is a parameter for docker, so it should go before the image name. So instead of
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
you should do
docker run -it --rm --name=counter-container -v TestVolume1:/testvolume1 counter-image
When you do that, your file should be accessible for your program at /testvolume1/TestFile.txt.
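You can also check from the host that the file is where the program will look for it (volume name and file name taken from the question):

```shell
# List the named volume's contents via a throwaway container
docker run --rm -v TestVolume1:/testvolume1 ubuntu ls /testvolume1

# Print the file the app will read
docker run --rm -v TestVolume1:/testvolume1 ubuntu cat /testvolume1/TestFile.txt
```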
It's not possible to do the mapping in the Dockerfile as you ask. Mappings may vary from docker host to docker host, so they need to be specified at run-time.

How to get --add-host parameter working for a docker build?

I'm building a simple docker image based on a Dockerfile, and I'd like to add an alias to the hosts file to allow me to access an application on my local machine rather than out on the internet.
When I run the following...
> docker build --add-host=example.com:172.17.0.1 -f ./Dockerfile -t my-image .
> docker run --name=my-container --network=my-bridge --publish 8080:8080 my-image
> docker exec -it my-container cat /etc/hosts
I don't see example.com 172.17.0.1 like I'd expect. Where does the host get added? Or is it not working? The documentation is very sparse, but it looks like I'm using the param correctly.
My Dockerfile is doing very little - just specifying a base image, installing a few things, and setting some environment variables. It looks somewhat like this:
FROM tomcat:9.0.40-jdk8-adoptopenjdk-openj9
RUN apt update
RUN apt install --assume-yes iputils-ping
# ... a few more installs ...
COPY ./conf /usr/local/tomcat/conf
COPY ./lib /usr/local/tomcat/lib
COPY ./webapps /usr/local/tomcat/webapps
ENV SOME_VAR="some value"
# ... more env variables ...
EXPOSE 8080
When the image is created and the container is run my web app works fine, but I'd like to have certain communications (to example.com) redirected to an app running on my local machine.
When you run the container you can pass --add-host there:
docker run --add-host=example.com:172.17.0.1 --name=my-container --network=my-bridge --publish 8080:8080 my-image
The --add-host flag on docker build only affects name resolution during the build itself; it is not persisted in the image.
See also this question about the docker build --add-host command.
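To see what the build-time flag actually does, you can resolve the host inside a RUN step; the entry shows up in /etc/hosts of the build containers only. A small sketch (the Dockerfile here is purely illustrative):

```shell
# During the build, example.com resolves to 172.17.0.1 inside RUN steps,
# but nothing is written into the final image.
docker build --add-host=example.com:172.17.0.1 - <<'EOF'
FROM alpine
RUN getent hosts example.com
EOF
```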

Copy with Dockerfile

I'm trying to run an Angular app inside Docker with Nginx:
$ ls
dist Dockerfile
Dockerfile:
FROM nginx
COPY ./dist/statistic-ui /usr/share/nginx/html/
All the app files are inside dist/statistic-ui/.
But the COPY command doesn't seem to work: Nginx starts with the default welcome page, and when I check the files inside /usr/share/nginx/html/ there are only the default Nginx files.
Why doesn't the COPY command work, and how do I fix it?
UPDATE
I run the container with:
sudo docker run -d --name ui -p 8082:80 nginx
Your docker run command starts the stock nginx image, not the image you built. You need to build an image from your Dockerfile and then run a container from that image:
docker build -t angularapp .
docker run -d --name ui -p 8082:80 angularapp
Make sure you include the trailing dot at the end of the docker build command.

asp.net-core 2.0 Docker deploy

I have an app developed in ASP.NET Core 2.0 and deployed on Linux with Docker.
I created a Docker image and ran it on the Linux server like this:
docker run -p 80:80 --name my-container my-image:1.0
So the container my-container was created from the image my-image:1.0.
Now the issue: when I make changes to my app and want to deploy them, I have to stop and remove my-container, then create a new one from a new Docker image:
docker stop my-container
docker rm my-container
docker run -p 80:80 --name my-container my-image:1.1
Is there any way to just update the container with the new image? Point is to use existing container with the new version of the image.
Is there any way to just update the container with the new image?
No. But this is not what you need, since you said that your goal is the following one:
Now the issue is when I make some changes to my app and want to deploy that changes I have to stop/remove my-container and create a new one from new Docker image
Your Dockerfile probably looks something like this:
FROM microsoft/aspnetcore
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myapp.dll"]
So you just need a volume that maps your app directory on the host into the container's workdir /app (use the -v parameter with docker run). Then simply restart the container after applying changes to your app.
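A sketch of that workflow, reusing the names from the question (the ./publish host directory is an assumption for illustration):

```shell
# Bind-mount the published app from the host over the container's /app
docker run -d -p 80:80 --name my-container -v "$PWD/publish":/app my-image:1.0

# After copying a new build into ./publish on the host, just restart:
docker restart my-container
```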
