docker COPY is not copying the files

FROM nginx:alpine
EXPOSE 80
COPY . /usr/share/nginx/html
I am trying to run an Angular app with the following Docker configuration. It works, but I can't see the files/directories that were supposed to be copied to /usr/share/nginx/html, which is super confusing. The directory only contains the default index.html that nginx created.
Does it store the files in memory or something? They are not on disk, yet it serves my website properly.
Build:
docker build -t appname .
Run:
docker run -d -p 80:80 appname

It seems like the COPY destination path is not a path on the server's disk but a path inside the Docker image, which explains why I can't see my files on the server disk.
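One way to confirm this is to list the directory inside the running container instead of on the host; a quick check, assuming the container ID from docker ps:

docker ps
docker exec -it <container-id> ls /usr/share/nginx/html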

Related

I can't see my HTML when I build a container with Docker

I have my container running like this, in the directory C:\Users\hailey\Desktop\GitTest where my project files are.
# getting base image nginx
FROM nginx
MAINTAINER hailey
COPY . /usr/share/nginx/html
This is my Dockerfile, and I want to serve my HTML file, which is located in C:\Users\hailey\Desktop\GitTest.
When I access http://127.0.0.1:8080/
I see only the default nginx welcome page, which is not helloWorld.html.
You can copy helloWorld.html and have it replace index.html:
COPY helloWorld.html /usr/share/nginx/html/index.html
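With that in place, a build-and-run sketch (the hello-nginx tag is a hypothetical name; the -p 8080:80 mapping matches the http://127.0.0.1:8080/ URL above, since nginx listens on port 80 inside the container):

docker build -t hello-nginx .
docker run -d -p 8080:80 hello-nginx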

Does docker compose create port mappings automatically?

I created a simple asp.net core app in Visual Studio 2019 and added docker support.
Dockerfile, .dockerignore, and docker-compose file are all created.
In a command prompt I navigate to the folder where the docker-compose.yml file is present and then run the command
docker-compose up
I see that the containers are created and port mappings happen so that I can browse the web app in the browser.
So when I run the following inspect command on the container
docker inspect --format="{{ .NetworkSettings.Ports}}" ContainerId
I get something like this
map[80/tcp:[{0.0.0.0 32782}]]
So now I can browse the app with http://localhost:32782/index.html
Next if I tear down the containers with
docker-compose down
the containers are stopped and deleted. The created image remains.
Now when I do a docker run against that image to start a container
docker run -it --rm ae39
a new container is created, but I am not able to browse the app because there is no port mapping from the container to the host. I have to specify the mapping explicitly when I use the run command; only then am I able to browse the app running inside the container from the host.
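For example, something like:

docker run -it --rm -p 8080:80 ae39

(the host port 8080 here is arbitrary; the container listens on 80 per the Dockerfile's EXPOSE).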
But when I use docker compose I don't have to specify the port mapping. Something magical happens and the port mappings are created for me. Note that the docker-compose.yml file is plain vanilla and does not contain any port mappings, and neither does the Dockerfile. They are included below for reference.
My question is: does docker compose automatically create port mappings? If so, how? Or does it have something to do with Visual Studio 2019?
version: '3.4'
services:
  generator31:
    image: ${DOCKER_REGISTRY-}generator31
    build:
      context: .
      dockerfile: generator31/Dockerfile
And the dockerfile is here.
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["generator31/generator31.csproj", "generator31/"]
RUN dotnet restore "generator31/generator31.csproj"
COPY . .
WORKDIR "/src/generator31"
RUN dotnet build "generator31.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "generator31.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "generator31.dll"]
I think I found the answer.
The key is the config command
docker-compose config
After I run that command, I see the following.
services:
  generator31:
    build:
      context: D:\Trials\Docker\aspnetcore311\generator31
      dockerfile: generator31/Dockerfile
    environment:
      ASPNETCORE_ENVIRONMENT: Development
    image: generator31
    ports:
    - target: 80
version: '3.4'
I see ports, and also environment. In the docker-compose file I pasted in the question, they are not present. So where did they come from?
I opened the folder in File Explorer, and there was a docker-compose.override.yml file sitting quietly next to docker-compose.yml. Opening that file gave me the answer:
ports is defined there. The docker-compose command picks up configuration from multiple files, and docker-compose.override.yml is one of them. This SO answer should also help, as Fabian-Desnoes suggested.
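For reference, the relevant parts of the override file can be reconstructed from the config output above; it looks something like this (a sketch; the exact contents vary by Visual Studio version):

version: '3.4'

services:
  generator31:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"

A bare "80" under ports publishes container port 80 to a random available host port, which is why the app showed up on 32782.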
The EXPOSE instruction in your Dockerfile only documents that the container listens on that port; it doesn't publish anything by itself. It is a hint to whatever platform runs the image, saying "could you expose this port for me," and it relies on that platform to do the actual mapping. Here, the Visual Studio-generated override file lists the exposed port under ports without a host port, so docker-compose publishes it to a random open host port.
If you want the port mapped to a specific host port every time, add an explicit entry under ports in the service in your docker-compose.yml (host_machine_port:container_port).
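A minimal sketch of that, assuming you want the app pinned to host port 8080:

services:
  generator31:
    ports:
      - "8080:80"

With this in docker-compose.override.yml (or merged into docker-compose.yml), docker-compose up always publishes the app at http://localhost:8080/.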

Docker volumes not mounting/linking

I'm using Docker Desktop for Windows. I am trying to use docker-compose to run a build container, one that builds my code so that the output ends up in my local build folder. The build processes are definitely succeeding; when I exec into my container, the files are there. However, nothing happens with my local folder: no build folder is created.
docker-compose.yml
version: '3'
services:
  front_end_build:
    image: webapp-build
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    volumes:
      - "./build:/srv/build"
Dockerfile
FROM node:8.10.0-alpine
EXPOSE 5000
# add files from local to container
ADD . /srv
# navigate to the directory
WORKDIR /srv
# install dependencies
RUN npm install --pure-lockfile --silent
# build code (to-do: get this code somewhere where we can use it)
RUN npm run build
# install 'serve' and launch server.
# note: this is just to keep container running
# (so we can exec into it and check for the files).
# once we know that everything is working, we should delete this.
CMD npx serve -s -l tcp://0.0.0.0:5000 build
I also tried removing the final line that serves the folder. Then I actually did get a build folder, but that folder was empty.
UPDATE:
I've also tried a multi-stage build:
FROM node:12.13.0-alpine AS builder
WORKDIR /app
COPY . .
RUN yarn
RUN yarn run build
FROM node:12.13.0-alpine
RUN yarn global add serve
WORKDIR /app
COPY --from=builder /app/build .
CMD ["serve", "-p", "80", "-s", "."]
When my volumes aren't set (or are set to, say, some nonexistent source directory like ./build:/nonexistent), the app is served correctly, and I get an empty build folder on my local machine (empty because the source folder doesn't exist).
However when I set my volumes to - "./build:/app" (the correct source for the built files), I not only wind up with an empty build folder on my local machine, the app folder in the container is also empty!
It appears that what's happening is something like
1. Container is built, which builds the files in the builder.
2. Files are copied from builder to second container.
3. Volumes are linked, and then because my local build folder is empty, its linked folder on the container also becomes empty!
I've tried resetting my shared drives credentials, to no avail.
How do I do this?!?!
I believe you are misunderstanding how host volumes work. The volume definition:
./build:/srv/build
in the compose file will mount ./build from the host at /srv/build inside the container. This happens at run time, not during your image build, so after the Dockerfile instructions have been performed. Nothing from the image is copied out to the host, and no files in the directory being mounted on top of will be visible (this is standard behavior of the Linux mount command).
If you need files copied back out of the container to the host, there are various options.
You can perform your steps to populate the build folder as part of the container running. This is common for development. To do this, your CMD likely becomes a script of several commands to run, with the last step being an exec to run your app (see the sketch after this list).
You can switch to a named volume. Docker will initialize these with the contents of the image. It's even possible to create a named bind mount to a folder on your host, which is almost the same as a host mount. There's an example of a named bind mount in my presentation here.
Your container entrypoint can copy the files to the host mount on startup. This is commonly seen on images that will run in unknown situations, e.g. the Jenkins image does this. I also do this in my save/load volume scripts in my example base image.
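A sketch of the first option, applied to the compose file and Dockerfile from the question (entrypoint.sh is a hypothetical name for a script you would add yourself):

#!/bin/sh
# entrypoint.sh: by the time this runs, ./build from the host is already
# mounted at /srv/build, so the build output lands on the host as well
npm run build
exec npx serve -s -l tcp://0.0.0.0:5000 build

and in the Dockerfile, replace the npm run build and serve lines with:

COPY entrypoint.sh /entrypoint.sh
CMD ["sh", "/entrypoint.sh"]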
tl;dr: Volumes aren't mounted during the build stage, only while running a container. You can copy the data to your local disk with something like docker run --rm -v "$(pwd)/build:/srv/build" <image id> cp -R /app/. /srv/build (the -v flag must come before the image name, and docker run requires an absolute host path).
While Docker is building the image it does all actions in ephemeral containers: each command in your Dockerfile runs in a separate container, each producing a layer that eventually becomes the final image.
The result is that the data flow during the build is unidirectional; you cannot mount a volume from the host into the container. When you run a build you will see Sending build context to Docker daemon, because your local Docker CLI is sending the context (the path you specified after docker build, usually . for the current directory) to the Docker daemon (the process that actually does the work). One key point to remember is that the Docker CLI (docker) doesn't do any work itself; it just sends commands to the Docker daemon (dockerd). The build stages shouldn't change anything on your local system; the container is designed to encapsulate the changes into the container image only, giving you a snapshot of the build that you can reuse consistently, knowing that the contents are the same.

docker-compose named volume copy contents on initial start

I may be a little confused about how volumes work; I keep reading the same things over and over, and it seems like this should be working. I want the contents of a folder inside the container to be copied out when the volume is initialized for the first time.
I have a Dockerfile like this:
https://github.com/docker-library/tomcat/blob/f6dc3671bf56465917b52c8df4356fa8f0ebafcd/7/jre7/Dockerfile
And just before
EXPOSE 8080
CMD ["catalina.sh", "run"]
I add something like this:
Tomcat Dockerfile
VOLUME ["/opt/tomcat/conf"]
EXPOSE 8080
CMD ["catalina.sh", "run"]
When I build this image, I tag it as tomcat.
Then I have another Dockerfile that sets a bunch of environment variables and adds a script, like so:
MyApp Dockerfile
FROM tomcat
ENV SOME_VAR=Test1
COPY assets/script.sh /script.sh
The second image builds from the first image and just adds a script and sets some settings. So far so good.
I want to do something like this in my docker-compose.yml file:
Docker Compose file
website:
  image: myapp
  ports:
    - "8000:8080"
  volumes:
    - /srv/myapp/conf:/opt/tomcat/conf
I want the contents of /opt/tomcat/conf to be copied into /srv/myapp/conf when that folder is first created. Everything I read suggests that this should work, but it just creates the folder and doesn't copy the contents. Am I missing something here?
Basically I have this issue:
https://github.com/moby/moby/issues/18670
Oh and my docker-compose yaml file is using version 2.1 if that makes a difference.
What you are looking for is not possible when you bind-mount a host directory into the container. It only works with a named volume: then Docker copies the contents of the container folder into the volume on first use. You need to change your compose file to:
version: '3'
services:
  website:
    image: myapp
    ports:
      - "8000:8080"
    volumes:
      - appconfig:/opt/tomcat/conf
volumes:
  appconfig: {}
If you want to get the config out onto the host, you can use a shell script together with your original compose file:
#!/bin/bash
if [ ! -d "/srv/myapp/conf" ]; then
  mkdir -p /srv/myapp
  docker create --name myappconfig myapp
  docker cp myappconfig:/opt/tomcat/conf /srv/myapp/
  docker rm myappconfig
fi
docker-compose up -d
For this to work, the /srv/myapp/conf directory must not already exist the first time you run the script.
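To verify what actually ended up in the named volume, one option is a throwaway container:

docker run --rm -v appconfig:/conf alpine ls -la /conf

Note that docker-compose prefixes volume names with the project name (e.g. myproject_appconfig); docker volume ls shows the exact name.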

Docker: copy files to and mount the same folder

I am working with the 'tomcat:7.0.75-jre8-alpine' base image and want to deploy my web application along with its configuration files. Below is what I am doing in the Dockerfile:
......
COPY <my-app-configurations> /org/app/data
COPY <my-app-configurations> /org/app/conf
......
CMD ["catalina.sh", "run"]
And I am using below command to create a container from above image:
$ docker run -p 8080:8080 -v "/c/Users/jaffy/app:/org/app" myapp-image
The '/c/Users/jaffy/app' folder is initially empty, and I want it to receive all the contents of '/org/app' and stay in sync.
Initially, all configurations are copied into the '/org/app' folder, but when '/c/Users/jaffy/app' is mounted, '/org/app' gets cleaned/emptied.
How can I solve this so that the host folder starts out empty but afterwards reflects the exact state of the container's '/org/app' folder and its sub-directories?
Thanks a lot in advance.
There is no way to keep the container's folder contents when you mount/share a volume over that folder, because the mount replaces the folder rather than merging with it.
If you don't need to replace all the files in the folder, you can mount just the files that need to be changed, like:
$ docker run -p 8080:8080 -v "/c/Users/jaffy/app/web.xml:/org/app/web.xml" my-app-image
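If you really do need the whole folder on the host, another pattern (the entrypoint-copy idea from the volume answer above) is to bake the files into a staging path in the image and seed the mount at startup. A minimal sketch; /org/app-dist and entrypoint.sh are hypothetical names:

FROM tomcat:7.0.75-jre8-alpine
COPY <my-app-configurations> /org/app-dist/
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["sh", "/entrypoint.sh"]
CMD ["catalina.sh", "run"]

#!/bin/sh
# entrypoint.sh: seed the mounted folder only when it is empty,
# then hand off to the original command (catalina.sh run)
if [ -z "$(ls -A /org/app 2>/dev/null)" ]; then
  mkdir -p /org/app
  cp -R /org/app-dist/. /org/app/
fi
exec "$@"

The first docker run fills /c/Users/jaffy/app from the image; later runs leave existing files alone.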
