Docker Volume is not working for deployments - docker

I am following the Lynda Docker tutorials and working through things related to the docker-compose file.
This is my docker-compose.yml file.
more docker-compose.yml
version: '3'
services:
  web:
    image: jboss/wildfly
    volumes:
      - ~/deployments:/opt/jboss/wildfly/standalone/deployments
    ports:
      - 8080:8080
As per the author, I am trying to copy the webapp.war file to the deployments/ folder, but it gives me an error. It looks like the volume mapping for the container is not working.
cp /home/user/Demos/docker-for-java/chapter2/webapp.war deployments/
cp: cannot create regular file ‘deployments/’: Not a directory
docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------
helloweb_web_1 /opt/jboss/wildfly/bin/sta ... Up 0.0.0.0:8080->8080/tcp

I think you might be misinterpreting the tutorial. I haven't seen the tutorial itself, but checking the documentation for the WildFly Docker image here, there's a mention that you need to extend the base image and add your war file inside:
To do this you just need to extend the jboss/wildfly image by creating a new one. Place your application inside the deployments/ directory with the ADD command (but make sure to include the trailing slash on the deployment folder path, more info). You can also do the changes to the configuration (if any) as additional steps (RUN command).
This means that you need to create a Dockerfile with approximately these contents (change your-awesome-app.war to the path to your war file):
FROM jboss/wildfly
ADD your-awesome-app.war /opt/jboss/wildfly/standalone/deployments/
After that you need to change your docker-compose.yml to build from your Dockerfile instead of using jboss/wildfly (note the use of build: . instead of image: jboss/wildfly):
version: '3'
services:
  web:
    build: .
    ports:
      - 8080:8080
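With the Dockerfile and docker-compose.yml sitting next to the war file, a rebuild-and-run cycle would look roughly like this (a sketch; the paths are assumed from the question):

docker-compose build      # rebuilds the image whenever webapp.war changes
docker-compose up -d      # starts WildFly with the war baked into the image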
Try that and comment if you run into any issues.

Related

How to share prepared files on build stage between containers with docker compose

I have 2 services: nginx and web
When I build the web image, I build the frontend via the command npm install && npm run build.
But I need the prepared files in both containers: in web and in nginx.
How can I share files between containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY directive can copy files from an arbitrary image. While it's most commonly used in multi-stage builds, you can use it with any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: [8000:80]
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
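For completeness, the web image's own Dockerfile might look roughly like this; the Node base image, the npm scripts, and the /app/static output path are assumptions based on the question, not something the answer above specifies:

# Dockerfile (web service) -- illustrative sketch only
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build        # assumed to write the built assets into /app/static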
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web
docker-compose build
docker-compose up -d
You can use this in a more production-oriented setup to deploy this application without having the code directly available. You can create a base docker-compose.yml that names an image: for both containers, and then add a separate docker-compose.override.yml file that has the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images, and then run this container stack on your production system, pulling the images from the registry, without a local copy of the source tree and without volumes.
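A minimal sketch of that split might look like this (file names follow Compose's defaults; my/web is carried over from above, while my/nginx is an assumed name since the answer doesn't pick one):

# docker-compose.yml (used everywhere, including production)
version: '3.8'
services:
  web:
    image: my/web
  nginx:
    image: my/nginx
    ports: [8000:80]

# docker-compose.override.yml (only present on the build machine)
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx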

Push image to another registry with volume copy

I am running an image in a docker container locally with the following commands
docker pull locustio/locust
and my docker-compose.yml looks as below, which I run with docker-compose up:
version: '3'
services:
  locust-service:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py -H http://master:8089
My volume contains locustfile.py, which has all the code to test my system. Now I need to push and deploy this image to another, private repository along with the volume, that is, the locustfile.py file.
How can I do that with docker-compose push? Or is there any other way I can copy the volume? docker-compose push for the above compose file doesn't seem to work.
Volumes are generally intended to hold data, not application code. You should build your code into a derived Docker image, which then can be pushed.
You can write what you show here into a basic Dockerfile:
FROM locustio/locust
COPY locustfile.py /mnt/locust
# CMD must be a JSON array if it's passing additional options to an ENTRYPOINT
CMD ["-f", "/mnt/locust/locustfile.py", "-H", "http://master:8089"]
Then your docker-compose.yml file only needs to specify to build and run it, but not duplicate any of these options:
version: '3.8'
services:
  locust-service:
    build: .
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"
Then docker-compose build && docker-compose push would build and push the image. On the target host you'd need to copy this docker-compose.yml file but remove the build: line.
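On the target host, the stripped-down file might look like this (a sketch; the image name is the one chosen above):

version: '3.8'
services:
  locust-service:
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"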
Glancing at the Locust documentation, this is similar to what its "Use docker image as a base image" section suggests. You may also find it more flexible to set options via environment variables rather than command-line arguments, which would let you split options between the Dockerfile and the docker-compose.yml runtime configuration.
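As a rough sketch of that idea (the LOCUST_* variable names come from Locust's configuration documentation; check them against your Locust version — if you go this route you could drop the -f/-H options from the Dockerfile's CMD):

version: '3.8'
services:
  locust-service:
    build: .
    image: my-docker-hub-name/locust
    ports:
      - "8089:8089"
    environment:
      - LOCUST_LOCUSTFILE=/mnt/locust/locustfile.py
      - LOCUST_HOST=http://master:8089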
Only Docker images can be pushed.
Volumes are only created when you run the image, i.e., when a container is created with its volumes, as explained in the official documentation: https://docs.docker.com/storage/volumes/
Here is the example from the official documentation:
docker run -d \
  --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html \
  nginx:latest

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read: very) new to Docker, so I am experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful as I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
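One caveat, based on how bind-type named volumes generally behave (not something the answer above calls out): the device path has to exist on the host before you bring the stack up, otherwise volume creation fails. Something like:

mkdir -p /Volumes/docker/docker_test_app   # create the host directory first
docker-compose up -d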
The container specified exists in my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it about two weeks ago to help someone else on Stack Overflow.
(Screenshot: files from the container, visible on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container (/var/www/site) with the contents of the path on your host (./site), which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same commands from the step above docker exec -it ... and see whether the staged test.txt file is now inside the container. This gives you definitive evidence that when you use volumes, the data on your host overwrites the data in the container.
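A rough sketch of that test, using the paths and container name from the question (assuming the container stays up long enough to exec into):

mkdir -p ./site && touch ./site/test.txt          # pre-stage a marker file on the host
docker-compose up -d
docker exec -it my_laravel ls -a /var/www/site    # test.txt should show up here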
With that being said, doing something like this and sharing a log directory will work. The volume path specified on the container is still overwritten; the difference is that the container writes to that path, it doesn't rely on it for config files/app files.
Hope this helps.

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose in a dockerized project directory there is a docker-compose.yml file:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) in order to save time and avoid 500-character-long command lines (and also to start multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service will need an image to be built. The equivalent docker build command would look like:
docker build -t myproject_web -f ./web/Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a more convenient way to use docker than the plain docker command when you have more than one container.
If the question here is whether the docker-compose build command will build a single zip-like bundle containing multiple images, which otherwise would have been built separately with the usual Dockerfiles, then the thinking is wrong.
docker-compose build will build the individual images by going through each service entry in docker-compose.yml.
With the docker images command, we can see all the individual images listed as well.
The real magic is docker-compose up.
This one will basically create a network of interconnected containers that can talk to each other using the service name as a hostname.
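For example, with the compose file above, the web container can reach the database simply by addressing it as database; a hypothetical check from inside the web container (the mysql client is illustrative and may not be installed in that image):

docker-compose exec web sh          # open a shell in the web container
ping -c 1 database                  # "database" resolves to the MariaDB container
mysql -h database -u root -p        # hypothetical client call; adjust credentials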
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called 'web' in the docker-compose example below, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise, the image name that docker uses will be the concatenation of the folder (directory) name and the service name, e.g. myprojectdir_web.
So it is better to explicitly set the desired image name that will be used when docker-compose build is executed.
e.g.
image: mywebserviceimage
container_name: my-webServiceImage-Container
Example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: mywebserviceimage
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers.
with docker build, the option is --cache-from <image>
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a cache of layers (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, so that docker-compose uses the native docker builder (therefore, images made by docker build can be used as a cache of layers for docker-compose build).
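A quick sketch of how that is typically enabled, with the variables set inline for a single invocation (pairing it with DOCKER_BUILDKIT is my addition rather than something stated above):

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build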

docker-compose named volume copy contents on initial start

I may be a little confused about how volumes work; I keep reading the same things over and over, and it seems to me this should be working. I want the contents of a folder inside the container to be copied over when the volume gets initialized for the first time.
I have a Dockerfile like this:
https://github.com/docker-library/tomcat/blob/f6dc3671bf56465917b52c8df4356fa8f0ebafcd/7/jre7/Dockerfile
And before
EXPOSE 8080
CMD ["catalina.sh", "run"]
I have something like
Tomcat Dockerfile
VOLUME ["/opt/tomcat/conf"]
EXPOSE 8080
CMD ["catalina.sh", "run"]
When I build this image, I tag it as tomcat.
Then I have another Dockerfile with a bunch of environment variables that I set and a script.
Like so:
MyApp Dockerfile
FROM tomcat
ENV SOME_VAR=Test1
COPY assets/script.sh /script.sh
The second image builds from the first image and just adds a script and sets some settings. So far so good.
I want to do something like this in my docker-compose.yml file:
Docker Compose file
website:
  image: myapp
  ports:
    - "8000:8080"
  volumes:
    - /srv/myapp/conf:/opt/tomcat/conf
I want the contents of /opt/tomcat/conf to be copied into /srv/myapp/conf when that folder first gets created. Everything I read suggests that this should work, but it just creates the folder and doesn't copy the contents. Am I missing something here?
Basically I have this issue:
https://github.com/moby/moby/issues/18670
Oh, and my docker-compose YAML file is using version 2.1, if that makes a difference.
What you are looking for is not possible when you are bind-mounting a host directory inside the container. It only works with a named volume: then Docker will copy the content of the image's folder into the volume on first use. You need to change your compose file to:
version: '3'
services:
  website:
    image: myapp
    ports:
      - "8000:8080"
    volumes:
      - appconfig:/opt/tomcat/conf
volumes:
  appconfig: {}
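The first time the stack comes up, Docker pre-populates the appconfig volume with whatever the image has at /opt/tomcat/conf. A rough way to confirm that (the full volume name is prefixed with your compose project directory name; myproject below is an assumed placeholder):

docker-compose up -d
docker volume ls                                                  # look for <projectdir>_appconfig
docker run --rm -v myproject_appconfig:/check alpine ls /check    # hypothetical prefixed name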
If you want to get the config out, then you can use a shell script together with your original compose file:
#!/bin/bash
if [ ! -d "/srv/myapp/conf" ]; then
  mkdir -p /srv/myapp/conf
  docker create --name myappconfig myapp
  docker cp myappconfig:/opt/tomcat/conf /srv/myapp/
  docker rm myappconfig
fi
docker-compose up -d
For this to work, the directory should not exist the first time you run the script.
