How to ADD a sibling directory to a Docker image

Is there a way to copy a sibling directory into my Docker image, i.e., something like
ADD ../sibling_directory /usr/local/src/web/
This is not permitted: according to the Docker documentation, everything a Dockerfile references must live inside the build context (the directory you pass to docker build, typically the one containing the Dockerfile).
In my scenario, I am in the process of splitting out worker services from web services from a common code base, and I'd like to do that logically first without having to actually physically separate the code.

Here's what a potential fig.yml might look like:
web:
  build: ./web/
  volumes:
    - /usr/local/src/web/
worker:
  build: ./worker/
  volumes_from:
    - web
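One common workaround, sketched here (not part of the original question; fig itself did not support these keys, but modern docker-compose does): keep one Dockerfile per service, and point the build context at the directory that contains both services and the sibling directory:
web:
  build:
    context: .
    dockerfile: web/Dockerfile
worker:
  build:
    context: .
    dockerfile: worker/Dockerfile
With the context at the top level, ADD sibling_directory /usr/local/src/web/ works from either Dockerfile, because ADD paths are resolved relative to the context, not the Dockerfile's own directory.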

Related

How to create a docker service without project name prefix in its name?

I have just started learning Docker to build microservices. I am trying to understand and follow the eShopOnContainers app as my reference application to understand all the concepts. For testing, I created two ASP.NET Web API services and a docker-compose.yml file to test whether I can run them. I am able to run the services, but one thing I have noticed is that the service names are not very tidy: they contain the folder name as a prefix. For example, here is part of my docker-compose.yml file:
services:
  orders-api:
    image: orders-api
    build:
      context: .
      dockerfile: src/Services/Orders/Dockerfile
    ports:
      - "1122:80"
I am expecting that when this service runs, it should be named orders-api, but instead its name becomes microservicestest_orders-api_1 (MicroservicesTest is the folder name of my project). I was trying to find a way around it, but it seems like this is a limitation of Docker itself. The only thing I don't understand is that when I run the sample app of eShopOnContainers, their services have readable names without any prefixes. How are they able to generate more readable service names?
Can you please tell me what I am missing here?
docker-compose automatically adds a prefix to your containers' names.
By default, the prefix is the basename of the directory from which you ran docker-compose up (plus an instance suffix such as _1).
You can change it with docker-compose --project-name MY_PROJECT_NAME, but that is not what you want here.
You can pin a container's name explicitly with container_name:.
services:
  orders-api:
    image: orders-api
    build:
      context: .
      dockerfile: src/Services/Orders/Dockerfile
    container_name: orders-api
    ports:
      - "1122:80"

Entry point of Docker container depends on the local file system and not on the image

I am working on a docker container that is being created from a generic image. The entry point of this container is dependent on a file in the local file system and not in the generic image. My docker-compose file looks something like this:
service_name:
  image: base_generic_image
  container_name: container_name
  entrypoint:
    - "/user/dlc/bin"
    - "-p"
    - "localFolder/fileName.ext"
    - more parameters
The challenge I am facing is removing this dependency and adding the file to the base_generic_image at run time so that I can deploy it independently. Should I add this file to the base generic image and then proceed (this file is not required by others), or should this be done when creating the container? If so, what is the best way of going about it?
You should create a separate image for each part of your application. These can be based on the base image if you'd like; the Dockerfile might look like
FROM base_generic_image
COPY dlc /usr/bin
CMD ["dlc"]
Your Docker Compose setup might have a subdirectory for each component and could look like
servicename:
  image: my/servicename
  build:
    context: ./servicename
  command: ["dlc", "-p", ...]
In general Docker volumes and bind-mounts are good things to use for persistent data (when absolutely required; stateless containers with external databases are often easier to manage), getting log files out of containers, and pushing complex configuration into containers. The actual program that's being run generally should be built into the base image. The goal is that you can take the image and docker run it on a clean system without any of your development environment on it.
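To make that distinction concrete, here is a hedged compose sketch (the service, volume, and path names are invented for illustration): persistent data lives in a named volume, configuration is pushed in with a bind mount, and the program itself is baked into the image:
version: "2"
services:
  app:
    image: my/servicename                # the code ships inside the image
    volumes:
      - appdata:/var/lib/app             # persistent data in a named volume
      - ./config/app.conf:/etc/app.conf  # complex configuration pushed in
volumes:
  appdata: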

Allowing multiple services in docker-compose to share a merged volume

Given a docker-compose.yml file like the one below, I'm looking for a way for both services a and b to have access to a shared volume consisting of the merged contents of both containers.
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    volumes:
      - shared-merged-volume:/shared
  b:
    volumes:
      - shared-merged-volume:/shared
Let's say service a has a directory at /shared/dir-from-a and service b has a similar /shared/dir-from-b directory. The desired result is to end up with:
$ ls /shared # from either container
dir-from-a
dir-from-b
What I find is that one of the containers "wins" and only one of those two directories is ever present. I can work around the issue like this, but it is more verbose and requires modification whenever the directory contents change:
version: '3'
volumes:
  service-a-shared-volume:
  service-b-shared-volume:
services:
  a:
    volumes:
      - service-a-shared-volume:/shared/dir-from-a
      - service-b-shared-volume:/shared/dir-from-b
  b:
    volumes:
      - service-a-shared-volume:/shared/dir-from-a
      - service-b-shared-volume:/shared/dir-from-b
Thanks in advance for any help!
Is using a named volume a requirement?
If not, then to accomplish such merging I usually just bind-mount one location on the host drive instead of using named volumes, and it merges with no problems. Tested under heavy load with multiple containers writing simultaneously.
proposed compose file (the named volume from the question is no longer needed, since both services now bind-mount the same host directory):
version: '3'
services:
  a:
    volumes:
      - /location/on/host/system:/shared
  b:
    volumes:
      - /location/on/host/system:/shared
Edit from comments
This method mounts whatever is in the host directory onto /shared, so if the host directory is empty, an empty directory is mounted and it hides whatever the image had at that path. Everything written inside the mount after your services start is persisted and merged across services as expected.
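Unlike named volumes, a host bind mount is never pre-populated from the image, so if a container image ships content under /shared you may want to seed the host directory once before the first docker-compose up. A hedged sketch (image-for-service-a is an invented image name):
# one-time seeding of the host directory from the image's /shared
docker create --name seed-a image-for-service-a
docker cp seed-a:/shared/. /location/on/host/system/
docker rm seed-a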
If both containers create different folders, I don't see how they can be contending to create their own respective folders, unless they both delete the contents of /shared first and then create the folders? But that would mean the use of volumes in this case is pointless, because the contents would be deleted every time a container starts.
In any case, I find that it is often useful to persuade the containers to share the same folder by use of path redirection. I will share two ways of accomplishing this:
If you have access to the code that creates the folders in /shared, then you can use environment variables to change the expected location of /shared for each service
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    environment:
      SHARED_VOLUME_PATH: /shared/a/
    volumes:
      - shared-merged-volume:/shared
  b:
    environment:
      SHARED_VOLUME_PATH: /shared/b/
    volumes:
      - shared-merged-volume:/shared
You may need to have the services create SHARED_VOLUME_PATH at startup (a sketch follows), but now they can both live peaceably with each other.
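For example, a minimal entrypoint wrapper could create the directory before handing off to the real process; /usr/local/bin/app is an invented stand-in for your service's actual command:
#!/bin/sh
# entrypoint.sh: ensure this service's private path exists, then start
set -e
mkdir -p "${SHARED_VOLUME_PATH:?SHARED_VOLUME_PATH must be set}"
exec /usr/local/bin/app "$@"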
If you are unable to change the location of /shared, which means each service will always want to use that path, another way to create path redirection is to use symbolic links. For this to work, you will have to override the entrypoint of your services or do this step during the build process of the image.
version: '3'
volumes:
  shared-merged-volume:
services:
  a:
    # overriding entrypoint replaces the image's own entrypoint and ignores
    # its CMD, so chain the service's real command after creating the link
    # (your-real-command is a placeholder for whatever the image normally runs)
    entrypoint: ["sh", "-c", "ln -sf /symshared/a/ /shared/ && exec your-real-command"]
    volumes:
      - shared-merged-volume:/symshared
  b:
    entrypoint: ["sh", "-c", "ln -sf /symshared/b/ /shared/ && exec your-real-command"]
    volumes:
      - shared-merged-volume:/symshared
Alternatively, build the images ahead of time, and add a simple RUN command in the Dockerfile which creates this symbolic link:
...
ARG SHARED_VOLUME_PATH
RUN ln -sf ${SHARED_VOLUME_PATH} /shared/
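The build argument is then supplied per image; for example (a sketch, with an invented tag name):
docker build --build-arg SHARED_VOLUME_PATH=/symshared/a/ -t service-a .
In Compose you can pass the same value through build: args: instead.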
What this allows you to do is that each container keeps using /shared as it always did, but you are still able to store its contents in the volume without interfering with what other containers want to do with their own version of /shared.
Needless to say, the ln command only works on Linux and other Unix-likes, and in some cases you may need to install it first. If your container image is based on something else, Windows for example, then find an equivalent tool for creating symlinks.

How to write variables in docker-compose when running multiple containers that use the same image but different ports

I want to use docker-compose to maintain the containers of a cluster of API servers.
They are all built from the same image. I know docker-compose scale app=5 will start 5 containers, but they are all identical, including their port settings.
I want to run multiple containers like this:
services:
  # wx_service_cluster
  wx_service_51011:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51011:8080"
  wx_service_51012:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "51012:8080"
  wx_service_...:
    ....
There are almost 100 services that would need to be written out this way. Can anyone help me make this simpler, e.g. with something like a shell loop that generates the entries:
for each_port in $(seq 51011 51040); do
cat <<EOF
  wx_service_${each_port}:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/go/src/wx_service
    ports:
      - "${each_port}:8080"
EOF
done
Simple answer to your actual question: use environment variables, and probably combine them with dotenv (https://docs.docker.com/compose/environment-variables/):
services:
  foo_${instance1}:
    ports:
      - "${instance1}:8080"
  foo_${instance2}:
    ports:
      - "${instance2}:8080"
but this will not help you "generate a docker-compose file with X service entries for wx" .. you seem to be planning some kind of "hosting".
Alternatives:
You should step back and instead use random port assignment, then use docker inspect to find the port - see an example here: https://github.com/EugenMayer/docker-sync/blob/master/lib/docker-sync/sync_strategy/unison.rb#L199 .. So basically you either use a template system to generate your docker-compose.yml file, e.g. https://github.com/markround/tiller, generate services with a static prefix like wx_service_, and later use a different script (for your nginx / haproxy) to configure an upstream for each of them, finding the name and port dynamically (using inspect).
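As a sketch of the random-port approach (the container name is invented): publish the container port without fixing a host port, then ask Docker which one was assigned:
# in the compose file, publish 8080 to a random host port:
#   ports:
#     - "8080"
docker port wx_service_1 8080   # prints e.g. 0.0.0.0:32768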
If I am right and you really are going for some kind of hosting scenario, and you are doing it commercially, you might even rethink this and add consul to the game: let every wx service register itself in consul, then have an additional proxy such as nginx / haproxy reconfigure itself, adding a backend+frontend / upstream+server entry for each service, using tiller + consul watch.
That last option is next-level stuff, but if you do this commercially you should not do what you initially asked for - nevertheless, if you choose to, use dotenv as outlined above.

Docker compose volumes_from not updating after rebuild

Imagine two containers: a webserver (1) hosting static HTML files that need to be built from templates inside a data volume container (2).
docker-compose.yml file looks something like this:
version: "2"
services:
webserver:
build: ./web
ports:
- "80:80"
volumes_from:
- templates
templates:
build: ./templates
The Dockerfile for the templates service looks like this
FROM ruby:2.3
# ... there is more, but it should not be important
WORKDIR /tmp
COPY ./Gemfile /tmp/Gemfile
RUN bundle install
COPY ./source /tmp/source
RUN bundle exec middleman build --clean
VOLUME /tmp/build
When I run docker-compose up everything works as expected: the templates are built, the webserver hosts them, and you can view them in the browser.
The problem is that when I update ./source and restart/rebuild the setup, the files the webserver hosts are still the old ones, although the log shows that the container was rebuilt - at least the last three layers after COPY ./source /tmp/source. So the changes inside the source folder are picked up by the rebuild, but I'm not able to get the changes to show up in the browser.
What am I doing wrong?
Compose preserves volumes when containers are recreated, which is probably why you are seeing the old files.
Generally it is not a good idea to use volumes for source code (or in this case static html files). Volumes are for data you want to persist, like data in a database. Source code changes with each version of the image, so doesn't really belong in a volume.
Instead of using a data volume container for these files, you can use a builder container to compile them and a webserver service to host them. You'll need to add a COPY to the webserver Dockerfile to include the files.
To accomplish this you would change your docker-compose.yml to this:
version: "2"
services:
  webserver:
    image: myapp:latest
    ports: ["80:80"]
Now you just need to build myapp:latest. You could write a script which:
- builds the builder container
- runs the builder container
- builds the myapp container
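A minimal sketch of such a script, assuming the built files land in /tmp/build as in the templates Dockerfile (myapp-builder is an invented image name):
#!/bin/sh
set -e

# 1. build the builder image that compiles the templates
docker build -t myapp-builder ./templates

# 2. run it and copy the generated files out of the container
docker create --name templates-build myapp-builder
docker cp templates-build:/tmp/build ./web/build
docker rm templates-build

# 3. build the webserver image; its Dockerfile COPYs ./build in
docker build -t myapp:latest ./web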
You can also use a tool like dobi instead of writing a script (disclaimer: I am the author of this tool). There is an example of building a minimal docker image which is very similar to what you're trying to do.
Your dobi.yaml might look something like this:
image=builder:
  image: myapp-dev
  context: ./templates

job=templates:
  use: builder

image=webserver:
  image: myapp
  tags: [latest]
  context: .
  depends: [templates]

compose=serve:
  files: [docker-compose.yml]
  depends: [webserver]
Now if you run dobi serve it will do all the steps for you. Each step will only be run if files have changed.
