Docker: run multiple instances of an image with different parameters

I am new to Docker, so this may be a fairly basic question.
I have a .NET Core 2 console application (built in Visual Studio) that can take some command-line parameters and provide different services. So in a normal command prompt I can run something like
c:>dotnet myapplication.dll 5000 .\mydb1.db
c:>dotnet myapplication.dll 5001 .\mydb2.db
which creates two instances of this application listening on ports 5000 and 5001.
I now want to create one Docker image for this application, run multiple instances of that image, and be able to pass these parameters on the command line to the docker run command. However, I am unable to see how to configure this in either the docker-compose.yml or the Dockerfile.
Dockerfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
# ignoring some of the code here
ENTRYPOINT ["dotnet", "myapplication.dll"]
docker-compose.yml
version: '3.4'
services:
  my.app:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    build:
      context: .
      dockerfile: dir/Dockerfile
I am trying to avoid creating multiple images, one per combination of command-line arguments. So is it possible to achieve what I am looking for?

Docker containers are started with an entrypoint and a command; when the container actually starts they are simply concatenated together. If the ENTRYPOINT in the Dockerfile is structured like a single command then the CMD in the Dockerfile or command: in the docker-compose.yml contains arguments to it.
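For example, with the ENTRYPOINT above, anything placed after the image name on a docker run command line becomes the command and is appended to the entrypoint (a quick sketch, assuming the image is tagged my/app):
docker run --rm my/app 5000 ./mydb1.db
# inside the container this effectively runs: dotnet myapplication.dll 5000 ./mydb1.db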
This means you should be able to set up your docker-compose.yml as:
services:
  my.app1:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5000:80
    command: ["80", "db1.db"]
  my.app2:
    image: ${DOCKER_REGISTRY}my/app
    ports:
      - 5001:80
    command: ["80", "db2.db"]
(As a side note: if one of the options to the program is the port to listen on, this needs to match the second port in the ports: specification, and in my example I've chosen to have both listen on the "normal" HTTP port and remap it on the host using the ports: setting. One container could reach the other, if it needed to, as http://my.app2/ on the default HTTP port.)
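If you were launching these with plain docker run instead of Compose, the rough equivalent would look like this (a sketch only; the image tag, container names, and the user-defined network are assumptions, and Compose normally creates such a network for you):
docker network create app-net
docker run -d --name app1 --network app-net -p 5000:80 my/app 80 db1.db
docker run -d --name app2 --network app-net -p 5001:80 my/app 80 db2.db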

Related

How to access a website running in a container when you're using network_mode: host

I have a tricky problem because I need to access a private DB in AWS. In order to connect to this DB, I first need to create an SSH tunnel like this:
ssh -L 127.0.0.1:LOCAL_PORT:DB_URL:PORT -N -J ACCOUNT@EMAIL.DOMAIN -i ~/KEY_LOCATION/KEY_NAME.pem PC_USER@PC_ADDRESS
Via 127.0.0.1:LOCAL_PORT I can then connect to the DB in my Java app. Let's say the port is 9991 for this case.
My Docker files more or less look like this:
docker-compose.yml
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Dockerfile
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "MY_JAVA_APP.jar"]
The image runs properly. However, if I try:
- using localhost:8080/MY_APP fails
- using 127.0.0.1/MY_APP fails
- getting the container's IP and using it later fails
- using host.docker.internal/MY_APP fails
I'm wondering how I can test my app. I know it's running because I get a successful message in the console and the new data was added to the DB, but I don't know how I can test it or access it. Any idea of the proper way to do it? Thanks.
P.S.:
I'm running my images in Docker Desktop for Windows.
I have another case using tomcat 9 and running CMD ["catalina.sh", "run"] and I know it's working because I get this message in the console:
INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [9905] milliseconds
But I cannot access it either.
I'm not really sure what the issue is based on the above information since I cannot replicate the system on my own machine.
However, these are some places to look:
- you might be running into an issue similar to this: https://github.com/docker/for-mac/issues/1031 because of the networking magic you are doing with SSH and the AWS DB
- you should try specifying either a build/Dockerfile or an image, and avoid specifying both:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01    # choose using an image
    build:                           # or building from a Dockerfile
      context: .                     # but not both
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Hope that helps 🤞🏻 and good luck 🍀
I guess you need to bind the port of your container.
Try adding the ports property to your docker-compose file:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    ports:
      - 8080:8080
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode

Access container_name in Dockerfile (from docker-compose)

I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I run docker-compose -f file1.yml -f file2.override.yml up I then get the containers; in the case above, the container will be named renamed-varnish-cache.
In the corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created I want to update the Varnish configuration inline inside the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
    .host = "webserver_container_name";
    .port = "8080";
}
To this (I anticipate I will have to replace the - with _ for the backend name):
backend renamed_varnish_cache {
    .host = "renamed-varnish-cache";
    .port = "8080";
}
Is there a way to receive the docker-compose named items as variables inside Dockerfile?
In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
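For example, with plain Docker commands (a sketch; the image tag and container names here are placeholders):
docker build -t my/app .             # build the image once
docker run -d --name app-1 my/app    # first container from that image
docker run -d --name app-2 my/app    # second container from the same image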
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can run docker-compose build to build the one image, and then docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names, so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of these things work for you, you can use an entrypoint script to rewrite the config file.
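A minimal sketch of such an entrypoint script, assuming a hypothetical BACKEND_HOST environment variable that you would set per container in docker-compose.yml:
#!/bin/sh
# docker-entrypoint.sh (sketch): rewrite the config when the container starts,
# then hand control to the main container command. BACKEND_HOST is a hypothetical variable.
sed -i "s|webserver_container_name|${BACKEND_HOST}|g" /etc/varnish/default.vcl
exec "$@"
You would COPY this script into the image, set it as the ENTRYPOINT, and keep the original server command as the CMD so that exec "$@" runs it.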

Networking in Docker Compose file

I am writing a docker compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services with one another in a compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

docker compose config for multiple instances of one image with different arguments

I have a cli application that can run two services based on the input argument.
1- app serve // to run a web server
2- app work // to run a long-running background worker
They share the same code. What do I need for deployment?
A: two separate containers or
B: two processes in the same container
And what would the docker-compose config be?
If you want to have one process per container, I would suggest having a generic image (and Dockerfile) which can run as either the worker or the server.
The Dockerfile should set the entrypoint to the app, e.g. ENTRYPOINT ["/path_to_my_app/myapp"], but not the CMD. The user can then start the worker with docker run IMAGENAME work or the server with docker run IMAGENAME serve.
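A minimal sketch of such a Dockerfile (the base image and the binary path here are assumptions, not part of the original question):
FROM debian:bullseye-slim
COPY myapp /path_to_my_app/myapp
ENTRYPOINT ["/path_to_my_app/myapp"]
# note: no CMD here -- the argument ("serve" or "work") is supplied at run time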
To define both services in a compose file, you need to override the command field for each service.
version: '3'
services:
  web:
    build: ./docker    # common Dockerfile
    image: IMAGENAME
    ports:
      - "8090:8090"
    command: ["serve"]
  worker:
    build: ./docker    # common Dockerfile
    image: IMAGENAME   # reuse image
    ports:
      - "8091:8091"
    command: ["work"]
The benefit of this solution over one with two separate images is a gain in maintainability. Since there is only one Dockerfile and one image, both services should always be compatible.
After some googling, I found that, as #sauerburger said in the comments, it is better to have one process per container.
But to have multiple containers, each running my main app with a specific argument (i.e. one for the main app and one for the worker), I need multiple Dockerfiles. Then in my docker-compose I can reference them separately.
But how do you have different Dockerfiles for one project?
The preferred solution is to have a docker directory in which each part has its own folder. For my application it looks like this:
- docker
  - web
    - Dockerfile
  - worker
    - Dockerfile
Then in each Dockerfile I have a common ENTRYPOINT and a distinct CMD:
- in the web Dockerfile:
  ENTRYPOINT ["/path_to_my_app/myapp"]
  CMD ["web"]
- in the worker Dockerfile:
  ENTRYPOINT ["/path_to_my_app/myapp"]
  CMD ["worker"]
After doing this, my docker-compose file will reference them like this:
version: '3'
services:
  web:
    # will build ./docker/web/Dockerfile
    build: ./docker/web
    ports:
      - "8090:8090"
  worker:
    # will build ./docker/worker/Dockerfile
    build: ./docker/worker
    ports:
      - "8091:8091"

Setting arguments in docker-compose file

Hi, I want to use the HAProxy exporter provided here (https://github.com/prometheus/haproxy_exporter) in a Docker container.
I am using docker-compose for managing containers and want to recreate this command:
$ docker run -p 9101:9101 prom/haproxy-exporter -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
in my docker-compose.yml.
I am not sure how to pass the argument; after viewing the docker-compose documentation I tried it like this:
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  build:
    args:
      - haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
But this gives me a "file is invalid" message because it requires a context with a build.
Thanks for any help in advance
The image is already built and pulled from the hub (unless you have your own Dockerfile) so you don't need the build option. Instead, pass your arguments as the command since the image appears to use an entrypoint (if their Dockerfile only had a cmd option, then you'd need to also pass /bin/haproxy-exporter in your command):
haproxy-exporter:
  image: prom/haproxy-exporter
  ports:
    - 9101:9101
  network_mode: "host"
  command: -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
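For comparison, if the image defined only a CMD and no ENTRYPOINT, the command would also need to name the binary, roughly like this (reusing the path mentioned above):
command: /bin/haproxy-exporter -haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"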
