Create images from the same Dockerfile - docker

I have a nodejs application which has two entry points, worker.js and web.js.
Basically, web should respond to all incoming HTTP requests and worker should do some backend tasks.
Here is the question: how can I create two separate images from the same codebase? The problem is that I can't create a separate folder inside my project with different Dockerfiles, because I can't execute ADD ../ /app, and I don't want to keep two copies of the codebase. The project is in a git repo but not published.
This is how I'd like to run them:
docker run -d -p 80:80 --name app-web application-web
docker run -d --name app-worker application-worker
Thanks

Stick with one image per code-base, and use --workdir or just different CMDs to start the appropriate script:
docker run -d -p 80:80 --name app-web -w /app/web application
docker run -d --name app-worker -w /app/worker application
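If both scripts sit in the same directory instead, overriding the default command per container works just as well; a sketch, assuming the image name and script names from the question:
docker run -d -p 80:80 --name app-web application node web.js
docker run -d --name app-worker application node worker.js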

Abdullah's answer is the recommended one IMO (to save some disk space).
But to answer the question as asked, you can build the image with two different tags:
# assuming we're in the Dockerfile directory
docker build -t application-web .
docker build -t application-worker .
Then run different containers from these images:
docker run -d -p 80:80 --name app-web application-web
docker run -d --name app-worker application-worker
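Since nothing changes between the two builds, the second one should hit the layer cache, so both tags point at the same image ID. An equivalent shortcut is to build once and add the second tag with docker tag:
docker build -t application-web .
docker tag application-web application-worker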

Related

403 with nginx for files located in a bind-mounted volume with docker

I am trying to use my nginx server on docker, but I cannot use the files/folders if they belong to my volume. The goal of my test is to keep a volume between the files on my computer and the container.
I have searched for 3 days and tried a lot of solutions, but with no effect (useradd, chmod, chown, www-data, etc.).
I don't understand how it is possible to use nginx, a volume, and docker together.
The only solution for me right now is to copy the folder of my volume into another folder, so that I can chown it and use nginx. There is no official solution on the web, and I am surprised, because to me using docker with a volume bound to its container would be basic for daily work.
If someone has managed to implement it, I would be very happy if you could share your code. I need to understand what I am missing.
FYI, I am working in a VM.
Thanks!
I think you are not passing the right path in the volume option. There are a few ways to do it: you can pass the full path, or you can use $(pwd) if you are on a Linux machine. Let's say you are in /home/my-user/code/nginx/ and your HTML files are in the html folder.
You can use:
$ docker run --name my-nginx -v /home/my-user/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v ~/code/nginx/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
or
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
I created an index.html file inside the html folder; after the docker run, I was able to fetch it:
$ echo "hello world" >> html/index.html
$ docker run --name my-nginx -v $(pwd)/html/:/usr/share/nginx/html:ro -p 8080:80 -d nginx
$ curl localhost:8080
hello world
You can also create a Dockerfile, but you would need to use the COPY command. I'll give a simple example that works, but you should improve on it, e.g. by pinning a version:
Dockerfile:
FROM nginx
COPY ./html /usr/share/nginx/html
...
$ docker build -t my-nginx:0.0.1 .
$ docker run -d -p 8080:80 my-nginx:0.0.1
$ curl localhost:8080
hello world
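For instance, pinning the base image to a specific tag keeps builds reproducible; the exact tag below is just an example:
FROM nginx:1.25
COPY ./html /usr/share/nginx/html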
You can also use docker-compose; a sketch follows. By the way, these examples are just to give you some idea of how it works.
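A minimal docker-compose.yml sketch for the same read-only bind mount (the service name is arbitrary):
services:
  my-nginx:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
Run it with docker-compose up -d and curl localhost:8080 as before.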

Why do I keep seeing the nginx index.html on localhost when I run my docker image

I installed and ran nginx on my Linux machine to understand the configuration etc. After a while I decided to remove it safely, by following this thread, in order to use it in docker.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images, stopped all containers, and also ran this command:
sudo docker system prune -a
Then I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to docker and might have missed a lot of things. Let me know what extra docker commands I should run to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc) under folder - /myhost/nginx/html
your nginx configuration - /myhost/nginx/nginx.conf
Solution
Option 1: map your files (a bind-mounted volume) on the fly from outside the docker image via the docker CLI.
This is the command
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
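The mounted nginx.conf could be as minimal as this sketch (contents assumed; adjust to your setup):
# /myhost/nginx/nginx.conf - minimal sketch
events {}
http {
    server {
        listen 80;
        # serve the files mounted at /usr/share/nginx/html
        root /usr/share/nginx/html;
        index index.html;
    }
}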
Option 2: copy your files into the docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx

Docker how to pass a relative path as an argument

I would like to run this command:
docker run docker-mup deploy --config .deploy/mup.js
where docker-mup is the name of the image, and deploy, --config, .deploy/mup.js are arguments.
My question: how to mount a volume such that .deploy/mup.js is understood as the relative path on the host from where the docker run command is run?
I tried different things with VOLUME but it seems that VOLUME does the contrary: it exposes a container directory to the host.
I can't use -v because this container will be used as a build step in a CI/CD pipeline and as I understand it, it is just run as is.
Using -v to expose your current directory is the only way to make that .deploy/mup.js file visible inside your container, unless you bake it into the image itself using a COPY directive in your Dockerfile.
Using the -v option to map a host directory might look something like this:
docker run \
-v $PWD/.deploy:/data/.deploy \
-w /data \
docker-mup deploy --config .deploy/mup.js
This would map (using -v ...) the $PWD/.deploy directory onto /data/.deploy in your container, set the current working directory to /data (using -w ...), and then run deploy --config .deploy/mup.js.
Windows - PowerShell
If you're inside the directory you want to bind mount, use ${pwd}:
docker run -it --rm -d -p 8080:80 --name web -v ${pwd}:/usr/share/nginx/html nginx
or $pwd/. (forward slash dot):
docker run -it --rm -d -p 8080:80 --name web -v $pwd/.:/usr/share/nginx/html nginx
Just $pwd will cause an error:
docker run -it --rm -d -p 8080:80 --name web -v $pwd:/usr/share/nginx/html nginx
Variable reference is not valid. ':' was not followed by a valid variable name character. Consider using ${} to
delimit the name
Mounting a subdirectory underneath your current location, e.g. site-content, works with $pwd/ plus the subdirectory:
docker run -it --rm -d -p 8080:80 --name web -v $pwd/site-content:/usr/share/nginx/html nginx
In my case there was no need for $pwd, and using the standard current folder notation . was enough. For reference, I used docker-compose.yml and ran docker-compose up.
Here is a relevant part of docker-compose.yml.
volumes:
  - '.\logs\:/data'

Run Multiple Docker Images from One Bash script

I have a bash file that runs my app's docker image using docker run -it --network test_network -p 8000:8000 testApp, but I also need to run my mysql image using docker run -it --network test_network -p 3308:3308 mysql/mysql-server.
Normally I open a separate terminal window manually to run each one, but I'm trying to edit my bash script so that it can do both for me. Not sure how, though?
You can run both in detached mode. That will not block the script, allowing you to run both together. For that, you need the -d or --detach flag.
docker run --detach -it --network test_network -p 8000:8000 testApp
docker run --detach -it --network test_network -p 3308:3308 mysql/mysql-server
Edit:
While the approach mentioned above works, it is better to use docker compose to run multiple containers.
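A hedged sketch of what that compose file could look like (image names taken from the question; the network is declared in the file, so adjust if test_network already exists externally):
services:
  app:
    image: testApp
    networks: [test_network]
    ports:
      - "8000:8000"
  db:
    image: mysql/mysql-server
    networks: [test_network]
    ports:
      - "3308:3308"
networks:
  test_network: {}
Start both with docker compose up -d.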

Transform docker command line to Dockerfile

I have these docker commands:
1 - docker run -itp 3000:3000 --expose 3000 --name ead-courses-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-courses:/home/ead-courses \
    -w /home/ead-courses --link mongodb node-service npm start
2 - docker run -itp 3001:3001 --name ead-proofs-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-proofs:/home/ead-proofs \
    -w /home/ead-proofs --link ead-courses-service,mongodb node-service npm start
3 - docker run -itp 3002:3002 --name ead-students-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-students:/home/ead-students \
    -w /home/ead-students --link mongodb node-service npm start
I'd like to run these in an easier way. Is that possible, for example with a Dockerfile or docker-compose?
Not sure if this helps. If you are using Windows, just create a .bat file and put all your commands in it. You then execute the .bat file and all your commands get executed.
If you are using a unix/linux OS, create a .sh file and put all your commands in it.
As these are plain commands, this should work; see the sketch below.
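A sketch of such a .sh file wrapping the question's commands (note: -it is swapped for -d so each container detaches instead of blocking the script):
#!/bin/sh
docker run -d -p 3000:3000 --expose 3000 --name ead-courses-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-courses:/home/ead-courses \
    -w /home/ead-courses --link mongodb node-service npm start
docker run -d -p 3001:3001 --name ead-proofs-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-proofs:/home/ead-proofs \
    -w /home/ead-proofs --link ead-courses-service,mongodb node-service npm start
docker run -d -p 3002:3002 --name ead-students-service \
    -v /home/dizie/Projects/node/ead-project-api/ead-students:/home/ead-students \
    -w /home/ead-students --link mongodb node-service npm start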
A first approach, just to get you started, would be to:
run those images (you get three containers)
docker commit those containers (you get three images representing what was running)
apply CenturyLinkLabs/dockerfile-from-image to those images, which will generate a Dockerfile per image.
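Since the question also asks about docker-compose, here is a hedged sketch of an equivalent docker-compose.yml (paths and names taken from the commands above; the legacy --link is approximated with depends_on plus compose's default network, where services reach each other by name; the mongodb image is an assumption):
services:
  mongodb:
    image: mongo  # assumed image; the original --link only names the container
  ead-courses-service:
    image: node-service
    command: npm start
    working_dir: /home/ead-courses
    volumes:
      - /home/dizie/Projects/node/ead-project-api/ead-courses:/home/ead-courses
    ports:
      - "3000:3000"
    depends_on: [mongodb]
  ead-proofs-service:
    image: node-service
    command: npm start
    working_dir: /home/ead-proofs
    volumes:
      - /home/dizie/Projects/node/ead-project-api/ead-proofs:/home/ead-proofs
    ports:
      - "3001:3001"
    depends_on: [mongodb, ead-courses-service]
  ead-students-service:
    image: node-service
    command: npm start
    working_dir: /home/ead-students
    volumes:
      - /home/dizie/Projects/node/ead-project-api/ead-students:/home/ead-students
    ports:
      - "3002:3002"
    depends_on: [mongodb]
A single docker compose up -d then replaces all three docker run commands.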
