replicate call to start nginx in Dockerfile from docker-compose - docker

I have the following Dockerfile that does work:
FROM nginx:1.15.2-alpine
COPY ./build /var/www
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
ENTRYPOINT ["nginx","-g","daemon off;"]
I need to replicate this in docker-compose.
I'd like to specify the same image as the FROM instruction above.
I don't know where to put the COPY commands in docker-compose and I don't think ENTRYPOINT is what I am after in docker-compose

You don't need to redefine the entrypoint when using docker-compose; it is taken from the base nginx image. You can use bind mounts instead of COPY. E.g. like this (not tested):
nginx:
  image: "nginx:1.15.2-alpine"
  container_name: nginx
  volumes:
    - ./build:/var/www
    - ./nginx.conf:/etc/nginx/nginx.conf
  ports:
    - "80:80"
Note that the ports part goes beyond what the Dockerfile does: EXPOSE only documents a port, while ports actually publishes it on the host. You can leave that out if you don't need it.
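If you'd rather keep the COPY behavior from the Dockerfile instead of using bind mounts, you can also point compose at the Dockerfile itself (a sketch, assuming the Dockerfile sits in the project root next to docker-compose.yml; not tested):

```yaml
# docker-compose.yml — builds the image from the existing Dockerfile,
# so COPY and ENTRYPOINT stay exactly as they are
version: "3"
services:
  nginx:
    build: .
    ports:
      - "80:80"
```

With this variant, `docker-compose up --build` rebuilds the image whenever ./build or nginx.conf change, instead of the changes showing up live as they would with bind mounts.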

Related

Meaning of PORTS column of Docker container ls

I'm getting this value on the PORTS column of a docker container ls entry (a container for a react app served by Nginx):
PORTS
80/tcp, 0.0.0.0:40000->40000/tcp, :::40000->40000/tcp
I know the second part is IPv4 mapping and the third is IPv6. I don't understand the meaning of the 80/tcp, but I think it's what really makes the app accessible from the internet, because if I use mapping "80:80" it works, but now with "40000:40000" it doesn't.
My project has a structure like this, so it can build multiple projects at once with compose:
|
|- client (a React app)
| |- Dockerfile-client
|- .env.prod
|- docker-compose.yml
The docker-compose.yml looks like this:
version: '3.7'
services:
  client:
    build:
      dockerfile: ./client/Dockerfile-client
      context: ./ # so the .env.prod file can be used by React
    container_name: client
    env_file:
      - .env.prod # enabled to apply CLIENT_PORT var to Dockerfile-client
    ports:
      - "${CLIENT_PORT}:${CLIENT_PORT}"
    # others
All the variables are defined in .env.prod (CLIENT_PORT is 40000), I run compose with `docker compose --env-file .env.prod up`, and it reports no errors.
Here's the Dockerfile that builds the client container:
# build env
FROM node:13.13-alpine as build
WORKDIR /app
COPY ./client/package*.json ./
RUN npm ci
COPY ./client/ ./
COPY ./.env.prod ./
RUN mv ./.env.prod ./.env
RUN npm run build
# production env
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE ${CLIENT_PORT}
CMD ["nginx", "-g", "daemon off;"]
It would all work fine if I mapped "80:80", but my question is: how does that 80/tcp end up in the ls output when there's no "80" to be seen in the files? Might it be because of Nginx?
80/tcp comes from nginx, which listens on this port by default.
The correct port mapping in this case is 40000:80.
OR
If you want nginx to listen on another port such as 40000, update the listen directive in the nginx.conf file to that port:
http {
    server {
        listen 40000;
    }
}
And then use the port mapping 40000:40000.
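Applied to the compose file in the question, the first option is just a matter of mapping the env-driven host port onto nginx's default port (an untested fragment of the docker-compose.yml above):

```yaml
services:
  client:
    ports:
      - "${CLIENT_PORT}:80"   # host 40000 -> container 80, where nginx listens
```

Also note that `EXPOSE ${CLIENT_PORT}` in the Dockerfile only expands if CLIENT_PORT is declared with ARG there; either way EXPOSE is documentation only and does not change which port nginx listens on.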

How To Mount a Directory Of One Container Into Another After the Containers Are Run?

I have an Angular app that is built when the container runs, using the Dockerfile shown below, which is built from two images.
FROM node:10.15.3-alpine as builder
RUN mkdir -p /usr/src/banax_education_platform
WORKDIR /usr/src/banax_education_platform
RUN apk add git
COPY package*.json /usr/src/banax_education_platform/
RUN npm i
COPY . /usr/src/banax_education_platform
RUN npm i -g @angular/cli && npm run-script build
FROM nginx:1.13.12-alpine
RUN rm -rf /usr/share/nginx/html/*
COPY /nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /usr/src/banax_education_platform/dist/edu-app /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
So first it pulls the node image and builds the app with all requirements, then it pulls the nginx image and creates a server block to serve the app. Now I want to split the two images, so one container holds the Angular app and a second one does the nginx reverse proxy.
I came up with this docker-compose.yml:
version: "2"
services:
  nginx:
    build: ./nginx
    container_name: angular-nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - web
  web:
    build: ./angular
    container_name: angular-app
    ports:
      - "8000"
But the issue is, I want this task:
COPY --from=builder /usr/src/banax_education_platform/dist/edu-app /usr/share/nginx/html
to be done via docker-compose, because now I have two Dockerfiles, one for Angular and one for nginx, and I don't know how to link the Angular container's directory with the nginx container. Does anyone have an idea?
In a typical Web application, you run some sort of packager (for instance, Webpack) that "compiles" your application down to a set of static files. You then need some Web server to send those files to clients, but nothing runs that code on the server side once it's built.
This matches the multi-stage Docker image that you show in the question pretty well. The first stage only builds artifacts, and once it's done, nothing ever runs the image it builds. (Note for example the lack of CMD.) The second stage COPY --from=build the static artifacts it builds, and that's it.
Since there's no actual application to run in the first stage, it doesn't make sense to run this as a separate Docker Compose service or to otherwise "split this up" in the way you're showing.
If you don't mind having multiple build tools, one thing you can do is to treat the packaged application as "data" from the point of view of Nginx (it doesn't know anything about the content of these files, it just sends data over the network). You can pick some directory on your host and mount that into the Nginx container:
version: "3"
services:
  nginx:
    image: nginx:1.13.12-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - ./dist:/usr/share/nginx/html
and then run npm run-script build on your host system whenever you need to rebuild and redeploy the application.
I think there is a conceptual misunderstanding. I'm not familiar with Angular, but here's how I would approach this (not tested).
The reverse proxy itself does not need the compiled files that you generate in your Dockerfile. This container (web in your case) is just there to pass HTTP(S) requests to the correct endpoint (which is another webserver, let's call it app). app itself is an Nginx webserver that serves your Angular application.
So, what you would have to do is something like this:
version: "3.7"
services:
  web:
    image: nginx
    volumes:
      - ./config/with/proxy/rules:/etc/nginx/conf.d/default.conf
    ports:
      - 80:80
      - 443:443
    depends_on:
      - app
  app:
    build: ./path/to/your/dockerfile
    expose:
      - "8000"
    volumes:
      - ./path/to/nice/config:/etc/nginx/conf.d/default.conf
The config file of the first volume would have to include all the proxy rules that you need. Basically, something like this should occur somewhere:
location / {
    proxy_pass http://app:8000;
}
That's all your reverse proxy has to know.
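For completeness, the proxy config mounted into web could look something like this minimal default.conf (a sketch; the proxy_set_header lines are common additions, not from the question):

```nginx
server {
    listen 80;

    location / {
        # "app" resolves via Compose's internal DNS to the app service
        proxy_pass http://app:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```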

Deploy with docker-compose.yml

Not sure if this will be a duplicate question; I tried to find similar ones but I'm not sure my situation matches.
I am new to Docker and trying to set up a deployment for a small website.
So far I have a folder with three files:
index.html - has basic html
Dockerfile - which has
FROM ubuntu:16.04
COPY . /var/www/html/
docker-compose.yml - which has
version: '2.1'
services:
  app:
    build: .
    image: myname/myapp:1.0.0
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ./host-volumes:/cont-volumes
    network_mode: "host"
  phpfpm56:
    image: php-fpm:5.6
    container_name: phpfpm56
    volumes:
      - ./host-volumes:/cont-volumes
    network_mode: "host"
  mysql:
    image: mysql:5.7
    container_name: mysql
    ports:
      - "3306:3306"
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql:
Now I am using Jenkins to create the build, putting all my code into host volumes to make it available to the containers, and then I run
docker-compose build
It creates an image and I push it to Docker Hub.
Then I log in to the remote server, pull the image, and run it. But that won't work, because I still need to run docker-compose up inside the container.
Is this the right approach, or am I missing something here?
The standard way to do this is to copy your code into the image. Do not bind-mount host folders containing your code; instead, use a Dockerfile COPY directive to copy in the application code (and in a compiled language, use a RUN command to build it). For example, your PHP container might have a corresponding Dockerfile that looks like (referencing this base Dockerfile)
FROM php-fpm:5.6
# Base Dockerfile defines a sensible WORKDIR
COPY . .
# Base Dockerfile sets EXPOSE 9000
# Base Dockerfile defines ENTRYPOINT, CMD
Then your docker-compose.yml would say, in part
version: '3'
services:
  phpfpm56:
    build: .
    image: me/phpfpm56:2019-04-30
    # No other settings
And then your nginx configuration would say, in part (using the Docker Compose service name as a hostname)
fastcgi_pass phpfpm56:9000;
If you use this in production, I think you need to comment out the build: lines.
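One way to avoid hand-editing is a second compose file that pins the pushed image (an untested sketch; the tag is the illustrative one from above):

```yaml
# docker-compose.prod.yml — layered on top of the base file in production
services:
  phpfpm56:
    image: me/phpfpm56:2019-04-30
```

Run it with `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d`; later files override earlier ones, and compose uses the named image rather than rebuilding unless you pass --build.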
If you're extremely set on a workflow where there is no hostname other than localhost and you do not need to rebuild Docker images to update code, you at least need to restart (some of) your containers after you've done the code push.
docker-compose stop app phpfpm56
docker-compose up -d
You might look into a system-automation tool like Ansible or Chef to automate the code-push mechanism. Those same tools can also just install nginx and PHP, and if you're trying to avoid the Docker image build sequence, you might have a simpler installation and deployment system running servers directly on the host.
docker-compose up should not be run inside a container but on a Docker host. So it could be run via a shell on the host, but you need to have access to the compose file wherever you run the command.

Can I get host hostname inside Dockerfile

I need the hostname of the host in order to run an entrypoint script that behaves accordingly (production, preproduction). How can I get the hostname and set it as an ARG inside the Dockerfile? This Dockerfile is used by docker-compose.yml.
docker-compose.yml:
...
nginx:
  restart: always
  build: ./nginx
  ports:
    - "80:80"
    - "443:443"
  volumes_from:
    - web
  depends_on:
    - web
  container_name: 'nginx'
...
Dockerfile inside ./nginx folder:
FROM nginx:latest
ARG HOST_HOSTNAME=hostname
ENV HOST_HOSTNAME=$HOST_HOSTNAME
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
...
And inside docker-entrypoint.sh I want to be able to use ${HOST_HOSTNAME}. I also want this code to run on every machine without changing anything, just adding new hostnames to docker-entrypoint.sh.
You should pass the hostname through an environment variable.
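A minimal sketch of that (untested): ARG values are baked in at build time, while environment variables are set at run time, which is what you want when the same image runs on several hosts.

```yaml
# docker-compose.yml (fragment) — HOST_HOSTNAME reaches docker-entrypoint.sh at runtime
nginx:
  restart: always
  build: ./nginx
  environment:
    - HOST_HOSTNAME=${HOSTNAME}
```

HOSTNAME is not always exported to docker-compose, so you may need to start it as `HOSTNAME=$(hostname) docker-compose up`, or put the value in an .env file.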

How to expose nginx docker image to port other than 80?

I have the following Dockerfile (from the source):
# build stage
FROM node:9.11.1-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:1.13.12-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
At the end the application is exposed on port 80. I also have another, different Dockerfile, and to build both of them I'm using the following docker-compose.yml file:
version: "3"
services:
  service-name-one:
    image: dockerImageFromAbove
    ports:
      - "8080:80"
  service-name-two:
    image: someOtherImage
    ports:
      - "3000:3001"
And this example actually works. But I need to change the port of the nginx Docker image: instead of port 80 I need port 8081. Simply changing the number in both files above doesn't work, and in my research I found that the only working examples expose nginx on port 80.
I tried replacing the line
EXPOSE 8081
with
RUN -P 80:8081
EXPOSE 8081
but it seems the -P flag is not supported here. So how can I do such a mapping before exposing nginx on a port other than 80?
I found this post, but I can't figure out how to use the answers in my docker files.
I also found this post (part for environment variables), but also not sure how to integrate it with my docker-compose file.
The second file is not a Dockerfile but a docker-compose.yml; you have to change the ports in the docker-compose.yml and it will be OK.
The option -p "hostport:containerport" publishes the port when you use the docker run command.
Anyway, I suggest you stick with the supported official image before changing the image too much in the Dockerfile.
If you really need 8081, try something like this:
version: "3"
services:
  service-name-one:
    image: yournginxOrSomethingelse
    ports:
      - "8080:80"
      - "8085:8081"
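Note that the host-side mapping alone doesn't make nginx listen on 8081 inside the container; for that you also have to override nginx's default server block, e.g. (an untested sketch):

```nginx
# default.conf — replace /etc/nginx/conf.d/default.conf via COPY or a bind mount
server {
    listen 8081;
    root /usr/share/nginx/html;
    index index.html;
}
```

With this in place, EXPOSE 8081 in the Dockerfile and a ports entry like "8081:8081" behave as expected.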
I believe -P needs to be lower case: -p (this is a command-line option, not a Dockerfile instruction). The syntax is:
Dockerfile:
....
EXPOSE 80
....
Command line:
docker run -d -p 8081:80 --name my-service my-service:latest
