How can I have two images in one Dockerfile, linked together?
I don't want to use docker-compose; I want something like this:
FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm i
COPY . /usr/src/app
EXPOSE 4000
CMD [ "npm", "start" ]
FROM mongo:latest as mongo
WORKDIR /data
VOLUME ["/data/db"]
EXPOSE 27017
But I do not know how to join the images
Thank you
Compose is a tool for defining and running multi-container Docker applications.
Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Creating one image for Node and Mongo means you will have both of them inside one container, up and running (more resources, harder debugging, a less stable container, interleaved logs, etc.).
Create separate images, so you can run any image independently:
$ docker run -d -p 4000:4000 --name node myNode
$ docker run -d -p 27017:27017 --name mongo myMongo
And I strongly recommend using Compose files; they will give you much more control over your whole environment.
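To illustrate, the two docker run commands above could be captured in a single compose file; a minimal sketch, assuming the node image was built and tagged myNode as above (the service names and the named volume are arbitrary):

```yaml
version: "3"
services:
  node:
    image: myNode          # the image built from your node Dockerfile
    ports:
      - "4000:4000"
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db
volumes:
  mongodata:
```

With this, docker-compose up starts both containers on a shared network, so the node app can reach the database at the hostname mongo.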
I think the simplest solution is to create your own image from the mongo image and install Node manually.
You can have multiple FROM instructions in a Dockerfile:
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another. Simply make a note of the last image ID output by the commit before each new FROM instruction. Each FROM instruction clears any state created by previous instructions.
From : https://docs.docker.com/engine/reference/builder/#from
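Note that multiple FROM instructions give you a multi-stage build (one final image), not two linked runtime images. A minimal sketch of how a later stage can copy from an earlier one (the stage name, the dist path, and the build script are assumptions, not from the question's Dockerfile):

```dockerfile
# build stage
FROM node:latest AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm i
COPY . .
RUN npm run build

# runtime stage: only the built artifacts survive into the final image
FROM node:latest
WORKDIR /usr/src/app
COPY --from=build /usr/src/app/dist ./dist
EXPOSE 4000
CMD ["npm", "start"]
```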
Related
I was reading Quickstart: Compose and Django when I came across "defining a build in a compose file". Well, I've seen it before, but what I'm curious about here is: what's the purpose of it? I just can't get it.
Why we just don't build the image once (or update it whenever we want) and use it multiple times in different docker-compose files?
Here is the Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
And here is docker-compose.yml:
version: '3'
services:
  web:
    # <<<<
    # Why not building the image and using it here like "image: my/django"?
    # <<<<
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
You might say, "well, do as you wish!" I'm asking because I think there might be some benefits that I'm not aware of.
PS:
I mostly use Docker for bringing up services (DNS, monitoring, etc.); I've never used it for development.
I have already read this What is the difference between `docker-compose build` and `docker build`?
There's no technical difference between running docker build yourself and referencing the result with image: in the docker-compose.yml file, versus specifying the build: metadata directly in the docker-compose.yml.
The benefits to using docker-compose build to build images are more or less the same as using docker-compose up to run containers. If you have a complex set of -f path/Dockerfile --build-arg ... options, you can write those out in the build: block and not have to write them repeatedly. If you have multiple custom images that need to be built then docker-compose build can build them all in one shot.
In practice you'll frequently be iterating on your containers, which means you will need to run local unit tests, then rebuild images, then relaunch containers. Being able to drive the Docker end of this via docker-compose down; docker-compose up --build will be more convenient than remembering all of the individual docker build commands you need to run.
The one place where this doesn't work well is if you have a custom base image. So if you have a my/base image, and your application image is built FROM my/base, you need to explicitly run
docker build -t my/base base
docker build -t my/app app
docker run ... my/app
Compose doesn't help with the multi-level docker-build sequence; you'll have to explicitly docker build the base image.
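As an example of folding a long command line into the build: block, a sketch (the paths and the build argument are made up for illustration):

```yaml
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: path/Dockerfile
      args:
        SOME_ARG: some-value
    image: my/app
```

Running docker-compose build is then equivalent to the corresponding docker build -f path/Dockerfile --build-arg SOME_ARG=some-value -t my/app . invocation, without having to retype it.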
I am having trouble creating a volume that maps to the directory "/app" in my container
This is basically so that when I update the code I don't need to rebuild the image.
This is my Dockerfile:
# stage 1
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/my-first-app /usr/share/nginx/html
I use this command to run the container
docker run -d -p 100:80/tcp -v ${PWD}:/app:/app docker-testing:v1
and no volume gets linked to it.
However, if I were to do this
docker run -d -p 100:80/tcp -v ${PWD} docker-testing:v1
I do get a volume at least
Anything obvious that I am doing wrong?
Thanks
The ${PWD}:/app:/app should be ${PWD}/app:/app.
If you explode ${PWD}, you'd obtain something like /home/user/src/thingy:/app:/app which does not make much sense.
EDIT:
I'd suggest using docker-compose to avoid this kind of issue (it also greatly simplifies the commands needed to start docker containers).
In your case the docker-compose.yml would look like this:
version: "3"
services:
  doctesting:
    build: .
    image: docker-testing:v1
    volumes:
      - "./app:/app"
    ports:
      - "100:80"
I didn't really test if it works, there might be typos...
I've read tutorials about using docker:
docker run -it -p 9001:3000 -v $(pwd):/app simple-node-docker
but if I use:
docker run -it -p 9001:3000 simple-node-docker
it works too? Is -v no longer needed? Or is it taking the directory from the WORKDIR line in the Dockerfile?
FROM node:9-slim
# WORKDIR specifies the directory our
# application's code will live within
WORKDIR /app
Other tutorials run mkdir ./app in the Dockerfile and others don't; so is WORKDIR enough for Docker to create the folder automatically if it does not exist?
There are two common ways to get application content into a Docker container. Many Node tutorials I've seen confusingly do both of them. You don't need docker run -v, provided you docker build your container when you make changes.
The first way is to copy a static copy of the application into the image. You'd do this via a Dockerfile, typically looking something like this:
FROM node
WORKDIR /app
# Install only dependencies now, to make rebuilds faster
COPY package.json yarn.lock ./
RUN yarn install
# NB: node_modules is in .dockerignore so this doesn't overwrite
# the previous step
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
The resulting Docker image is self-contained: if you have just the image (maybe you docker pulled it from a repository) you can run it, as you note, without any special -v option. This path has the downside that you need to re-run docker build to recreate the image if you've made any changes.
The second way is to use docker run -v to inject the current source directory into the container. For example:
# --rm: clean up after we're done
# -p 3000:3000: publish a port
# -v $PWD:/app: mount the current directory over /app
# -w /app: set the default working directory
# "node" is the image to run, "yarn start" the command to run
docker run --rm -p 3000:3000 -v $PWD:/app -w /app node yarn start
This path hides everything in the /app directory in the image and replaces it in the container with whatever you have in your current directory. This requires you to have a built functional copy of the application's source tree available, and so it supports things like live reloading; helpful for development, not what you want for Docker in production.
Like I say, I've seen a lot of tutorials do both things:
# First build an image, populating /app in that image
docker build -t myimage .
# Now run it, hiding whatever was in /app
docker run --rm -p3000:3000 -v$PWD:/app myimage
You don't need the -v option, but you do need to manually rebuild things if your application changes.
$EDITOR src/file.js
yarn test
sudo docker build -t myimage .
sudo docker run --rm -p3000:3000 myimage
As I note here the docker commands require root-equivalent permission; but on the flip side the final docker run command is very close to what you'd run "for real" (maybe via Docker Compose or Kubernetes, but without requiring a copy of the application source).
I am creating an image of my application which involves the packaging of different applications.
After running the tests and the npm/bower installs, I am trying to copy the content from the previous stage into a fresh image. But that COPY step is very slow, taking more than 3-4 minutes.
COPY --from=0 /data /data
(Size of /data folder is around 800MB and thousands of files)
Can anyone suggest a better alternative or an idea to optimize this?
Here is my Dockerfile:
FROM node:10-alpine
RUN apk add python git \
&& npm install -g bower
ENV CLIENT_DIR /data/current/client
ENV SERVER_DIR /data/current/server
ENV EXTRA_DIR /data/current/extra
ADD src/client $CLIENT_DIR
ADD src/server $SERVER_DIR
WORKDIR $SERVER_DIR
RUN npm install
RUN npm install --only=dev
RUN npm run build
WORKDIR $CLIENT_DIR
RUN bower --allow-root install
FROM node:10-alpine
# This step is very, very slow.
COPY --from=0 /data /data
EXPOSE 80
WORKDIR /data/current/server/src
CMD ["npm","run","start:staging"]
Or, if anyone can help me clean up the first stage (to reduce the image size) so that the second stage isn't needed, that would be useful too.
It is taking time because the number of files is large. Compressing the data folder into a tar archive, then copying and extracting it, will help in your situation.
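A sketch of that tar-based approach applied to your Dockerfile (only the end of the first stage and the second stage are shown; the archive path and compression flags are suggestions, and the actual speedup depends on your storage driver):

```dockerfile
# ...at the end of the first stage, after the build steps:
RUN tar -czf /data.tar.gz -C / data

FROM node:10-alpine
COPY --from=0 /data.tar.gz /
# extract the archive into /data and remove it
RUN tar -xzf /data.tar.gz -C / && rm /data.tar.gz
EXPOSE 80
WORKDIR /data/current/server/src
CMD ["npm","run","start:staging"]
```

Copying a single large file between stages avoids the per-file overhead of copying thousands of small files.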
Otherwise
If you can move this step to running containers instead, it will be very fast. As per your requirement, you need to copy data that was already created in one image of your application into another.
You can use volume sharing, which shares a volume between 2 or more docker containers.
Create 1st container:
docker run -ti --name=Container -v datavolume:/datavolume ubuntu
2nd container:
docker run -ti --name=Container2 --volumes-from Container ubuntu
Or you can use the -v option; with it, create your 1st and 2nd containers as:
docker run -v docker-volume:/data-volume --name centos-latest -it centos
docker run -v docker-volume:/data-volume --name centos-latest1 -it centos
This will create and share the same volume folder, data-volume, in both containers. docker-volume is the volume name, and data-volume is the folder in each container that points to the docker-volume volume. In the same way, you can share a volume with more than 2 containers.
I am totally new to Docker, and the client I am working for has sent me a Dockerfile and a .dockerignore file, probably to set up the work environment.
So this is basically what he sent me:
FROM node:8
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY assets ./assets
COPY server ./server
COPY docs ./docs
COPY internals ./internals
COPY track ./track
RUN npm run build:dll
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
with the docker build and run commands (he also provided these):
docker build -t reponame:tag .
docker run -p 3000:3000 admin-web:v1
Here, first, can someone tell me what COPY . . means?
He asked me to configure all the ports accordingly. From going through videos, I remember that we can map ports like this: -p 3000:3000. But what does configuring ports mean, and how can I do it? Any relevant article on this would also be helpful. Do I need to make a docker-compose file?
. is the current directory in Linux. So basically: copy the current local directory into the container's current working directory.
The switch -p is used to configure port mapping. -p 2900:3000 means: publish your local port 2900 to the container's port 3000, so that the container is reachable from the outside (by your web browser, for instance). Without that mapping the port would not be accessible from outside the container. The port is still available to other containers inside the same docker network, though.
You don't need to make a docker-compose.yml file, but it will certainly make your life easier if you have one, because then you can just run docker-compose up every time to start the container instead of having to run
docker run -p 3000:3000 admin-web:v1
every time you want to start your application.
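If you do add one, a minimal docker-compose.yml for that run command might look like this (the service name is arbitrary):

```yaml
version: "3"
services:
  admin-web:
    image: admin-web:v1
    ports:
      - "3000:3000"
```

Then docker-compose up -d is equivalent to the docker run command above.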
Btw here is one of the ultimate docker cheatsheets that I got from a friend, might help you: https://gist.github.com/ruanbekker/4e8e4ca9b82b103973eaaea4ac81aa5f