I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build:
context: .
dockerfile: Dockerfile
container_name: web
restart: always
ports:
- "3000:3000"
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in the Dockerfile, it should rebuild all the images before bringing up the stack. It can also be used in a shell script if needed:
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change is reflected until you rebuild the image.
But
If it is only for local development, then you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach your code (and any new changes) will appear both on the host and in the running container.
Also, you will need to restart the server on every change, and for this you can run your app using nodemon (it watches for changes in the code and restarts the server); a sketch of that change follows the compose snippet below.
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
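As mentioned above, here is a minimal sketch of the nodemon part. It assumes nodemon is available (e.g. listed in your devDependencies) and that your entry point is src/index.js; both are assumptions, so adjust to your project:
web:
  command: npx nodemon src/index.js
  volumes:
    - .:/usr/src/app
    # anonymous volume so the bind mount doesn't hide the image's node_modules
    - /usr/src/app/node_modules
The second volume line is optional, but it keeps the dependencies that were installed during the image build instead of whatever happens to be on the host.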
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you deploy the stack once (possibly from a shell script), and once your containers are running you can create a Jenkinsfile or configure a CI/CD pipeline that pulls the updated image and applies it to the running service with docker service update --image <NEW_IMAGE> <SERVICE_NAME>.
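For illustration, a rough sketch of that flow with Swarm, assuming the stack is deployed as mystack and the web service ends up named mystack_web (placeholder names):
# deploy the stack once
docker stack deploy -c docker-compose.yml mystack
# later, from your CI/CD job, roll the service over to the freshly pushed image
docker service update --image myregistry/web:new-tag mystack_web
Swarm then replaces the service's tasks one by one according to the rolling-update settings.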
How to access the running containers during new container docker build?
Need to access the database container during the build of the application container
docker-compose
version: '3'
services:
  db:
    build: ./db
    ports:
      - 1433:1433
    networks:
      - mynetwork
  app:
    build: ./app
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork: {}
I tried to bring up the db prior to building the app container, but it's not working:
docker-compose build db
docker-compose up -d db
docker-compose build app
You can't, and it's not a good idea. For example, if you run:
docker-compose build
docker-compose down -v
docker-compose up
The down step will delete all of the containers and their underlying storage (including the contents of the database); then the up step will create all new containers from existing images without re-running the Dockerfile. Even if you added a --build option, Docker's layer caching would conclude that the filesystem output of your database setup command hasn't changed, and will skip re-running that step.
You can encounter a similar problem if you docker push the built image to some registry and run it on a different host: since the image is reusable, commands from its Dockerfile won't get re-run, but it's not the same database, so the setup won't get done.
Depending on what kind of setup you're trying to do, probably the best approach is to configure your image with an entrypoint script that runs your application's database migrations and then uses exec "$@" to run the main container command. It can also work to put setup commands in the database's /docker-entrypoint-initdb.d directory, though these won't get re-run if your application's database schema changes.
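A rough sketch of that entrypoint pattern, with the migration command being a hypothetical placeholder:
#!/bin/sh
# docker-entrypoint.sh: run setup against the already-running database,
# then hand control to the container's main command
set -e
./scripts/migrate      # placeholder: replace with your real migration/setup command
exec "$@"
In the Dockerfile you would set ENTRYPOINT ["./docker-entrypoint.sh"] and keep CMD as the normal application command, so the setup runs at container start (when the database is reachable) rather than at build time.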
At a technical level, this doesn't work because the docker build environment isn't on any particular Docker network, neither the mynetwork you manually specify nor the default network Compose creates on its own. The build sequence runs separately from running the resulting image, and it ignores most of the Docker Compose settings.
I've updated an environment variable in my Dockerfile and restarted with docker compose up -d.
A shell file run on container start with the line echo $MY_VAR echoes the appropriate value; however, when I open the browser console within my application and type env, it only prints out my previous env.
I've tried clearing my cache, force rebuilding of the image via the -d flag on docker compose up, deleting the old images, literally anything and everything, yet somehow the old env remains.
My Dockerfile:
FROM node:17.4.0-alpine3.14
WORKDIR /code
CMD ["bin/run"]
ENV \
MY_VAR='abcdef' \
VERSION='development'
COPY package*.json ./
RUN npm install
COPY src src
COPY cogs.js ./
COPY bin bin
RUN bin/build
My Docker Compose
version: "3.9"
services:
balancer:
image: nginx:1.19.7-alpine
ports:
- 80:80
volumes:
- ./src/nginx.conf:/etc/nginx/nginx.conf
networks:
default:
aliases:
- www.dev.mydomain.com
app: &app
build:
context: "../app"
volumes:
- ../app/bin:/code/bin
- ../app/package-lock.json:/code/package-lock.json
- ../app/package.json:/code/package.json
- ../app/src:/code/src
- app-dist:/code/dist
environment:
MY_VAR: abcdef
VERSION: 'development'
app-watch:
<<: *app
command: ["bin/watch"]
volumes:
app-dist:
Where I use it in my app (config.js):
const { env } = globalThis;
export default {
myVar: env.MY_VAR,
version: env.VERSION
};
(Screenshot of the updated Docker vars omitted; STRIPE_PUBLIC_KEY in the screenshot corresponds to MY_VAR here.)
I'm honestly completely confused as to how the variable can be updated when I echo $MY_VAR in my bin/run script, yet logging the env in the browser returns an outdated version of it.
I think you should not put the variable in both the Dockerfile and docker-compose.yml (unless you explicitly need it at build time), but in either docker-compose.yml or a .env file.
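For example, a minimal sketch of the .env route (Compose automatically reads a .env file next to docker-compose.yml for ${...} substitution; the values below just mirror the question):
# .env
MY_VAR=abcdef
VERSION=development

# docker-compose.yml, app service
environment:
  MY_VAR: ${MY_VAR}
  VERSION: ${VERSION}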
Start with docker compose build if the images depend on the env vars during the build stage.
Docker detects the changes when running docker compose up, but if you want to force recreation of the containers, use the --force-recreate flag (-d only detaches the containers from the session; it does not trigger a rebuild).
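In short, something like the following sequence, using only the flags described above:
# rebuild images that bake the variable in at build time
docker compose build
# recreate the containers so they pick up the new environment
docker compose up -d --force-recreate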
docker compose restart is not suitable at that point, because:
If you make changes to your docker-compose.yml configuration these
changes are not reflected after running docker compose restart command.
Also make sure to do a hard refresh on the page where you are checking the results, using Ctrl+Shift+R or Ctrl+F5 (in most browsers).
I am trying to use a Docker volume/bind mount so that I don't need to build my project again and again after every small change. I do not get any error, but changes in the local files are not visible in the container, so I still have to rebuild the project to get a new filesystem snapshot.
The following solution seemed to work for some people, so I have tried restarting Docker and resetting credentials at Docker Desktop --> Settings --> Shared Drives.
Here is my docker-compose.yml file
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
I have tried it through the Docker CLI too, but the problem persists:
docker build -f Dockerfile.dev .
docker run -p 3000:3000 -v /app/node_modules -v ${pwd}:/app image-id
Windows does copy the files in the current directory to the container, but they are not kept in sync.
I am using Windows 10 PowerShell and Docker version 18.09.2.
UPDATE:
I have checked the container contents using the command
docker exec -t -i container-id sh
and printed the file contents using the command
cat filename
From this it is clear that the files the container references have changed/updated, but I still don't understand why I have to restart the container to see the changes in the browser.
Shouldn't they be apparent after just refreshing the tab?
I have a golang script which interacts with Postgres. I created a service in docker-compose.yml for both the Go app and Postgres. When I run it locally with docker-compose up it works perfectly, but now I want to create one single image to push to my Docker Hub, so it can be pulled and run with just docker run. What is the correct way of doing this?
The image created by docker-compose up --build launches with no error with docker run, but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up), or builds and then runs (with up --build), containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
so, in your example, docker-compose will run two containers:
1 - based on the go configurations
2 - based on the db configurations
to see what containers are actually running, use the command:
docker ps -a
for more info see docker docs
It is always recommended to run each service in a separate container, but if you insist on making an image that has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it. (A sketch of the recommended single-service approach follows the links below.)
The installation steps can be done inside the Dockerfile; please refer to:
- postgres official Dockerfile
- golang official Dockerfile
combine them to get both.
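If you go the recommended route instead (the Go app in its own image, Postgres as a separate container), a minimal multi-stage Dockerfile sketch could look like the following; it assumes your project builds with go build and that ./src/main.go is the entry point, both of which are assumptions about your setup:
# build stage
FROM golang:latest AS build
WORKDIR /app
COPY . .
RUN go build -o /app/server ./src/main.go

# small runtime stage with just the compiled binary
FROM debian:stable-slim
COPY --from=build /app/server /usr/local/bin/server
EXPOSE 80
CMD ["server"]
With an image like this, docker run <image> starts the server directly instead of dropping into /bin/bash and exiting.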
Edit (DigitalOcean deployment):
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large scale/traffic applications, more advanced solutions are used such as:
- docker swarm
- kubernetes
For more info on Kubernetes on digital ocean, please refer to the official docs
hope this helps you find your way.
I'm a newbie to docker.
I want to create an image with my web application. I need some application server, e.g. wlp, then I need some database, e.g. postgres.
There is a Docker image for wlp and there is a Docker image for postgres.
So I created the following simple Dockerfile:
FROM websphere-liberty:javaee7
FROM postgres:latest
Now, maybe it's lame, but when I build this image
docker build -t wlp-db .
run the container
docker run -it --name wlp-db-test wlp-db
and check it
docker exec -it wlp-db-test /bin/bash
only Postgres is running and WLP is not even there. The /opt directory is empty.
What am I missing?
You need to use a docker-compose file. This lets you connect two different containers running two different images: one holding your server and the other the database service.
Here is an example of a Node.js server container working with a MongoDB container.
First of all, I write the Dockerfile to configure the main container:
FROM node:latest
RUN mkdir /src
RUN npm install nodemon -g
WORKDIR /src
ADD app/package.json package.json
RUN npm install
EXPOSE 3000
CMD npm start
Then I create the docker-compose file to configure both containers and link them:
version: '3' # docker-compose version
services: # services are your different containers
  node_server: # first container, containing the Node.js server
    build: . # all of my source files are at the root path
    volumes: # volumes enable hot reload, for example
      - "./app:/src/app"
    ports: # binding the host port to the container port
      - "3030:3000"
    links: # linking the first service to the named mongo service (see below)
      - "mongo:mongo"
  mongo: # declaration of the mongodb container
    image: mongo # using the mongo image
    ports: # port binding for mongodb is required
      - "27017:27017"
I hope this helped.
Each service should have its own image/Dockerfile. You start multiple containers and connect them over a network so they can communicate.
If you wish to compose multiple containers in one file, check out docker-compose, which is made for just that!
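For the WLP + Postgres case from the question, a minimal docker-compose.yml sketch could look like this; the ports, password, and volume path are placeholders rather than values taken from the question:
version: '3'
services:
  app:
    image: websphere-liberty:javaee7
    ports:
      - "9080:9080"
    depends_on:
      - db
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - ./data:/var/lib/postgresql/data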
You can't FROM multiple times in one Dockerfile and expect both processes to run.
That builds each stage, but only the last FROM determines the final image and its entrypoint, which here is Postgres, because it comes second.
This pattern is typically only done when you have some "setup" docker image, then a "runtime" image on top of it.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/#use-multi-stage-builds
Also, what you're trying to do is not very adherent to "microservices". Run the database separately from your application. Docker Compose can assist you with that, and almost all the examples on Docker's website use Postgres with some web app.
Plus, you're starting an empty database and server. You need to copy at least a WAR file, for example, to run your server code.
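To get your code into the Liberty image, a minimal Dockerfile sketch might look like this, assuming your build produces target/app.war (a placeholder path) and that dropins deployment is sufficient for your app:
FROM websphere-liberty:javaee7
# the Liberty image picks up applications dropped into /config/dropins
COPY target/app.war /config/dropins/
Postgres then stays its own service, wired up via Compose as in the earlier example.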