Can I use one container for the software (e.g. Apache, PHP) and another container just for the application code - the /var/www/ folder?
If so, how? Any caveats here?
I need this to speed up deployment - building the full image takes more time, as does uploading and downloading the full image on all instances.
Yes you can!
Example(s):
docker-compose.yml:
web:
  build: nginx
  volumes_from:
    - app
  ...
app:
  build: app
  ...
You would want your "nginx" Dockerfile to look like:
FROM nginx
VOLUME /var/www/html
...
Where `/var/www/html` is part of your "app" container.
You would hack on "app" either locally or via Docker (docker build app, docker run ... app, etc.).
When you're reasonably satisfied you can then test the whole integration by doing something like docker-compose up.
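Note that volumes_from only exists in the v1/v2 Compose file formats; it was removed in v3. On v3, the same pattern can be sketched with a named volume instead (service and volume names here are illustrative):

```yaml
version: '3'
services:
  web:
    build: nginx
    volumes:
      - app_code:/var/www/html
  app:
    build: app
    volumes:
      - app_code:/var/www/html
volumes:
  app_code:
```

Keep in mind that a named volume is only populated from the image the first time it is created, so redeploying the app image will not refresh its contents automatically.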
Related
this is my second day working with Docker, can you help me with a solution for this typical case:
Currently, our application is a combination of a Java Netty server, Tomcat, Python Flask, and MariaDB.
Now we want to use Docker to make deployment easier.
My first idea is to create one Docker image for the environment (CentOS + Java 8 + Python 3), another image for MariaDB, and one image for the application.
So the docker-compose.yml should be like this
version: '2'
services:
  centos7:
    build:
      context: ./
      dockerfile: centos7_env
    image: centos7_env
    container_name: centos7_env
    tty: true
  mariadb:
    image: mariadb/server:10.3
    container_name: mariadb10.3
    ports:
      - "3306:3306"
    tty: true
  app:
    build:
      context: ./
      dockerfile: app_docker
    image: app:1.0
    container_name: app1.0
    depends_on:
      - centos7
      - mariadb
    ports:
      - "8081:8080"
    volumes:
      - /home/app:/home/app
    tty: true
The app_docker Dockerfile will be like this:
FROM centos7_env
WORKDIR /home/app
COPY docker_entrypoint.sh ./docker_entrypoint.sh
RUN chmod +x ./docker_entrypoint.sh
ENTRYPOINT ["./docker_entrypoint.sh"]
In docker_entrypoint.sh there would be a couple of commands like:
#!/bin/bash
sh /home/app/server/Server.sh start
sh /home/app/web/Web.sh start
python /home/app/analyze/server.py
I have some questions:
1- Is this design good, or is there a better approach?
2- Should we use a separate image for the database like this? Or could we install the database on the OS image and then commit it?
3- If I run docker-compose up, will Docker create two containers, one for the OS image and one for the app image based on it? Is there any way to create a container only for the app (which already runs on CentOS)?
4- If the app Dockerfile is not based on the OS image but uses FROM scratch, will it still run as expected?
Sorry for the long question, thank you all in advance!
One thing to understand is that a Docker container is not a VM - containers are much more lightweight, so you can run many of them on a single machine.
What I usually do is run each service in its own container. This allows me to package only stuff related to that particular service and update each container individually when needed.
With your example I would run the following containers:
MariaDB
Container running /home/app/server/Server.sh start
Container running /home/app/web/Web.sh start
Python container running python /home/app/analyze/server.py
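Sketched as a Compose file, that separation might look something like this (the directory names and start commands are assumptions based on the paths in the question):

```yaml
version: '2'
services:
  mariadb:
    image: mariadb/server:10.3
    ports:
      - "3306:3306"
  server:
    build: ./server        # image that runs Server.sh
    command: sh Server.sh start
    depends_on:
      - mariadb
  web:
    build: ./web           # image that runs Web.sh
    command: sh Web.sh start
    depends_on:
      - mariadb
  analyze:
    build: ./analyze       # image that runs the Python service
    command: python server.py
    depends_on:
      - mariadb
```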
You don't really need to run a centos7 container - it is just a base image used to build other containers on top of. You would have to build it manually first so that other images can be built from it - I guess this is what you are trying to achieve here, but it makes docker-compose.yml a bit confusing.
There's really no need to create a huge base container which contains everything. A better practice, in my opinion, is to use more specialized containers: in your case, for Python you could have a container which contains only Python, and for Java, your preferred JDK.
My personal preference is Alpine-based images, and you can find many official images built on it: python:<version>-alpine, node:<version>-alpine, openjdk:<version>-alpine (though I'm not quite sure about all versions), postgres:<version>-alpine, etc.
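For example, the Python service from the question could get its own small image along these lines (the requirements.txt file and directory layout are assumptions):

```dockerfile
FROM python:3.9-alpine
WORKDIR /home/app/analyze
# Install only this service's dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "server.py"]
```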
Hope this helps. Let me know if you have other questions and I will try to address them here.
I have a golang script which interacts with postgres. I created a service in docker-compose.yml for both golang and postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run ". What is the correct way of doing this?
The image created by "docker-compose up --build" launches with no error via "docker run " but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: # some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: # some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with docker so any comments on how to do things idiomatically are appreciated
docker-compose does not combine docker images into one; it runs (with up) or builds and then runs (with up --build) containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
so, in your example, docker-compose will run two containers:
1 - based on the go configuration
2 - based on the db configuration
to see what containers are actually running, use the command:
docker ps -a
for more info see docker docs
It is always recommended to run each service in a separate container, but if you insist on making one image which has both golang and postgres, you can take a postgres base image and install golang on it, or the other way around: take a golang-based image and install postgres on it.
The installation steps can be done inside the Dockerfile, please refer to:
- postgres official Dockerfile
- golang official Dockerfile
combine them to get both.
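As a rough sketch only (this approach is generally discouraged, and the base image tag and package names here are assumptions), installing Go on top of a postgres base image could look like:

```dockerfile
FROM postgres:11
# Debian-based image, so Go can come from apt
RUN apt-get update \
 && apt-get install -y --no-install-recommends golang \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /go/src/workflow/project
COPY . .
# A wrapper script (or a process manager such as supervisord) would
# still be needed to start both postgres and the Go server together.
```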
Edit: (DigitalOcean deployment)
Well, if you copy everything (docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- docker swarm
- kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.
I have two containers: nginx and angular. The angular container contains the code and is automatically pulled from the registry when there is a new version (with watchtower).
I set up a Shared Volume between angular & nginx to share the code from angular to nginx.
### Angular #########################################
  angular:
    image: registry.gitlab.com/***/***:staging
    networks:
      - frontend
      - backend
    volumes:
      - client:/var/www/client

### NGINX Server #########################################
  nginx:
    image: registry.gitlab.com/***/***/***:staging
    volumes:
      - client:/var/www/client
    depends_on:
      - angular
    networks:
      - frontend
      - backend

volumes:
  client:

networks:
  backend:
  frontend:
When I build and run the environment for the first time, everything works.
The problem is when there is a new version of the client: the image is pulled, the container is re-built, and the new code version is inside the angular container, but the nginx container still has the old version of the client.
Shared volumes do not let me do what I want because we cannot specify which container is the source. Is it possible to mount a volume from one container into another?
Thanks in advance.
EDIT
The angular container is only there to serve the files. We could rsync the built application to the host machine and then mount the volume into the container (host -> guest), but that would go against our CI process: build app -> build image -> push to registry -> watchtower pulls new image
Docker volumes are not intended to share code, and I'd suggest reconsidering this workflow.
The first time you launch a container with an empty volume, and only the first time and only if the volume is already empty, Docker will populate it with contents from the container. Since volumes are intended to hold data, and the application is likely to change the data that will be persisted, Docker doesn't overwrite the application data if the container is restarted; whatever was in the volume directory remains unchanged.
In your setup that means this happens:
You start the angular container the first time, and since the client named volume is empty, Docker copies content into it.
You start the nginx container.
You delete and restart the angular container; but since the client named volume is no longer empty, Docker leaves the old content there.
The nginx container still sees the old content.
For a typical browser application, you don't actually need a "program" running: once you've run through a Typescript/Webpack/... sequence, the output is a collection of totally static files. In the case of Angular, there is an Ahead-of-Time compiler that produces these static files. The sequence I'd recommend here is:
Check out your application source tree locally.
Develop your browser application in isolation, using developer-oriented tools like ng serve or npm start. Since this is all running locally, you don't need to fight with anything Docker-specific (filesystem mappings, permissions, port mappings, ...); it is a totally normal Javascript development sequence. The system components you need for this are just Node; it is strictly easier than installing and configuring Docker.
Compile your application to static files with the Angular AOT compiler, Webpack, or npm run build.
Publish those static files to a CDN; or bind-mount them into an nginx container; or maybe build them into a custom image.
In the last case you wouldn't use a named Docker volume. Instead you'd mount the local filesystem into the container. A complete docker-compose.yml file for this case could look like:
version: '3'
services:
  nginx:
    image: registry.gitlab.com/***/***/***:staging
    volumes:
      - ./client:/var/www/client
    ports:
      - '8000:80'
From your comment:
There is no program running for the client; the CI compiles the app and builds the custom image, which COPYs the application files into /var/www/client. Then watchtower pulls this new image and restarts the container. The container only runs as a daemon with (tail -f /dev/null & wait).
Looking at this from a high level, I don't see any need to have two containers or volumes at all. Simply build your application with a multi-stage build that generates an nginx image with the needed content:
FROM your_angular_base AS build
COPY src /src
RUN steps to compile your code
FROM nginx_base AS release
...
COPY --from=build /var/www/client/ /var/www/client/
...
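Concretely, for an Angular app that sketch might be filled in like this (the base image tags, paths, and build command are assumptions):

```dockerfile
FROM node:16-alpine AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes the project's build script accepts an output path
RUN npm run build -- --output-path=/var/www/client

FROM nginx:alpine AS release
COPY --from=build /var/www/client/ /var/www/client/
```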
Then your compose file is stripped down to just:
...
### NGINX Server #########################################
  nginx:
    image: registry.gitlab.com/***/***/***:staging
    networks:
      - frontend
      - backend

networks:
  backend:
  frontend:
If you do find yourself in a situation where a volume must be shared between two running containers, and the volume needs to be updated on each deploy of one of the images, then the best place for that is an entrypoint script that copies files from a location in the image into the volume. I have an example of this in my docker-base with the save-volume and load-volume scripts.
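A minimal sketch of that entrypoint idea, assuming the image bakes its build output into one directory and the shared volume is mounted at another (both paths are illustrative):

```shell
#!/bin/sh
# On every container start, refresh the shared volume from the files
# baked into the image, so sibling containers see the new version.

refresh_volume() {
    src="$1"    # static files built into the image
    dest="$2"   # mount point of the shared named volume
    mkdir -p "$dest"
    cp -a "$src/." "$dest/"
}

# In the real container this would be followed by something like:
#   refresh_volume /app-dist /var/www/client
#   exec "$@"
```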
I am working on a docker container that is being created from a generic image. The entry point of this container is dependent on a file in the local file system and not in the generic image. My docker-compose file looks something like this:
service_name:
  image: base_generic_image
  container_name: container_name
  entrypoint:
    - "/user/dlc/bin"
    - "-p"
    - "localFolder/fileName.ext"
    - more parameters
The challenge that I am facing is removing this dependency and adding it to the base_generic_image at run time so that I can deploy it independently. Should I add this file to the generic base image and then proceed (the file is not required by others), or should this be done when creating the container? If so, what is the best way to go about it?
You should create a separate image for each part of your application. These can be based on the base image if you'd like; the Dockerfile might look like
FROM base_generic_image
COPY dlc /usr/bin
CMD ["dlc"]
Your Docker Compose setup might have a subdirectory for each component and could look like
servicename:
  image: my/servicename
  build:
    context: ./servicename
  command: ["dlc", "-p", ...]
In general, Docker volumes and bind mounts are good for persistent data (when absolutely required; stateless containers with external databases are often easier to manage), for getting log files out of containers, and for pushing complex configuration into containers. The actual program being run should generally be built into the image. The goal is that you can take the image and docker run it on a clean system without any of your development environment on it.
I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally: docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user@server
ssh user@server
unzip deploy.zip ./
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to run docker-compose up again on the server, but that tries to rebuild the containers as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application and then building the images on the server instead? How would you start composed containers if you were storing/loading the images from a registry?
The problem was that I was using the same docker-compose.yml file in development and production.
The web service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then I created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
After running docker pull on the server, it's safe to stop and recreate the running containers with the new images.