I have the following docker-compose.yml on my local machine:
version: "3.3"
services:
api:
build: ./api
volumes:
- ./api:/api
ports:
- 3000:3000
links:
- mysql
depends_on:
- mysql
app:
build: ./app
volumes:
- ./app:/app
ports:
- 80:80
mysql:
image: mysql:8.0.27
volumes:
- ./mysql:/var/lib/mysql
tty: true
restart: always
environment:
MYSQL_DATABASE: db
MYSQL_ROOT_PASSWORD: qwerty
MYSQL_USER: db
MYSQL_PASSWORD: qwerty
ports:
- '3306:3306'
The api service is a NestJS app; app and mysql are Angular and MySQL respectively. I need to work on this setup locally.
How can I make it so that my changes are applied without rebuilding the containers every time?
You don't have to build an image with your sources in it for a development environment. For NestJS, and since you're using Docker (I point this out deliberately, because other container runtimes exist), you can simply run a Node.js image from the main Docker registry: https://hub.docker.com/_/node.
You could run it with:
docker run -d -v "$(pwd)/app:/app" node:12-alpine node /app/index.js
N.B.: I chose 12-alpine for the example, and note that docker run requires an absolute host path for bind mounts, hence the "$(pwd)". I assume the file that starts your app is index.js; replace it with yours.
Keep in mind that you must install the Node dependencies yourself, and they must end up in the ./app directory.
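One way to do that without installing Node locally is to run the install inside the same image; a minimal sketch (assuming yarn, which ships with the official node image; npm install works the same way):
docker run --rm -v "$(pwd)/app:/app" -w /app node:12-alpine yarn install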
For docker-compose, it could look like this:
version: "3.3"
services:
app:
image: node:12-alpine
command: /app/index.js
volumes:
- ./app:/app
ports:
- "80:80"
The same approach works for your API project.
For a production image, it is still advisable to build an image with the sources baked in.
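Such a production Dockerfile might look roughly like this (a sketch only, assuming a yarn-based project with an index.js entry point; adjust names to your project):
FROM node:12-alpine
WORKDIR /app
# install dependencies first so this layer is cached across code changes
COPY package.json yarn.lock ./
RUN yarn install --production
# then bake the sources into the image
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]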
Say you're working on your front-end application (app). It needs to make calls out to the other components, especially api. So you can start the things it depends on, but not the application itself:
docker-compose up -d api
Update your application configuration for this different environment: if you previously proxied to http://api:3000, for example, you now need http://localhost:3000 to connect through the container's published ports:.
Now you can develop your application totally normally, without doing anything Docker-specific.
# outside Docker, on your normal development workstation
yarn run dev
$EDITOR src/components/Foo.tsx
You might find it convenient to use environment variables for these settings that will, well, differ per environment, as sketched below. If you're developing the back-end code but want to attach a live UI to it, you'll either need to rebuild the container or update the front-end's back-end URL to point at the host system.
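For example (the variable name API_URL is an assumption; wire it into your front-end's configuration however your framework expects):
# outside Docker, pointing the dev server at the containerized API
API_URL=http://localhost:3000 yarn run dev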
This approach also means you do not need to bind-mount your application's code into the container, and I'd recommend removing those volumes: blocks.
As a bit of context, I am fairly new to Docker and Docker Compose, and until recently I'd never even heard of Docker Swarm. I shouldn't be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would normally run this setup with docker-compose up, and that was the end of my concerns: everything ran together on localhost. But with this new setup I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose script even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing, since I don't have much knowledge of Docker to begin with...
Thanks in advance!
First, you need to learn what Docker Swarm is and how it works:
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
To answer your questions briefly:
How do I go about running everything from the same place?
You can use the docker stack deploy command to deploy a set of services. And yes, you run it from one host machine; you don't have to run it on the different machines. The machine you run it from is called the manager (master) node.
The good news is that you can still use your docker-compose file, perhaps with slight modifications.
So to summarize, the steps you need to take are the following:
1. Install Docker Swarm (1 manager and 1 worker, as you have only 2 machines).
2. Make sure it's working fine (communication between the nodes).
3. Prepare your docker-compose file and deploy your stack from the manager node, as sketched below.
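A minimal sketch of those steps (the stack name mystack, the placeholder token, and the hostname db-machine are assumptions; adjust to your machines):
# on machine 1: create the swarm; this machine becomes the manager
docker swarm init
# on machine 2: join as a worker, using the token printed by the init command
docker swarm join --token <worker-token> <manager-ip>:2377
# back on the manager: deploy all services from the compose file in one command
docker stack deploy -c docker-compose.yml mystack
To pin a service to a specific machine (your second question), you can add a placement constraint under a deploy: key in the compose file, for example:
  postgres:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.hostname == db-machine  # assumption: the hostname of your database machine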
I have a docker-compose LAMP stack comprising three services: a web server, PHP, and MySQL.
The apache2 webroot inside the container is shared with my local machine using a volume, like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions to the required install directories.
The Apache container uses the 2.4.35 Alpine image.
I've built my docker-compose file according to this tutorial:
https://medium.com/#thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this for the past 2 days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is this complicated.
Edit:
I've also noticed something interesting: when I open a bash shell inside the Apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.
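For illustration, the mismatch can be confirmed by comparing the numeric IDs on both sides (the service name apache is taken from the compose file above):
# on the host: your own UID/GID and the ownership of the shared directory
id -u && id -g
ls -ln public_html
# inside the container: the user Apache runs as
docker-compose exec apache id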
I am new to Docker, so this may seem very basic to you; anyway, it's freaking me out at the moment.
I decided to develop a new web project on top of containers, and of course I thought about Docker. After finishing the tutorial and reading some Dockerfiles and so on, I decided to go with docker-compose.
I want to have multiple compose files: one for development, one for production, and so on. I have managed to orchestrate a basic PHP/MySQL/Redis application using 3 different services. The main application is PHP-based and maintained in the project src. MySQL and Redis are simply configured with base images and do not require any business logic.
I can build the containers and bring them up with
build:
docker-compose -f compose-Development.yml build
up:
docker-compose -f compose-Development.yml up
Many files in the main application container are built by gulp (templates, CSS, etc.), and code will exist in both JavaScript and PHP.
I noticed that my app's state does not change when I change my files. I would have to rebuild and restart my containers.
Having some experience with Vagrant, I would go for some kind of shared source directory during development. But how would I achieve that?
My application Dockerfile (for development) looks like this:
FROM webdevops/php-nginx:7.1
COPY ./ /app
COPY docker/etc/ /opt/docker/etc
# php config...
RUN ln -sf /opt/docker/etc/php/php.Development.ini /opt/docker/etc/php/php.ini
WORKDIR /app/
EXPOSE 80
The compose file:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile.Development
links:
- mysql
- redis
volumes:
- ./data/fileadmin:/app/public/fileadmin
- ./data/uploads:/app/public/uploads
env_file:
- docker/env/All.yml
- docker/env/Development.yml
ports:
- "80:80"
restart: always
# Mysql Container
mysql:
build:
context: docker/mysql/
dockerfile: Dockerfile
restart: always
volumes:
- mysql:/var/lib/mysql
env_file:
- docker/env/All.yml
- docker/env/Development.yml
# Cache Backend Container
redis:
build:
context: docker/redis/
dockerfile: Dockerfile
ports:
- "6379:6379"
volumes:
- redis:/data
env_file:
- docker/env/All.yml
- docker/env/Development.yml
restart: always
volumes:
mysql:
redis:
So far, I have used some GitHub repositories to copy chunks from. I know there might be other problems in my setup as well; for the moment, the most blocking issue is this thing with the linked/copied source.
Kind regards,
Philipp
The idea of "Development/Production parity" confuses many on this front. This doesn't mean that you can simply have a single configuration and it will work across everything; it means you'll have much closer parity and that you can create an environment that resembles something very close to what you'll have in production.
What's wrong here is that you're currently building your image as if it were ready to ship out: it has your code baked in, you have volumes set aside for uploads, etc. Awesome!
Unfortunately, this setup is not right for development. If you want to edit code on the fly, you need to attach your local working directory to the container as a volume as well. This would not be done in production; so it's very close, but not exactly the same setup.
Add the following to the app service's volumes section of your compose file and you should be good to go:
- .:/app
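The resulting volumes section of the app service would then look like this (the two existing entries are taken from your file above):
    volumes:
      - .:/app
      - ./data/fileadmin:/app/public/fileadmin
      - ./data/uploads:/app/public/uploads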
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that in each app I am spinning up a separate DB instance, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds a container port to a host port, which is what causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files, as sketched below. This way, docker-compose can bring both up, and you can access both apps at the same time. But remember that the two apps will be placed on separate network bridges.
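For example, app2's mssql service with the host port mapping removed (contents otherwise taken from your file above) would look like:
  mssql:
    image: 'microsoft/mssql-server-linux'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data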
How docker-compose up works:
Suppose your app is in a directory called myapp, and that directory contains your docker-compose.yml.
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
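You can see this for yourself (the folder names are the ones from the example above):
cd myapp && docker-compose up -d      # creates network myapp_default
cd ../myapp2 && docker-compose up -d  # creates network myapp2_default
docker network ls                     # lists both networks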
Your current configuration creates two volumes, two database containers, and two apps. If you make them run on the same network and run the database as a single container, it will work. I don't think you really want two database containers and two volumes.
Approach 1:
A single docker-compose.yml for everything:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
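With this single file, both apps reach the database over the shared default network using the service name as the hostname, so a connection string would point at mssql,1433 rather than localhost (exact syntax depends on your SQL Server client library):
docker-compose up -d   # brings up app1, app2 and mssql together
# app1 is then on http://localhost:3030, app2 on http://localhost:3032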
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share a network.
docker-compose.yml for the database, with a network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app in its own docker-compose file.
There are many other options for wiring compose files together; see the Compose documentation on configuring the default network and using a pre-existing network, and the sketch below.
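A minimal way to wire Approach 2 together (the file names are assumptions; the key point is that every compose file must reference the same external network):
# create the shared network once, up front
docker network create my-pre-existing-network
# then bring up each stack separately
docker-compose -f docker-compose.db.yml up -d
docker-compose -f docker-compose.app1.yml up -d
docker-compose -f docker-compose.app2.yml up -d
Note that the database compose file above declares its own test_network; for this to work, it would likewise need to point its network at my-pre-existing-network as an external network.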
You're exposing the same port (1433) twice to the host machine (this is what ports: does). That is not possible, as it would bind the same port on your host twice; that's what the error message says.
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this, your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your compose files is that both applications are exposed on the same host port. This is not possible, for the same reason. I would suggest that you change one of them to "3001:3000", so you can reach that application on port 3001.
I want to try Docker for my web site. I use PHP, nginx, and MySQL. I've configured Docker and run my web site locally. Now I want to publish it to production.
I have a few differences between the development and production versions:
I need to be able to connect to MySQL inside the container in development mode (for debugging), but in production MySQL must be isolated from the outside for security.
I want to open my web site at the address app.dev and use the nginx-proxy image on my development machine, but in production I will not use nginx-proxy, to increase performance.
Can I run Docker with one docker-compose.yml file?
Or should I create two versions of the docker-compose file, for development and production? But in that case I lose an advantage of Docker: the same environment everywhere. If I change docker-compose-dev.yml, I need to remember to change docker-compose-prod.yml.
My docker-compose.yml:
version: '2'
services:
  app:
    build: .
    volumes:
      - ./app:/app
    container_name: app
  app_nginx:
    image: nginx
    ports:
      - "8080:80"
    container_name: app_nginx
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./app:/app
    environment:
      - VIRTUAL_HOST=app.dev
  app_db:
    image: mysql:5.7
    volumes:
      - "./data/db:/var/lib/mysql"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: "app_db"
    container_name: app_db
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
You can achieve this with environment-variable-based configuration.
Usually, different environments (i.e. staging and production) differ only in configuration: the database to connect to, the external services to call, their endpoints and credentials.
Instead of hard-coding all such configuration, read it from environment variables. That way you can use the same docker-compose file with different environment variables for your staging and production environments.
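A minimal sketch of the idea, based on your compose file above (the variable names and .env contents are assumptions): Compose substitutes ${...} references from the environment or from a .env file in the project directory, so you keep one compose file and swap the values per environment.
# fragment of docker-compose.yml using variable substitution
  app_nginx:
    image: nginx
    environment:
      - VIRTUAL_HOST=${VIRTUAL_HOST}
  app_db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: "app_db"
# .env for development (a hypothetical example)
VIRTUAL_HOST=app.dev
MYSQL_ROOT_PASSWORD=devpass
Copy the appropriate file to .env (or export the variables) before running docker-compose up.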
You can also explore Rancher by Rancher Labs at http://rancher.com/ to manage your environments.