I am very new to Kubernetes and have never actually used it, but I have familiarized myself with the concepts of nodes and pods. I know that minikube is a local Kubernetes engine for debugging and that I should interact with any Kubernetes cluster via the kubectl tool. Now my questions are:
Does launching the same configuration on my local minikube instance and on a production AWS (or other cloud) instance guarantee that the result will be identical?
How do I set up continuous deployment for my project? Right now I have CI configured to push images of tested code to Docker Hub with the :latest tag, but I want them to be deployed automatically as a rolling update without interrupting uptime.
It would be great to get correct configurations, with the steps I should perform, that work on any cluster. I don't want to keep the docker-compose notation and use kompose; I want to do it properly in the Kubernetes context.
My current docker-compose.yml is (the django and react services are available on Docker Hub now):
version: "3.5"
services:
nginx:
build:
context: .
dockerfile: Dockerfile.nginx
restart: always
command: bash -c "service nginx start && tail -f /dev/null"
ports:
- 80:80
- 443:443
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
- ./certs:/etc/letsencrypt/
- ./misc/ssl/server.crt:/etc/ssl/certs/server.crt
- ./misc/ssl/server.key:/etc/ssl/private/server.key
- ./misc/conf/nginx.conf:/etc/nginx/nginx.conf:ro
- ./misc/conf/passports.htaccess:/etc/passports.htaccess:ro
depends_on:
- react
redis:
restart: always
image: redis:latest
privileged: true
command: redis-server
celery:
build:
context: backend
command: bash -c "celery -A project worker -B -l info"
env_file:
- ./misc/.env
depends_on:
- redis
django:
build:
context: backend
command: bash -c "/code/manage.py collectstatic --no-input && echo donecollectstatic && /code/manage.py migrate && bash /code/run/daphne.sh"
volumes:
- /mnt/wts_new_data_volume/static:/data/django/static
- /mnt/wts_new_data_volume/media:/data/django/media
env_file:
- ./misc/.env
depends_on:
- redis
react:
build:
context: frontend
depends_on:
- django
The short answer is yes, you can replicate what you have with docker-compose in Kubernetes.
It depends on your infrastructure. For example, if you have an external load balancer in your AWS deployment, it won't behave the same way locally.
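To make that difference concrete, here is a minimal sketch (the names are illustrative, not taken from the question) of a Service of type LoadBalancer: on AWS it provisions an external load balancer through the cloud integration, while on minikube it stays in Pending unless you run minikube tunnel or fall back to a NodePort.

apiVersion: v1
kind: Service
metadata:
  name: nginx              # hypothetical name matching the nginx service above
spec:
  type: LoadBalancer       # provisions an ELB on AWS; on minikube use `minikube tunnel` or a NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
    - port: 443
      targetPort: 443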
You can do rolling updates (this typically works with stateless services). You can also take advantage of a GitOps type of approach.
The docker-compose notation is different from Kubernetes, so yes, you'll have to translate it into Kubernetes objects: Pods, Deployments, Secrets, ConfigMaps, Volumes, etc. For the most part the basic objects will work on any cluster, but there will always be some objects tied to the physical characteristics of your cluster (e.g. storage volumes, load balancers). The Kubernetes docs are very comprehensive and super helpful.
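As a rough illustration of that translation, and of the rolling update mentioned above, a Deployment for the django service might look like the sketch below; the image name, labels and Secret name are assumptions you would adapt to your setup.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 2
  selector:
    matchLabels:
      app: django
  strategy:
    type: RollingUpdate          # replace pods gradually so uptime is not interrupted
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: youruser/django:latest     # hypothetical image pushed by your CI
          imagePullPolicy: Always
          envFrom:
            - secretRef:
                name: django-env            # hypothetical Secret holding the contents of misc/.env
          ports:
            - containerPort: 8000           # assumed daphne port

With imagePullPolicy: Always, a kubectl rollout restart deployment/django (or a GitOps tool watching the image) re-pulls :latest and rolls the pods out without downtime.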
As a bit of context, I am fairly new to Docker and docker-compose, and until recently I had never even heard of Docker Swarm. I should not be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would naturally run this file with docker-compose up and that was the end of my concerns: everything ran together on localhost. But now, with this two-machine setup, I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose file even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing, since I don't have much knowledge of Docker to begin with...
Thanks in advance!
You first need to learn what Docker Swarm is and how it works:
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
To answer your questions briefly:
"How do I go about running everything from the same place?"
You can use the docker stack deploy command to deploy a set of services, and yes, you run it from one host machine; you don't have to run it on the different machines. The machine you run it from is called the manager (master) node.
The good news is that you can still use your docker-compose file, perhaps with slight modifications.
So, to summarize, the steps you need to take are the following (a minimal sketch of the commands follows after the list):
1. initialize a Docker swarm (1 manager and 1 worker, as you have only 2 machines)
2. make sure it's working fine (communication between the nodes)
3. prepare your docker-compose file and deploy your stack from the manager node
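A minimal sketch of those steps, assuming the Express/Redis machine acts as the manager and the Postgres machine as the worker (the IPs, tokens, hostnames and the role label are placeholders, not values from the question):

# on the machine chosen as manager (e.g. the Express/Redis host)
docker swarm init --advertise-addr <MANAGER-IP>

# on the second machine, paste the join command that `swarm init` printed
docker swarm join --token <TOKEN> <MANAGER-IP>:2377

# back on the manager: label the nodes so services can be pinned to a machine
docker node update --label-add role=app <manager-hostname>
docker node update --label-add role=db  <worker-hostname>

# deploy the whole stack from the manager
docker stack deploy -c docker-compose.yaml mystack

To control which machine each service lands on, swarm mode reads a deploy: section in the compose file (plain docker-compose up ignores it), for example a placement constraint under the postgres service:

  postgres:
    # ...existing configuration...
    deploy:
      placement:
        constraints:
          - node.labels.role == db   # hypothetical label added with `docker node update`

Note that docker stack deploy ignores build: and container_name:, so the server image has to be built beforehand and pushed to a registry that both nodes can pull from.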
I want to practice using docker-compose. I have a tournament happening over the weekend and I want to set up 10 copies of the same web app on ONE server, with URLs like:
http://team1.example.com
http://team2.example.com
etc...
http://team10.example.com
There will be 10 teams in the tournament, and they will all go to their respective URL http://team<your team number>.example.com via web browser, save some information to a database, and maybe even modify the code on the actual server.
So I built a simple Node.js app that simply writes data to a Mongo database. Then I decided to set up two websites, http://team1.example.com and http://team2.example.com, so I made this docker-compose file:
version: '3'
services:
  api1:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database1
    ports:
      - 80:3000
    networks:
      - net1
  db1:
    image: mongo:4.0.3
    container_name: database1
    networks:
      - net1
  api2:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database2
    ports:
      - 81:3000
    networks:
      - net2
  db2:
    image: mongo:4.0.3
    container_name: database2
    networks:
      - net2
networks:
  net1:
  net2:
Then I installed the Apache web server to reverse proxy team 1 to port 80 and team 2 to port 81. This all works fine.
To set up the remaining teams 3 to 10, I have to duplicate the entries in my docker-compose.yml file and duplicate the virtual host entries in Apache.
My question: Is there a docker command that will let me clone each docker stack (team 1, team 2, etc...) more easily, without all this data entry? Do I need Kubernetes to do this?
Kubernetes would make this much easier to set up. It can take care of the reverse proxy setup too if you install the NGINX ingress controller.
You could create a single Kubernetes manifest containing:
a mongodb deployment, service, persistent volume claim
a nodejs deployment, service
You can then apply this 10 times, each time using a different namespace:
kubectl apply -n team01 -f manifest.yaml
kubectl apply -n team02 -f manifest.yaml
kubectl apply -n team03 -f manifest.yaml
...
Of course, you would need 10 different ingress rules because you want 10 different domains, but that would be the only thing you need to copy-paste.
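As a rough sketch of such an ingress rule (assuming the NGINX ingress controller is installed, the namespace already exists, e.g. via kubectl create namespace team01, and the Node.js Service is named web on port 3000; all of these names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-ingress
  namespace: team01
spec:
  ingressClassName: nginx
  rules:
    - host: team1.example.com          # only the host and namespace change per team
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web              # hypothetical Service name for the Node.js app
                port:
                  number: 3000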
I figured it out. Docker has features called swarm and stack. First, I simplified my docker-compose.yml file to just this:
version: '3'
services:
  api:
    image: dockerjohn/tournament:latest
    environment:
      - DB=$DB
    ports:
      - $WEB_PORT:3000
    networks:
      - mynet
  db:
    image: mongo:4.0.3
    networks:
      - mynet
networks:
  mynet:
Then I ran these commands from the same folder as my docker-compose file:
docker swarm init
DB=team1_db WEB_PORT=81 docker stack deploy -c docker-compose.yml team1
DB=team2_db WEB_PORT=82 docker stack deploy -c docker-compose.yml team2
DB=team3_db WEB_PORT=83 docker stack deploy -c docker-compose.yml team3
DB=team4_db WEB_PORT=84 docker stack deploy -c docker-compose.yml team4
DB=team5_db WEB_PORT=85 docker stack deploy -c docker-compose.yml team5
etc...
You have to structure the DB env variable as <stack name given at the end of the docker stack deploy command>_<service name in the docker-compose.yml file>.
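If you want to avoid typing ten nearly identical commands, a small shell loop does the same thing (a sketch; ports 81-90 simply follow the pattern above):

# deploy one stack per team, mapping team N to host port 80+N
for i in $(seq 1 10); do
  DB="team${i}_db" WEB_PORT=$((80 + i)) docker stack deploy -c docker-compose.yml "team${i}"
done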
Now I just need to find a way to simplify my Apache setup so I don't have to duplicate so many vhost entries. I've heard there's a reverse proxy called Traefik, available as a Docker image, which can do this. Maybe I'll try that out and update my answer afterwards.
I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then, in new terminal windows, run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the ID of its Docker container:
Connect to the 1st Node.js application service on tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term, as the container ID changes every time I bring docker-compose up and down. Is there a standard way to do networking when you start multiple copies of the same container from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
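For example (a sketch, assuming the application reads a PEER_URL environment variable, which is a made-up name not present in the original setup), rocket2 can be pointed at rocket1 through its service name:

  rocket2:
    build: .
    depends_on:
      - mongodb
    environment:
      # Compose's internal DNS resolves the service name "rocket1" to that container
      - PEER_URL=tcp://rocket1:30300   # hypothetical variable the app would read
    ports:
      - "30302:30300"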
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
I am making a web app with Docker. I need to run code submitted by public users (not safe at all), and I use Redis to push some data between my containers (using a socket shared in a named volume).
How can I prevent the unsafe containers from sending data to Redis?
My docker-compose.yml:
version: "3"
services:
redis:
image: redis:latest
command: >
sh -c "chown redis /tmp/redis/
redis-server /usr/local/etc/redis/redis.conf"
volumes:
- redis_socket:/tmp/redis
unsafe_container:
build:
context: ./docker
dockerfile: Dockerfile
command: python unsafe.py
volumes:
- redis_socket:/tmp/redis:ro
links:
- redis
depends_on:
- redis
data_in:
build:
context: ./docker
dockerfile: Dockerfile-data
command: >
sh -c "python3 /code/manage.py wait_db
python3 /code/manage.py start_dc"
volumes:
- redis_socket:/tmp/redis
links:
- redis
depends_on:
- redis
volumes:
redis_socket:
If I make a Redis replica (slave), can I set it to accept only reads from every connection except the Redis master?
Thanks
EDIT: After some tests, a replica is read-only by default, but I can't connect the replica to the master using a socket, and I can't find anything about this feature/issue in the docs.
Redis replicas are primarily designed to allow high availability, and as such, connect over the network. They do not support UDS connections to the master.
You can, however, use socat to expose a Unix socket as TCP (for example https://serverfault.com/questions/517906/how-to-expose-a-unix-domain-socket-directly-over-tcp).
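A minimal sketch of that socat bridge, assuming the Redis socket file is /tmp/redis/redis.sock (the actual path comes from your redis.conf, so adjust it):

# listen on TCP 6379 and forward each connection to the Unix socket
socat TCP-LISTEN:6379,fork,reuseaddr UNIX-CONNECT:/tmp/redis/redis.sock

You could run this as a small sidecar container that mounts the redis_socket volume and point the replica's replicaof setting at that sidecar's TCP port, while the unsafe container keeps only its read-only mount of the socket.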
I'm using docker-compose to deploy to a remote host. This is what my config looks like:
# stacks/web.yml
version: '2'
services:
  postgres:
    image: postgres:9.6
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
  redis:
    image: redis:3.2.3
    restart: always
  web_server:
    depends_on: [postgres]
    build: ../sources/myapp
    links: [postgres]
    restart: always
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/source/public
  sidekiq:
    depends_on: [postgres, redis]
    build: ../sources/myapp
    links: [postgres, redis]
    restart: always
    volumes:
      - static_assets:/source/public
  nginx:
    depends_on: [web_server]
    build: ../sources/nginx
    ports:
      - "80:80"
    volumes:
      - nginx_socks:/tmp/socks
      - static_assets:/public
    restart: always
volumes:
  db:
  nginx_socks:
  static_assets:
# stacks/web.production.yml
version: '2'
services:
  web_server:
    command: bundle exec puma -e production -b unix:///tmp/socks/puma.production.sock
    env_file: ../env/production.env
  sidekiq:
    command: bundle exec sidekiq -e production -c 2 -q default -q carrierwave
    env_file: ../env/production.env
  nginx:
    build:
      args:
        ENV_NAME: production
        DOMAIN: production.yavende.com
I deploy using:
eval $(docker-machine env myapp-production)
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production build --no-deps web_server sidekiq
docker-compose -f stacks/web.yml -f stacks/web.production.yml -p myapp_production up -d
Although this works perfectly locally, and I did a couple of successful deploys in the past with this method, it now hangs when building the web_server service and finally shows a timeout error, as I describe in this issue.
I think the problem comes from the combination of my slow connection (Argentina -> DigitalOcean servers in the USA) and me trying to build images and push them instead of using hub-hosted images.
I've been able to deploy by cloning my compose config onto the server and running docker-compose directly there.
The question is: is there a better way to automate this process? Is it good practice to use docker-compose to build images on the fly?
I've been thinking about automating this process of cloning the sources onto the server and running docker-compose there, but there may be better tooling for this.
I was building images remotely. This implies pushing the whole source needed to build the image over the network. For some images that was over 400 MB of data sent from Argentina to virtual servers in the USA, which proved terribly slow.
The solution is to totally change the approach to dockerizing my stack:
Instead of building images on the fly using Dockerfile ARGs, I modified my source apps and their Docker images to accept options via environment variables at runtime.
I used Docker Hub automated builds, integrated with GitHub.
This means I only push changes, not the whole source, via git. Then Docker Hub builds the image.
Then I docker-compose pull and docker-compose up -d the site.
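As a rough sketch of what that change looks like in the production override (the image and repository names are assumptions, not the project's actual ones, and the build: entries in the base file would be dropped in favour of image: as well):

# stacks/web.production.yml (sketch): pull a pre-built image instead of building remotely
version: '2'
services:
  web_server:
    image: myuser/myapp:latest        # hypothetical image built automatically by Docker Hub
    command: bundle exec puma -e production -b unix:///tmp/socks/puma.production.sock
    env_file: ../env/production.env   # runtime options come from env vars, not build ARGs

After pushing to GitHub and letting Docker Hub build, the deploy reduces to docker-compose pull followed by docker-compose up -d against the remote host, with no build context sent over the slow link.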
Free alternatives are running your own self-hosted Docker registry and/or possibly GitLab, since it recently released its own Docker image registry: https://about.gitlab.com/2016/05/23/gitlab-container-registry/.