I'd like to make sure that our frontend developers have access to the latest version of the backend web app and can update it whenever they want, so as to avoid incompatibilities with the API, which is also under development.
I have created a docker-compose.yml file containing two services: one for the backend web application, built with a custom Dockerfile, and a generic postgres image for the database. It all works fine.
I have already published the backend web app image to my private Docker registry (powered by Nexus Repository Manager) using the docker-compose push command.
Now I would like to make my docker-compose.yml available somehow, so that all the frontend devs need to do is run it with a simple command.
Is there a way to publish docker-compose.yml to a Docker registry so I can avoid sharing backend sources with the frontend devs?
The traditional solution for sharing docker-compose.yml files has been version control (e.g. GitHub).
Recently, Docker has been working on docker-app, which allows you to share docker-compose.yml files using a registry server. It is a separate install and currently experimental, so I wouldn't base a production environment on it, but it may be useful for developers to try out. You can check out the project here:
https://github.com/docker/app
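For completeness, here's a minimal sketch of the version-control approach, assuming the Compose file lives in a hypothetical repo and references the image already pushed to the Nexus registry; the frontend devs never see the backend sources, only the Compose file and the prebuilt image:

    # One-time: fetch the Compose file (repo URL is hypothetical)
    git clone https://github.com/example/backend-compose.git
    cd backend-compose

    # Whenever a new backend build is wanted: authenticate against the
    # private Nexus registry, pull the latest images, and (re)start
    docker login registry.example.com
    docker-compose pull
    docker-compose up -d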
I have been developing a web application using Python, Flask, Docker (Compose), and git/GitHub, and I'm getting to the point where I'm trying to figure out the best workflow to bring it to production. I have read some articles, but I'm not sure what the best practice is among the different approaches.
My current setup is purely development oriented:
Local Docker using docker-compose to build various service images (such as db, backend workers, webapp (Flask & uWSGI), nginx)
Using a .env file for docker-compose to pass configuration to the services
Source code is bind-mounted from the local Docker host
DB data is stored in a named volume
Using local git for source control (I have connected it to a GitHub repository, but I haven't used it much since I'm currently the only developer on the application)
From what I understand the steps to production could be the following:
Implement a docker-compose override to distinguish between dev and prod (see the sketch after this list)
Implement Dockerfile multi-stage builds to create prod images that include the source code and exclude dev dependencies
Tag and push the production images to a registry (Docker Hub, Google?), or is it better to push the source to GitHub and build there?
[do security scans of the prod images]
Deploy/pull the prod images from the registry (or build from GitHub) on a service like GKE, for instance
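To make the override step concrete, here's a minimal sketch written as shell heredocs; the file contents, image names, and registry are assumptions, not your actual setup. docker-compose automatically merges docker-compose.override.yml over docker-compose.yml in dev, while prod passes its file list explicitly:

    # docker-compose.override.yml is picked up automatically in dev:
    # bind-mount the source and enable debug settings
    cat > docker-compose.override.yml <<'EOF'
    version: "3"
    services:
      webapp:
        build: .
        volumes:
          - ./src:/app            # dev-only bind mount
        environment:
          - FLASK_ENV=development
    EOF

    # docker-compose.prod.yml uses a prebuilt image and no bind mounts
    cat > docker-compose.prod.yml <<'EOF'
    version: "3"
    services:
      webapp:
        image: gcr.io/my-project/webapp:1.0
    EOF

    # Dev (override merged in automatically):
    docker-compose up -d

    # Build, tag, and push the prod image (names are hypothetical):
    docker build -t gcr.io/my-project/webapp:1.0 .
    docker push gcr.io/my-project/webapp:1.0

    # Prod (only the explicitly listed files are used):
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d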
Is this a common way to do it? Am I missing something?
How would I best go about using an integration/staging environment between dev and prod, so that I can first test new prod builds or debug prod images in integration?
Does GKE, for instance, offer an easy way to set up an integration environment? Or could I use the Docker installation on my NAS for that?
Any best practices for backing up production (most importantly, the db data)?
Thanks in advance!
I have a working webpage.
It is hosted on an RPi.
The backend uses Flask and SQLite.
Python runs in a venv, and the server is nginx.
These are connected with uWSGI.
The source code is on GitHub.
I have heard that Docker can add an extra layer of security.
Is it possible to move this project into a Docker container (without breaking functionality) after the page is already up and running?
If so, what changes need to be made?
Yes, it's perfectly possible. And even though you should be able to host it on your RPi, I don't think it's worth the effort.
What you will need:
Install Docker on the RPi
Write a Dockerfile containing the instructions to set up and run the application (a minimal sketch follows)
Build the image from your Dockerfile
Run the image in Docker on your RPi
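As a rough sketch of steps 2–4, assuming the Flask app's entry module is app.py with a callable named app, and that uwsgi is listed in requirements.txt (both assumptions):

    # Write a minimal Dockerfile
    cat > Dockerfile <<'EOF'
    FROM python:3-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # Serve the Flask app through uWSGI, as in the existing setup
    CMD ["uwsgi", "--http", "0.0.0.0:8000", "--module", "app:app"]
    EOF

    # Build and run it on the RPi
    docker build -t mypage .
    docker run -d --name mypage -p 8000:8000 mypage

Note that the SQLite file should live on a volume (e.g. docker run -v /home/pi/data:/app/data ...) so the data survives container rebuilds.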
I am currently working on a project which needs to be deployed on customer infrastructure (which is not cloud), and it will not have internet access.
We currently deploy our application manually and install dependencies using tarballs; can Docker help us here?
Note:
Application stack:
NodeJs
MySql
Elasticsearch
Redis
MongoDB
We will not have internet.
You can use docker save to export Docker images in TAR format and docker load to import them again. If you package your application files within these images, this can be used to deliver your project to your customers.
Also note that the destination hosts must all have Docker Engine installed and running.
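A minimal sketch of that offline workflow; the application image name and the version tags are assumptions:

    # On a machine with internet access: bundle all required images into one tarball
    docker save -o project-images.tar \
        myapp:1.0 mysql:8.0 elasticsearch:7.17.0 redis:7 mongo:6

    # Ship project-images.tar to the customer (USB drive, etc.), then on their host:
    docker load -i project-images.tar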
If you have control over your dev environment, you can also use Nexus or GitLab as your private Docker registry. You can then pull your images from there into production, if that makes sense for your product.
I think the biggest advantage is in your local dev setup. Instead of installing, say, MySQL locally, you can run it as a Docker container. I use docker-compose for all client services in my current project. This keeps your computer clean, makes it easy to avoid version hell (if you use different versions for each release or stage), and you don't have to mess around with configuration on each dev machine.
In my previous job every developer had a local Oracle SQL install, and that was not a happy state of affairs.
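For example, a throwaway MySQL for local development is a one-liner (credentials here are placeholders):

    # Run MySQL in a container instead of installing it locally
    docker run -d --name dev-mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=myapp \
        -p 3306:3306 \
        mysql:8.0

    # Remove it when you're done; the host machine stays clean
    docker rm -f dev-mysql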
I am pretty new to Docker. After reading up on what I needed, I figured out how to create a pretty nice Docker setup, in which I can start up multiple systems using one docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up the different containers, most referring to the local Dockerfiles and some referring to online image names. When I run docker-compose up, all containers start as expected with the configured network setup, and I'm able to use it as desired.
I would first of all like to know what this setup is called. Is this a "docker swarm", or is such a setup called something else?
Secondly, I'd like to make one "compiled/combined" artifact (image, container, swarm, engine, machine, or whatever it is called) of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will keep working as long as all the referenced external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish the built images to a Docker registry. You can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry's IP/DNS in docker-compose.yml. This way, you can deploy it anywhere docker-compose is installed (and docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
docker-machine is a tool for deploying to multiple machines, as is Docker Swarm.
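A sketch of that workflow, with a hypothetical registry address and one of the services from the question:

    # Tag the locally built image with the registry address and push it
    docker tag php70 registry.example.com:5000/php70:latest
    docker push registry.example.com:5000/php70:latest

    # In docker-compose.yml, reference the registry-qualified name:
    #   services:
    #     php70:
    #       image: registry.example.com:5000/php70:latest

    # On any other machine, only docker-compose.yml is needed:
    docker-compose pull && docker-compose up -d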
I created an app which consists of many components, so I use docker-compose.
I published all my images to my private repository (but I also use public repos from other providers).
If I have many customers, how can they receive my full app?
I could send them my docker-compose.yml file by email, or, if I have access to the servers, I could scp the .yml file.
But is there another solution to provide my full app without scp'ing a yml file?
Edit:
So I just read about docker-machine. This looks good, and I have already linked it with an Azure subscription.
Now, what's the easiest way to deploy a new VM with my Docker application? Do I still have to scp my .yml file, ssh into the machine, and start docker-compose? Or can I tell it to use a specific .yml during VM creation and run it automatically?
There is no official distribution system specifically for Compose files, but there are many options.
The easiest option would be to host the Compose file on a website. You could even use GitHub or GitHub Pages. Once it is hosted by an HTTP server, you can curl it to download it.
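A sketch of that flow, with a hypothetical URL:

    # Download the Compose file and start the app
    curl -fsSL -o docker-compose.yml https://example.com/myapp/docker-compose.yml
    docker-compose pull
    docker-compose up -d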
There is also:
composehub, a community project that acts as a package manager for Compose files
Some related issues: #1597, #3098, #1818
The experimental DAB (Distributed Application Bundle) feature in Docker
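On the edited question about docker-machine and Azure: as far as I know there's no built-in way to hand a Compose file to the VM at creation time, but you don't need scp/ssh either. docker-machine can point your local Docker client at the remote engine, so docker-compose runs locally against the new VM. A sketch (subscription ID and names are placeholders):

    # Create the VM on Azure
    docker-machine create --driver azure \
        --azure-subscription-id <your-subscription-id> \
        myapp-vm

    # Point the local Docker client at the remote engine
    eval $(docker-machine env myapp-vm)

    # Your local docker-compose.yml now deploys to the VM
    docker-compose pull
    docker-compose up -d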