OS: Amazon Linux (hosted on AWS)
Docker version: 17.x
Tools: Ansible, Docker
Our developers use Ansible to spin up individual AWS spot environments. These get populated with Docker images that are built on their local machines, pushed into a Docker registry created on the AWS spot instance, then pulled down and run.
When the devs do this locally on their MacBooks, Ansible orchestrates building the code with sbt, spins up an AWS spot instance, runs a Docker registry, pushes the image into that registry, commands the instance to pull down the image and run it, runs a test suite, etc.
To make things easier for non-devs to run individual test environments, we put the Ansible script behind Jenkins and use the Jenkins username to let Ansible create a domain name in Route53 that points to the user's temporary spot instance environment.
This all works great without the registry -- i.e. using JFrog Artifactory so these dynamic environments just pull pre-built images. It lets QA team members spin up any version of the environment they want. But to allow it to build code and push, I need an insecure registry, and that is where things fall apart...
Since any user can run this, the Route53 domain name is dynamic, which means I cannot just hardcode the --insecure-registry entry in daemon.json. I have tried to find a way to set a wildcard registry, but it didn't seem to work for me. Also, since this is a shared build server (the one running the Ansible commands), I don't want to keep adding entries and restarting Docker because other things might be running.
So, to summarize the questions:
Is there a way to use a wildcard for the insecure-registry entry?
How can I get Docker to recognize a new insecure-registry entry without restarting the Docker daemon?
So far I've found this solution to satisfy my needs, but I'm not 100% happy yet and will keep working on it. It doesn't handle the first question about a wildcard, but it does answer the second question about reloading without a restart.
The first problem was that I was editing the wrong file. Docker doesn't respect /etc/sysconfig/docker, nor does it respect $HOME/.docker/daemon.json. The only file that works on Amazon Linux for me is /etc/docker/daemon.json, so I manually edited it, then tested a reload and verified with docker info. I still need to work out how to programmatically insert entries as needed, but the manual test works:
sudo vim /etc/docker/daemon.json
sudo systemctl reload docker.service
docker info
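A rough sketch of how the programmatic update could look, assuming jq is installed on the build server and /etc/docker/daemon.json already exists; the registry hostname and the CIDR below are hypothetical placeholders, not part of the setup above. Note that insecure-registries also accepts CIDR notation, so if the spot instances are always addressed by IP inside a known VPC range, a single CIDR entry can stand in for a hostname wildcard:

# Hypothetical dynamic Route53 name for this user's spot environment
REGISTRY_HOST="qa-alice.example.com:5000"
# Append the entry (or a VPC CIDR such as 10.0.0.0/16) if it is not already present
sudo jq --arg reg "$REGISTRY_HOST" \
  '.["insecure-registries"] = ((.["insecure-registries"] // []) + [$reg] | unique)' \
  /etc/docker/daemon.json | sudo tee /etc/docker/daemon.json.new > /dev/null
sudo mv /etc/docker/daemon.json.new /etc/docker/daemon.json
# A reload (SIGHUP) picks up insecure-registries without killing running containers
sudo systemctl reload docker.service
docker info | grep -A 5 'Insecure Registries'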
The newer docker compose (vs docker-compose) allows you to set secrets in the build section. This is nice because if you only use secrets at runtime, the file is readable by anyone who can get into the container by reading /run/secrets/<my_secret>.
Unfortunately, it appears that it's only possible to pass the secrets in via either the environment or a file. Doing it via the environment doesn't seem like a great idea, because someone on the box could read /proc/<pid>/environ while the image is being built and snag the secret. Doing it via a file on disk isn't good either, because then the secret is stored on disk unencrypted.
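For reference, the build-section wiring being discussed looks roughly like this; it is a sketch only, the service name, secret name, and file paths are hypothetical, and the file/environment source is exactly the limitation complained about above. The secret is mounted only for the RUN step and is not baked into an image layer:

# Sketch: a build-section secret in compose (names and paths are hypothetical)
cat > docker-compose.yml <<'EOF'
services:
  app:
    build:
      context: .
      secrets:
        - my_secret            # available to the build, not baked into the image
secrets:
  my_secret:
    file: ./my_secret.txt      # or: environment: MY_SECRET
EOF

# The Dockerfile consumes it with a secret mount that exists only for this RUN step
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN --mount=type=secret,id=my_secret \
    cat /run/secrets/my_secret | wc -c   # placeholder for whatever needs the secret
EOF

docker compose build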
It seems like the best way to do this would be with something like
docker swarm init
read -sp "Enter your secret: " REPLY; printf '%s' "$REPLY" | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read build-time secrets from the swarm, for some unknown reason.
What is the best way to do this? This seems like a slight oversight, along the lines of docker secret create not having a way to prompt for the value, forcing you to resort to hacks like the above to keep the secret out of your bash history.
UPDATE: This is for Swarm/remote Docker systems, not local build-time secrets. (I realised you were asking primarily about those and only mentioned Swarm in the second part of the question. I believe it still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets that you create with the docker secret create command, and they must already exist on the cluster when you deploy the stack. We were in the same situation before. We solved the "issue" using Docker contexts. You can create an SSH-based Docker context that points to a manager (we just use the first one). Then on your LOCAL device (we use Windows as the base platform and WSL2/a Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
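A minimal sketch of that flow, assuming SSH access to a manager node; the host name, user, secret name, and stack name are hypothetical:

# Create an SSH-based context pointing at a swarm manager (hypothetical host)
docker context create production --docker "host=ssh://deploy@manager1.example.com"

# Create the secret on the remote cluster without storing it in a local file
read -sp "Enter your secret: " REPLY; printf '%s' "$REPLY" | \
  docker --context production secret create my_secret -

# Deploy a stack that references the pre-existing secret
docker --context production stack deploy -c docker-compose.yml mystack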
How to configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems to only take effect for commands like 'docker pull'.
Secondly,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx ...' every time I start a container.
So I wonder if there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine system is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying configuration files will allow all my containers to automatically get the environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. And if it can be realised, I can make the containers use the host's proxy, since my host's proxy is working properly.
But I was wrong. I tried to run a container, then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 inside it, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can successfully execute the wget, and I can successfully ping the host from inside the container. I don't know why; it might be related to the firewall and the 8118 port.
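Two things worth checking for that 'No route to host', as a sketch only (the port is the 8118 from above, and the default docker0 bridge is assumed): whether the proxy is listening on an address reachable from the Docker bridge, and whether firewalld on CentOS 7 is blocking traffic coming from docker0:

# Is the proxy bound only to 127.0.0.1? It must listen on 0.0.0.0 or the docker0 IP
ss -tlnp | grep 8118

# Allow container traffic through firewalld (open the port, or trust the docker0 interface)
sudo firewall-cmd --permanent --add-port=8118/tcp
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload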
That is all.
OMG... I have no other ideas, can anyone help me?
==============================
PS:
You can see from the screenshots below that I actually want to install goa and goagen but get an error, maybe because of network issues, so I want to enable the proxy and try again; hence the problem above.
1. My go Docker container (screenshot: wget from the go container)
2. My host (screenshot: wget from the host)
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
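For illustration, this is roughly what the client-side configuration looks like on a current Docker version; the HostIP placeholder is carried over from the question, and the noProxy value is just an example. Docker then injects these variables into containers it creates:

# Sketch: per-user client config that makes docker run/build inject proxy variables
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF

# New containers started by this client get http_proxy/https_proxy set automatically
docker run --rm alpine env | grep -i proxy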
I want to deploy a .war file using Docker,
and I'm very new to Docker,
and there's something a little bit confusing when I try to do that.
I'm confused about the approach to take when creating the Dockerfile.
I don't know whether the product owner must install Tomcat and the Java JDK on his server manually, or whether I should handle that automatically in my Docker image.
What is common, and what is the best practice for that?
No, the product owner doesn't need to install anything; that's the beauty of the container approach. It is there to solve the "it runs on my machine but not on others" problem. Once you have built an image, all the product owner needs to do is install Docker on his machine, and that's it. The container brings along an isolated environment in which everything required to run the project is installed and taken care of. So, short answer: no, the product owner doesn't need anything except Docker itself.
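As an illustration, a minimal Dockerfile for a .war could look roughly like this; the Tomcat base image, the myapp.war file name, and the image tag are assumptions, not something from the question:

# Sketch: package the .war on top of the official Tomcat image (names are hypothetical)
cat > Dockerfile <<'EOF'
FROM tomcat:9.0
# Deploy the application as the ROOT webapp so it is served at /
COPY myapp.war /usr/local/tomcat/webapps/ROOT.war
EOF

docker build -t myorg/myapp:1.0 .
docker run -d -p 8080:8080 myorg/myapp:1.0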
Glad that you opted to use Docker for this, though there are a few things to take note of:
You will need to create a Dockerfile. Refer to https://stackoverflow.com/a/45870319/2519351
Build a Docker image using the Dockerfile: docker build -t <image_name>:<tag>
Install the Docker service on your product owner's server
Deploying the Docker image to your product owner is a bit tricky, as it requires you to transfer the image built on your machine to the product owner's server
One option is to push the Docker image to Docker Hub. Don't opt for this if you don't want to make your app public.
Another option is to set up a private registry. This would be overkill if your deployment doesn't need to scale, but it is the correct approach (a sketch follows this list).
Another, cruder option is to take remote control of the Docker daemon running on your product owner's server, so you can start a container on the remote server from your local machine. Refer to https://success.docker.com/article/how-do-i-enable-the-remote-api-for-dockerd
Finally, run the Docker container: docker -H <remote_server>:<port> run -d <image>:<tag>
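A rough sketch of the private-registry route mentioned above; the registry address, port, and image name are placeholders, and a real setup should add TLS and auth (or configure insecure-registries) rather than running the registry wide open:

# On the product owner's server: run a private registry
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# On your machine: tag and push the image to that registry
docker tag myorg/myapp:1.0 owner-server.example.com:5000/myapp:1.0
docker push owner-server.example.com:5000/myapp:1.0

# On the server: pull and run it
docker pull owner-server.example.com:5000/myapp:1.0
docker run -d -p 8080:8080 owner-server.example.com:5000/myapp:1.0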
I have a question about Docker Compose. I am new to Docker and I can't figure out the "right" flow for deployment.
Let's assume we have a Dockerfile which contains the steps to build an image from the project source files.
And we have a docker-compose.yml which actually builds this Dockerfile along with 2 more services.
It is not important here, but let's say they are nginx, webapi (the actual project) and mongodb.
So, if I run "docker compose up" on my machine, it will create 3 images (webapi, nginx, mongodb) and run them. Everything is perfect here.
The question is: what do I need to do to get this deployed to production? What I have tried:
I can check out the git repository on the production server and run "docker compose up", and it will work. But I don't think this is the way to go -- using the production server to build projects seems silly.
I can run "docker compose build" locally, get 3 images, push them to a Docker repository, go to production, download the images from the repository and start them one by one. In this case I don't see the point of "docker compose" at all; I lose the easy way to define volumes and the relations between images, which I can do with Docker Compose. It would also require a lot of manual activity, or some custom scripts to automate it.
It seems like there is a way to use "docker machine" to connect to a remote server and run "docker compose up", but I was not able to make it work. For some reason it was not possible to connect from Windows to a remote Docker on Linux.
Before going further with that option I need to understand/confirm: in the case of a remote Docker and "docker compose up", where does the build happen? And if I have a few volumes defined in "docker-compose.yml", are they going to be created on the local machine or on the remote one?
For my project I went with an option that resembles your second proposal, but a bit more automated. CI does the docker build of webapi, as this is the only part of my system that is actually built from sources. CI also does a docker push to my private repository. The next step is running docker-compose up on production. The compose file there does not build webapi; it only configures it, so rather than using a build section it uses image. Docker Compose also configures the other required services (nginx, mongo) and the networks for them to communicate. Even if you have custom image creation for other services, you do not need a full dev environment to create them. For full automation you can use docker-machine to execute it remotely. Note that Docker will not update images that are already downloaded when you run docker-compose up; you need to docker pull them first.
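A rough sketch of what that production-side compose file and the deploy step could look like; the registry address, image names, and tags are hypothetical:

# Sketch: production docker-compose.yml that only references pre-built images
cat > docker-compose.yml <<'EOF'
services:
  webapi:
    image: registry.example.com/myorg/webapi:1.0   # built and pushed by CI
    depends_on:
      - mongodb
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
  mongodb:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
EOF

# Deploy/update: pull newer images first, then (re)create the containers
docker-compose pull
docker-compose up -d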
I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started from the CLI with docker run and some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would generate docker-compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with Borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker-compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
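For example, the pieces you typically need in order to recreate a container can be pulled out with --format (the container id is the same placeholder as above):

# Environment variables, volume binds, port mappings and restart policy of a container
docker container inspect $container_id --format '{{json .Config.Env}}'
docker container inspect $container_id --format '{{json .HostConfig.Binds}}'
docker container inspect $container_id --format '{{json .HostConfig.PortBindings}}'
docker container inspect $container_id --format '{{json .HostConfig.RestartPolicy}}'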
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy, and it allows the configuration of the container to be documented as a file that is tracked in version control, rather than relying on error-prone, hand-entered settings. Running containers by hand or starting them from a GUI is useful for a quick test or for debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the route of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
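A quick sketch of that export/import round trip; the archive and image names are placeholders, and note it captures the container's filesystem only, not its run configuration:

# Flatten the container's filesystem to a tarball and recreate it as an image elsewhere
docker export $container_id | gzip > mycontainer-fs.tar.gz
gunzip -c mycontainer-fs.tar.gz | docker import - mycontainer:restored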
NOTE: docker export does not export the contents of volumes; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
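That page describes backing up a volume through a throwaway container, roughly like this; the volume name and archive name are placeholders:

# Back up a named volume to a tarball in the current directory
docker run --rm -v mydata:/volume -v "$(pwd)":/backup alpine \
  tar czf /backup/mydata.tar.gz -C /volume .

# Restore it into a (new) volume on another host
docker run --rm -v mydata:/volume -v "$(pwd)":/backup alpine \
  tar xzf /backup/mydata.tar.gz -C /volume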