Show configuration of Docker container - docker

So, I ran a docker image with certain settings a while ago. In the meantime I updated my container settings via "docker update".
Now I want to see what options/configurations (e.g. cpuset, stack, swap) are currently configured for my container.
Is there a docker command to check this?
If not, (why the hell isn't there one, and) where exactly can I find this information?
I am running Docker 18.03.1-ce on Debian 9.4.
Greetings,
Johannes

I found it out by myself.
To get detailed information about a container's settings one can use:
docker inspect [OPTIONS] <container-id>
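For example (a minimal sketch; "mycontainer" is a placeholder name), the resource settings live under HostConfig in the inspect output, so they can be queried directly with a Go template:
docker inspect --format 'cpuset={{.HostConfig.CpusetCpus}} memory={{.HostConfig.Memory}} swap={{.HostConfig.MemorySwap}}' mycontainer
Alternatively, dump the full JSON with docker inspect mycontainer and search it for the fields you care about.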

Related

How to extract a docker-compose file from a running docker stack

I have docker stack started with docker stack deploy --compose-file ...
and later manually edited via Docker Portainer UI.
I'd like to write a script that updates the docker image tag of one of the services.
To do that I need to "download" the latest "docker-compose" stack definition; however, I cannot find the appropriate docker command.
I do know that the best approach would be to stop changing the stack manually and rely on its definition stored in git, but unfortunately that is not up to me.
Please point me to the appropriate docker command or confirm that it is not available.
As far as I know, there is no command to get the compose file from a running container directly; at least it is not implemented out of the box in docker. You could try to parse all the relevant information from docker inspect and a few other commands that list/inspect all the relevant objects.
I once came across a similar situation where we had a running container but no run/compose command, which we needed to update. At the time (roughly a year ago) I found and used docker-autocompose, which did a very good job. We only had to manually verify and adjust a few things, but it got all the difficult parts with run parameters done for us.
It could help you automate things if your compose configs are simple enough; a usage sketch follows.
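The red5d/docker-autocompose image name and the container name below are assumptions (check the project's README for the current image). The tool reads a container's settings through the Docker socket and prints a compose file on stdout:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose mycontainer > docker-compose.yml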
But if you want to fully automate it to mimic CD, then I would not recommend the approach above. In that case I would check whether you could use the Portainer API, as #LinFelix recommended. Or store compose files somewhere, prepared with parameters ($IMAGE_TAG) (git/on server), so you can generate temporary compose files with the full configuration and then replace the current one; a sketch of that follows.
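A minimal sketch of that parameterized approach, assuming envsubst is available; the file, service, and registry names are placeholders:
# docker-compose.tmpl.yml contains e.g.:
#   services:
#     app:
#       image: my-registry.example.com/myapp:${IMAGE_TAG}
export IMAGE_TAG=1.4.2
envsubst < docker-compose.tmpl.yml > docker-compose.yml   # render the template
docker stack deploy --compose-file docker-compose.yml mystack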

Does docker have a config to replace an image's repository? [duplicate]

By default, if I issue command:
sudo docker pull ruby:2.2.1
it will pull from the official docker.io site by default.
Pulling repository docker.io/library/ruby
How do I change it to my private registry? That means if I issue
sudo docker pull ruby:2.2.1
it will pull from my own private registry, the output is something like:
Pulling repository my_private.registry:port/library/ruby
UPDATE: Following your comment, it is not currently possible to change the default registry, see this issue for more info.
You should be able to do this, substituting the host and port to your own:
docker pull localhost:5000/registry-demo
If the server is remote/has auth you may need to log into the server with:
docker login https://<YOUR-DOMAIN>:8080
Then running:
docker pull <YOUR-DOMAIN>:8080/test-image
There is the use case of a mirror of Docker Hub (such as Artifactory or a custom one), which I haven't seen mentioned here. This is one of the most valid cases where changing the default registry is needed.
Luckily, Docker (at least version 19.03.3) allows you to set a mirror (tested in Docker CE). I don't know if this will work with additional images pushed to that mirror that aren't on Docker Hub, but I do know it will use the mirror instead. Docker documentation: https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon.
Essentially, you need to add "registry-mirrors": [] to the /etc/docker/daemon.json configuration file. So if you have a mirror hosted at https://my-docker-repo-mirror.my.company.com, your /etc/docker/daemon.json should contain:
{
"registry-mirrors": ["https://my-docker-repo-mirror.my.company.com"]
}
Afterwards, restart the Docker daemon. Now if you do a docker pull postgres:12, Docker should fetch the image from the mirror instead of directly from Docker Hub. This is much better than prepending all image names with my-docker-repo-mirror.my.company.com.
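To apply and verify the change (a sketch for systemd-based hosts):
sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'   # the configured mirror should be listed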
It turns out this is actually possible, but not using the genuine Docker CE or EE version.
You can either use Red Hat's fork of docker with the '--add-registry' flag or you can build docker from source yourself with registry/config.go modified to use your own hard-coded default registry namespace/index.
The short answer to this is you don't, or at least you really shouldn't.
Yes, there are some container runtimes that allow you to change the default namespace, specifically those from RedHat. However, RedHat now regrets this functionality and discourages customers from using it. Docker has also refused to support this.
The reason this is so problematic is that it results in an ambiguous namespace of images. The same command run on two different machines could pull different images depending on which registry they are configured to use. Since compose files, helm templates, and other ways of running containers are shared between machines, this actually introduces a security vulnerability.
An attacker could squat on well known image names in registries other than Docker Hub with the hopes that a user may change their default configuration and accidentally run their image instead of the one from Hub. It would be trivial to create a fork of a tool like Jenkins, push the image to other registries, but with some code that sends all the credentials loaded into Jenkins out to an attacker server. We've even seen this causing security vulnerability reports this year for other package managers like PyPI, NPM, and RubyGems.
Instead, the direction of container runtimes like containerd is to make all image names fully qualified, removing the Docker Hub automatic expansion (tooling on top of containerd like Docker still apply the default expansion, so I doubt this is going away any time soon, if ever).
Docker does allow you to define registry mirrors for Docker Hub that it will query before querying Hub; however, this assumes everything is still within the same namespace and the mirror is just a copy of upstream images, not a different namespace of images. The TL;DR on how to set that up is the following in /etc/docker/daemon.json, followed by systemctl reload docker:
{
"registry-mirrors": ["https://<my-docker-mirror-host>"]
}
For most, this is a non-issue (the issue, to me, is that the docker engine doesn't have an option to mirror non-Hub registries). The image name is defined in a configuration file or a script, so typing it once in that file is easy enough. And with tooling like compose files and Helm templates, the registry can be turned into a variable, allowing organizations to explicitly pull images for their deploy from a configurable registry name, as sketched below.
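A minimal sketch of that variable approach in a compose file (the names are placeholders); compose's ${VAR:-default} interpolation lets the registry default to Docker Hub while remaining overridable:
# docker-compose.yml contains e.g.:
#   services:
#     app:
#       image: ${REGISTRY:-docker.io}/mycompany/myapp:1.0
REGISTRY=my-registry.example.com:5000 docker compose up -d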
If you are using the Fedora distro, you can change the file
/etc/containers/registries.conf
adding your registry to the search list alongside (or instead of) the docker.io domain.
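A sketch of what that can look like (this uses the older v1 TOML format; newer releases use unqualified-search-registries instead, and the registry host here is a placeholder):
[registries.search]
registries = ['my-registry.example.com:5000', 'docker.io']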
Docker's official position is explained in issue #11815:
Issue 11815: Allow to specify default registries used in pull command
Resolution:
Like pointed out earlier (#11815), this would fragment the namespace, and hurt the community pretty badly, making dockerfiles no longer portable.
[the Maintainer] will close this for this reason.
Red Hat had a specific implementation that allowed it (see the answer above), but it was refused by the Docker upstream project. It relied on the --add-registry argument, which was set in /etc/containers/registries.conf on RHEL/CentOS 7.
EDIT:
Actually, Docker supports registry mirrors (also known as "Run a Registry as a pull-through cache").
https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon
It seems it won't be supported due to the fragmentation it would create within the community (i.e. two users would get different images when pulling ubuntu:latest). You simply have to add the host in front of the image name. See this GitHub issue to join the discussion.
(Note, this is not intended as an opinionated comment, just a very short summary of the discussion that can be followed in the mentioned github issue.)
I tried adding the following options to /etc/docker/daemon.json
(I used CentOS 7):
"add-registry": ["192.168.100.100:5001"],
"block-registry": ["docker.io"],
After that, I restarted the docker daemon.
And it's working without docker.io.
I hope this will be helpful to someone.
Earlier this could be achieved using DOCKER_OPTS in the /etc/default/docker config file, which worked on Ubuntu 14.04 and had some issues on Ubuntu 15.04. Not sure if this has been fixed.
The line below needs to go into the file /etc/default/docker on the host which runs the docker daemon. The change points to the private registry installed in your local network. Note: you will need to restart the docker service after this change.
DOCKER_OPTS="--insecure-registry <priv registry hostname/ip>:<port>"
I'm adding to the original answer given by Guy, which is still valid today (soon 2020).
Overriding the default docker registry, like you would do with maven, is actually not a good practice.
When using maven, you pull artifacts from the Maven Central Repository through your local repository management system, which acts as a proxy. These artifacts are plain, raw libs (jars), and it is quite unlikely that you will push jars with the same name.
On the other hand, docker images are fully operational, runnable environments, and it makes total sense to pull an image from Docker Hub, modify it, and push it to your local registry management system under the same name, because it is exactly what its name says it is, just in your enterprise context. In this case, the only distinction between the two images would be precisely their path!
Hence the rule: the prefix of an image indicates its origin; by default, if an image has no prefix, it is pulled from Docker Hub.
I didn't see an answer for macOS, so I want to add it here.
There are two methods:
Option 1 (through the Docker Desktop GUI):
Preferences -> Docker Engine -> Edit file -> Apply and Restart
Option 2:
Directly edit the file ~/.docker/daemon.json
I haven't tried it, but maybe hijacking the DNS resolution process by adding a line in /etc/hosts for hub.docker.com or something similar (docker.io?) could work?

JVM list empty after selecting remote docker container (no kubernetes)

I have the same problem as described in "JProfiler remote process list empty after selecting container", but because I use plain docker rather than kubernetes, I posted a new question.
JProfiler 12 does not list the available JVMs in my docker container. There are no error messages at all; the list is simply empty.
I have multiple docker containers hosting a java process, and interestingly the ones that were built with the gradle jib plugin are not shown in the list, while the ones built differently are shown. Is this just a coincidence?
[UPDATE 27. Oct. 21]
No, it does not relate to jib. I built the same Spring Boot application with a good old Dockerfile and docker build, but JProfiler is still not able to find the JVM inside the docker container.
OK, what worked for me was to use a different JDK inside the container. The culprit seems to be the distroless JDK that I used (although another one I tried does not work either). Using eclipse-temurin:11 as the base image of the docker container fixes the problem for me.
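For illustration, a minimal Dockerfile sketch of that fix, assuming a Spring Boot fat jar at build/libs/app.jar (the path is a placeholder):
# use a full JDK base image instead of a distroless one
FROM eclipse-temurin:11
COPY build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
Only the base image matters here; the distroless images appear to lack pieces that JProfiler's attach mechanism needs.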

Using other people's docker containers

I am new to docker and while I was searching for something related to my project, I found a popular container on dockerhub -> https://hub.docker.com/r/augury/haproxy-consul/dockerfile.
This may solve a problem that I was facing before. My question is: how do I use it? Do I simply run this container and register my applications on consul, and it will handle the rest, or is it something else?
Is it like npmjs.org, where we simply import libraries and use them?
My idea of docker is that it's a replication of images in which you can make modifications, so go ahead and build a container from the said project. Changes or any form of modifications will remain yours (your container) until you push them to a repo (upstream). For how to use it, just go to the docker docs for more info. Hope this helps.
You can simply pull the image with docker pull augury/haproxy-consul and run it using docker run -p 80:80 augury/haproxy-consul. The container will then be running and accessible on port 80 (host port:container port).
Also, you can use the image as a base image in your Dockerfile if you want to add something on top of it, as sketched below.
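A minimal sketch of that, assuming you want to ship your own HAProxy config (the config path inside the image is a guess; check the image's Dockerfile to confirm it):
# extend the published image with a custom config
FROM augury/haproxy-consul
COPY my-haproxy.cfg /etc/haproxy/haproxy.cfg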
You already have a good idea of how docker runs. Use the published port to make all your modifications, and yes, all the changes stay in your local repo.

How to set the shared drives in Docker for Windows?

How do I set the shared drives in Docker for Windows? I am using the latest version 18, both Stable and Edge. My settings screen is shown below. It is missing some options like Shared Drives, Advanced and Network, which are shown in the second image. Why am I missing these options?
My settings:
Screen from a website:
It seems you are running Docker for Windows using "Windows containers". If you switch to "Linux containers" you'll see the "Shared Drives" option. Take a look at this video.
According to the Docker documentation, shared drives are not implemented for Windows containers:
Volume mounting requires shared drives for Linux containers (not for Windows containers).
Update:
Since 2018, Docker Desktop has been using a new UI. I recorded a new video showing how to solve this problem.
Update:
If you are using WSL 2 you will experience the same problem. Take a look at this video.
In the new UI these options are placed under Resources.
I ended up here because "Shared drives" was missing from my docker settings. If you are missing it too, but docker is set for Linux containers, then it is because of WSL 2.
If you are using Docker on WSL 2, there is no such option; you can instead attach volumes directly from the filesystem with docker run -v c:\...\your-folder:/mount ... without specifying them in the docker settings, as in the sketch below.
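For example, a quick sketch with placeholder paths:
docker run --rm -v C:\Users\me\data:/mount alpine ls /mount   # Windows-style path from PowerShell
From inside a WSL 2 shell, the same folder would be referenced as /mnt/c/Users/me/data instead.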
