Accessing the host's API from inside a container - Docker

I'm trying to make a build environment with Docker, and I want to make it automatic. I've written a custom Go binary to handle the build steps, and I've built an image that has the Go binary, Maven, and the Java 8 SDK installed.
The steps the binary performs are:
Clone a git repo
Run the build command
Extract the build artifacts to the host (not done yet).
I'm passing the repo URL as a parameter to the binary when running the container, and the build works.
But the problem is that I need those artifacts in order to run the built app.
I know I can use volumes, but I don't want to use them, because once a build is done the volumes become dangling and a cleanup job is needed to delete them.
I thought I could create an API for saving files onto the host (which means I'd have to run that API on the host machine); my custom Go binary could send files to the API, and the API would do the saving.
But when it comes to calling the host from inside a container, I've hit a problem: I'm getting a "connection refused to port xx" error.
Is there a better way to do this, or should I change my approach?

Found an answer at accessing-host-machine-as-localhost-from-a-docker-container-thats-also-inside.
Running the container with the --add-host option is the answer.
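As a minimal sketch (the image name and API port are placeholder assumptions; the host-gateway value requires Docker 20.10+, on older versions substitute the host's bridge IP, often 172.17.0.1):

docker run --add-host=host.docker.internal:host-gateway my-build-image
# inside the container, the Go binary can now reach the host API at
# http://host.docker.internal:8080/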

While you could use
docker cp CONTAINER:SRC_PATH DEST_PATH
to get the files out of your container, I still believe using a volume is the better idea. Instead of using an anonymous volume, bind-mount a host directory:
docker run -v /local/host/dir:/build/output YOURIMAGE
This allows you to pick up the artefacts on your host from /local/host/dir.
https://docs.docker.com/engine/tutorials/dockervolumes/#locate-a-volume
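If you do go the docker cp route instead, the flow might look like this sketch (the image name, container name, and output path are placeholder assumptions):

docker run --name build-1 my-build-image https://example.com/repo.git
docker cp build-1:/build/output ./artifacts
docker rm build-1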

Related

How to extract a docker-compose file from a running Docker stack

I have a Docker stack started with docker stack deploy --compose-file ...
and later manually edited via the Portainer UI.
I'd like to write a script that updates the Docker image tag of one of the services.
To do that I need to "download" the latest "docker-compose" stack definition; however, I cannot find the appropriate docker command.
I do know that the best option would be to stop changing the stack manually and rely on its definition stored in git, but unfortunately that is not up to me.
Please point me to the appropriate docker command or confirm that it is not available.
As far as I know, there is no command that gets the compose file back from a running stack directly; at least it's not implemented out of the box in Docker. You could try to parse all the relevant information from docker inspect and a few other commands that list/inspect the relevant objects.
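For example, a sketch of pulling the current image of one service out of the stack (the service name is a placeholder):

docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' mystack_myservice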
I once came across a similar situation, where we had a running container but no run/compose command, and we needed to update it. At the time (roughly a year ago) I found and used docker-autocompose, which did a very good job. We only had to manually verify and adjust a few things, but it got all the difficult parts with the run parameters done for us.
It could help you automate this in your case, if your compose configs are simple enough.
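A sketch of how it is typically run, following the project's README (verify the image name before relying on it):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose my_container > docker-compose.yml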
But if you want to fully automate it to mimic CD, then I would not recommend the approach above. In that case I would check whether you can use the Portainer API, as @LinFelix recommended. Or store compose files somewhere, prepared with parameters (e.g. $IMAGE_TAG) (in git / on a server), so you can generate a temporary compose file with the full configuration and then replace the current one.
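For the narrower goal of bumping a single service's image tag in a running stack, one hedged alternative is docker service update (the image and service names are placeholders):

docker service update --image myrepo/myapp:1.2.3 mystack_myapp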

How can I save changes made in containers?

If I have an Ubuntu container, I ssh into it, and I create a file, then after the container is destroyed or rebooted the new file is gone, because Kubernetes loads the Ubuntu image, which does not contain my changes.
My question is: what should I do to save my changes?
I know it can be done, because some cloud providers do it.
For example:
ssh ubuntu@POD_IP
mkdir new_file
ls
new_file
reboot
After the reboot I have:
ssh ubuntu@POD_IP
ls
(ls shows nothing)
But I want it to save my current state, and I want that to happen automatically.
If I use docker commit I cannot keep control of my images, because it produces hundreds of them; I would have to make a new image for every change.
If I want to use storage, I would have to mount /, but Kubernetes does not allow me to mount /, and it gives me this error:
Error: Error response from daemon: invalid volume specification: '/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/': invalid mount config for type "bind": invalid specification: destination can't be '/'
You can try to use docker commit, but you will need to ensure that your Kubernetes cluster picks up the latest image that you committed:
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
This is going to create a new image out of your container which you can feed to Kubernetes.
Ref - https://docs.docker.com/engine/reference/commandline/commit/
Update 1:
In case you want to do it automatically, you might need to store the changed state or files on a centralized file system like NFS, and then mount it into all running containers, whenever required, with the relevant permissions.
K8s ref - https://kubernetes.io/docs/concepts/storage/persistent-volumes/
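A minimal sketch of that persistent-volume approach (all names and sizes are placeholder assumptions; note that the claim is mounted at a data directory, not at /):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-with-data
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data    # files written here survive pod restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-data
EOF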
Docker and Kubernetes don't work this way. Never run docker commit. You usually have very little need for an ssh daemon in a container/pod, you need to do special work to make both sshd and the main process run (and extra work to make sshd actually secure), and your containers will be simpler and safer if you just remove it.
The usual process involves a technique known as immutable infrastructure. You never change code in an existing container; instead, you change the recipe that builds the container and tell the cluster manager you want an update, and it tears down and rebuilds everything from scratch. To make a change to an application running in a Kubernetes pod, you typically do the following (a command-level sketch follows the list):
Make and test your code change, locally, with no Docker or Kubernetes involved at all.
docker build a new image incorporating your code change. It should have a unique tag, often a date stamp or a source control commit ID.
(optional but recommended) docker run that image locally and run integration tests.
docker push the image to a registry.
Change the image tag in your Kubernetes deployment spec and kubectl apply (or helm upgrade) it.
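A hedged sketch of steps 2-5 (the image name, registry, and deployment/container names are all placeholder assumptions):

TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$TAG .
docker run --rm registry.example.com/myapp:$TAG ./run-tests   # step 3, optional
docker push registry.example.com/myapp:$TAG
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$TAG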
Often you'll have an automated continuous integration system do steps 2-4, and a continuous deployment system do the last step; you just need to commit and push your tested change.
Note that when you docker run the image locally in step 3, you are running the exact same image your production Kubernetes system will run. Resist the temptation to mount your local source tree into it and try to do development there! If a test fails at this point, reduce it to the simplest failing case, write a unit test for it, and fix it in your local tree. Rebuilding an image shouldn't be especially expensive.
Your question hints at using the unmodified ubuntu image. Beyond some very early "hello world" type experimentation, there's pretty much no reason to use it anywhere other than in the FROM line of a Dockerfile. If you haven't yet, you should work through the official Docker tutorial on building and running custom images, which is applicable to any clustering system. (Skip the later tutorials that cover Docker Swarm if you've already settled on Kubernetes as your orchestrator.)
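That FROM-line usage is just a custom image built on top of ubuntu, along these lines (the application file is a placeholder):

FROM ubuntu:22.04
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]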

Making a new container with the same configuration as the old one

Let's say I make a container with some flags. For instance,
docker run -v my_volume:/data my_cool_image
Now, let's say my_cool_image is updated to a new version. Is there a nice way to make a new container with the same -v flag as the old one? The container has been properly configured so that the data does not get stored in the container, so deleting the old container is not a concern.
The best solution I can find is to use docker-compose, but that seems a bit silly for single-container systems.
I'd use a shell script or a Docker Compose YAML file. (Compose isn't really overkill; if you add some error handling and write out one option per line for readability, the shell script and the YAML file wind up being about the same length.)
There's nothing built in to Docker that can extract the docker run options from an existing container.
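For illustration, a sketch of both forms, reusing the names from the question (everything else is an assumption):

#!/bin/sh
# rerun.sh: recreate the container with the same flags after an image update
set -e
docker pull my_cool_image
docker rm -f my_cool_container 2>/dev/null || true
docker run -d --name my_cool_container -v my_volume:/data my_cool_image

Or the equivalent docker-compose.yml, where docker compose pull && docker compose up -d recreates the container:

services:
  app:
    image: my_cool_image
    volumes:
      - my_volume:/data
volumes:
  my_volume:
    external: true    # reuse the existing named volume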

Override a volume when building a Docker image from another Docker image

Sorry if the question is basic, but would it be possible to build a Docker image from another one, with a different volume in the new image? My use case is the following:
Start from the image library/odoo (cf. https://hub.docker.com/_/odoo/)
Upload folders into the volume /mnt/extra-addons
Build a new image, tag it, then put it in our internal image repo
How can we achieve that? I would like to avoid putting extra folders into the host filesystem.
Thanks a lot.
This approach seems to work best until the Docker development team adds the capability you are looking for.
Dockerfile
# First stage: pull in the image whose VOLUME contents we want baked in
FROM percona:5.7.24 as dbdata
MAINTAINER monkey@blackmirror.org
# Second stage: the real base image
FROM centos:7
USER root
# Copy the first stage's entire filesystem, including paths it declares as VOLUME
COPY --from=dbdata / /
Do whatever you want. This eliminates the VOLUME issue. Heck, maybe I'll write a tool to do this automatically :)
You have a few options, without involving the host OS that runs the container.
1. Make your own Dockerfile, inherit from the library/odoo image using a FROM instruction, and COPY files into the /mnt/extra-addons directory. This still involves your host OS somewhat, but may be acceptable, since you wouldn't necessarily be building the Docker image on the same host where you run it.
2. Make your own Dockerfile, as in (1), but use an entrypoint script that downloads the contents of /mnt/extra-addons at runtime. This would increase your container's startup time, since the download would have to finish before your service runs, but no host directories would need to be involved.
Personally, I would opt for (1) if your build pipeline supports it. That bakes the addons right into the image, so the image itself is a complete, ready-to-go build artifact.
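A minimal sketch of option (1), assuming your builder preserves files copied into a path the base image declares as a VOLUME (classic builds discarded such changes, which is exactly what the multi-stage COPY --from trick above works around; the local folder name is a placeholder):

FROM odoo
COPY ./extra-addons /mnt/extra-addons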

Dynamically get docker version during image build

I'm working on a project that requires me to run Docker within Docker. Currently I am just relying on the Docker client running within Docker, and I pass in an environment variable with the TCP address of the Docker daemon I want to communicate with.
The line in the Dockerfile that I use to install the client looks like this:
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
However, the problem is that this always downloads the latest Docker version. Ideally, the Docker instance running this container will always be on the latest version, but occasionally it may be a version behind (for example, I haven't yet upgraded from 1.2 to 1.3). What I really want is a way to dynamically get the version of the Docker instance that is building this Dockerfile, and then pass that into the URL to download the matching client version. Is this at all possible? The only thing I can think of is an ENV instruction at the top of the Dockerfile that I set manually, but ideally it would be set dynamically based on the actual version of the Docker instance.
While your question makes sense from an engineering point of view, it is at odds with the intention of the Dockerfile. If the build process depended on the environment, it would not be reproducible elsewhere. There is not a convenient way to achieve what you ask.
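On Docker versions newer than the question targets, a build argument can approximate this, at the cost of exactly the environment-dependence described above. A hedged sketch (the static-binary URL pattern postdates the get.docker.io URL in the question; verify it for your platform):

ARG DOCKER_VERSION
RUN curl -fsSL "https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz" \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker

built with the host daemon's version passed in:

docker build --build-arg DOCKER_VERSION=$(docker version --format '{{.Server.Version}}') .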
