How to expose host machine aliases to Docker container?

Docker has great documentation on linking containers, allowing one container to make use of another container's environment variables.
However, how would one go about exposing command line aliases (and their respective programs) of the host machine to the Docker container?
Or, perhaps the better way to go about this is to simply configure the Docker container to build from an image that has these aliases / "dotfiles" built in?

I don't think you're approaching Docker the way you should. A Docker container's purpose is to run a network application and expose it to the outside world.
If you need aliases for your application running inside a container, you have to build an image first that contains the whole environment your app needs, or specify them in the Dockerfile while building your image.
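For example, a minimal Dockerfile sketch along those lines (the base image, dotfile, and alias are illustrative assumptions):

    # Dockerfile: bake shell aliases / dotfiles into the image
    FROM ubuntu:22.04

    # copy your own dotfiles into the image...
    COPY .bashrc /root/.bashrc

    # ...or append aliases directly to a profile file
    RUN echo "alias ll='ls -alF'" >> /root/.bashrc

Note that aliases defined this way only take effect in interactive shells (e.g. docker exec -it <container> bash); they are not visible to RUN or CMD instructions, which use a non-interactive shell.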

Related

Defining Docker runtime flags from within Dockerfile

We're running Docker on Digital Ocean App Engine, and can't pass flags (e.g. --cpus) to docker run.
We can't use docker-compose either.
Is it possible to set an environment variable (ARG? ENV?, e.g. $CPUS=blah) in a way that can be picked up by the Docker instance?
Stated differently, are there internal environment variables that correspond to specific flags, that can be set from within the Dockerfile / environment itself?
As we can see in the very first steps of the official Docker documentation:
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
The main concept of Docker is to isolate each container from the others: its environment variables and anything else that relates to it. So the only things we can access and modify are those that reside outside of the container (see the sketch after this list), like:
exposing ports
exposing volumes
mapping container ports to host ports
mapping container volumes to host volumes and vice versa
...
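A short sketch of those host-side knobs, including the --cpus flag from the question (the image name and values are made up):

    # these live on the docker run command line, outside the image
    docker run --cpus=2 myimage               # limit CPU at run time
    docker run -p 8080:80 myimage             # map container port 80 to host port 8080
    docker run -v /host/data:/data myimage    # mount a host directory into the container

None of these can be set from inside the Dockerfile: ENV and ARG only affect the environment inside the image, not the flags the daemon applies when it runs the container.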

Access docker within Dockerfile?

I would like to run integration tests while I'm building a docker image. Those tests need to instantiate docker containers.
Is there a possibility to access docker inside such a multi-stage docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and only run docker build once they pass. For integration-type tests (that do) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker; a sketch follows below. This also avoids needing to build your test code and its additional dependencies into your image.
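A sketch of that workflow (the compose file, test script, and image name are illustrative assumptions):

    # start the test dependencies described in a docker-compose.yml (e.g. a database)
    docker-compose up -d

    # run the integration tests from the host / CI machine against those services
    ./run-integration-tests.sh

    # tear down the dependencies, and only build the image once the tests pass
    docker-compose down
    docker build -t myapp .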
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc. Actually, it allows any process on the host to do any of these things. The much harder option, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
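For comparison, the DinD route only works at run time, as this sketch shows:

    # the official docker:dind image needs --privileged, which docker build cannot grant
    docker run --privileged -d --name dind docker:dind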

Access to docker shell inside one of the containers

I'm running a project in a container on my machine. This project needs to list the other containers on the machine. Previously this project ran directly on the machine (not in a container) and that was possible, but now it's inside one of those containers.
Is it possible to grant access for this type of job (listing containers, stopping/starting them, or any other work on the other containers or the host machine)? If so, how?
You can use the so-called docker-in-docker technique, but before starting with it you should read this post:
http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
which is the best explanation of the pros and cons.
All you have to do is mount /var/run/docker.sock into your container and set up the docker CLI inside it. This gives you docker access inside the container while, at the same time, you are addressing your host's Docker engine.
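A minimal sketch of that setup (the image and container names are illustrative assumptions):

    # mount the host's Docker socket into your container
    docker run -d --name manager \
        -v /var/run/docker.sock:/var/run/docker.sock \
        my-project-image

    # inside the container (with the docker CLI installed), this lists the
    # *host's* containers, because the CLI is talking to the host's daemon
    docker ps

Bear in mind that anything with access to that socket effectively has root on the host, which is exactly the trade-off the post above discusses.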

Can I bind a port while building a docker image?

I want to run the certbot-auto client, while building a docker container, from within this container, and therefore I need port 443 to be accessible to the outside world.
Is there any way I can bind ports to the host while building a docker image?
Short answer, no.
The option isn't there as part of docker build, and the build shouldn't hang waiting for externalities to connect in. Builds should also be designed to run on any developer workstation, externally hosted build server, and everything in between.
Longer answer, I think you're going down the wrong path. Injecting unique, container-specific data into the image goes against the typical pattern of docker images. Instead of trying to inject a certificate into your image, have the container do this as part of its entrypoint, and if you need persistence, store the result in a volume so you can skip that step on the next startup.
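A sketch of such an entrypoint (the domain, email, and paths are illustrative assumptions, and it uses plain certbot rather than certbot-auto):

    #!/bin/sh
    # entrypoint.sh: obtain the certificate at container start, not at build time
    if [ ! -f /etc/letsencrypt/live/example.com/fullchain.pem ]; then
        certbot certonly --standalone -d example.com \
            --non-interactive --agree-tos -m admin@example.com
    fi
    exec "$@"   # hand off to the real main process

Run it with the ports published and the result persisted, e.g. docker run -p 80:80 -p 443:443 -v letsencrypt:/etc/letsencrypt myimage, so the certificate survives restarts and the request is skipped next time.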

Using environment variables post docker container linking in java

I am trying to simulate docker container linking using a simple use case, which is as follows:
1) A docker container with a simple pub-sub Java application; the publisher and subscriber are both in the same container. I used Dockerfiles for building this.
2) A docker container running rabbitmq, pulled from Docker Hub.
Now I link both containers, and I am able to see the rabbitmq environment variables in my container #1.
My question is: what is the best way to utilize these variables in my pub-sub container #1? I can always call Java's System.getenv and hard-code an environment variable name. Are there any better ways of doing it?
Hard-coding an environment variable seems OK here. The environment variables follow a standard format, like RABBITMQ_PORT_5672_TCP_ADDR and RABBITMQ_PORT_5672_TCP_PORT. The only bit of those names which would change is the label RABBITMQ, which is set based on the options to docker run. Whoever runs your container controls that bit, either with --link rabbitmq or --link someothercontainer:rabbitmq to set an alias. This just forms part of your container's "contract" with the outside world: the container must be run in a way that adds variables with the right alias.
Incidentally, this doesn't force you to use Docker links if you don't want to: you can always just pass in the environment variables yourself if RabbitMQ is on a different machine (e.g. --env RABBITMQ_PORT_5672_TCP_ADDR=1.2.3.4).
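Both run modes, as a sketch (the container and image names are illustrative assumptions):

    # linked: the alias "rabbitmq" determines the variable prefix
    docker run -d --name someothercontainer rabbitmq
    docker run --link someothercontainer:rabbitmq my-pubsub-image

    # unlinked: pass the same variables by hand, e.g. for a remote broker
    docker run --env RABBITMQ_PORT_5672_TCP_ADDR=1.2.3.4 \
               --env RABBITMQ_PORT_5672_TCP_PORT=5672 \
               my-pubsub-image

Either way, the Java code reads them identically, e.g. System.getenv("RABBITMQ_PORT_5672_TCP_ADDR").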
