We're running Docker on Digital Ocean App Engine, and can't pass flags (e.g. --cpus) to docker run.
We can't use docker-compose either.
Is it possible to set an environment variable (ARG? ENV?), e.g. $CPUS=blah, in a way that can be picked up by the Docker instance?
Stated differently, are there internal environment variables that correspond to specific flags, and that can be set from within the Dockerfile / environment itself?
As we can see in the very first steps of the official Docker documentation (this link):
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
The main concept of Docker is to isolate each container from the other containers, their environment variables, and anything else related to them. So the only things we can access and modify are the ones that reside outside of the container, such as:
exposing ports
exposing volumes
mapping a container port to a host port
mapping a container volume to a host volume, and vice versa
...
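For example, both kinds of mapping are done from the host side at run time (the image name and paths below are just placeholders):

docker run -p 8080:80 -v /host/data:/app/data my-image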
Related
I have a script on my host machine that needs to run every time an action occurs within a Docker container (running a Django REST API). This script depends on many local files and environment variables, so it is not possible for me to just map everything (volumes and env vars) from the host to the container in order for it to be called inside it. It must be called from the host machine. After it is executed, it will generate some output files that will be read and used from the container (through a mounted volume).
Is there any way I can achieve this? I've seen lots of comments about using the Docker socket and mapping volumes, but none of them ever seem to suit this case.
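Conceptually, the flow I'm after looks something like this on the host side (the script name and paths are made up):

# watch a directory that is bind-mounted into the container (needs inotify-tools)
while inotifywait -e create /srv/shared/triggers; do
    ./local_script.sh   # depends on host-only files and env vars
    # local_script.sh writes its output files into /srv/shared/output,
    # which the container reads through the mounted volume
done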
Thanks!
I'm running a project in a container on my machine. This project needs to list the other containers on the machine. Previously this project ran directly on the machine (not in a container) and it was possible to do that, but now it's in one of those containers. I want to know: is it possible to grant access for this kind of job (listing containers, stopping/starting them, or any other operations on other containers or the host machine)?
If so, how can it be done?
You can use the so-called docker-in-docker technique, but before starting with it you should read this post:
http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
which is the best explanation of pros and cons.
All you have to do is mount /var/run/docker.sock into your container and set up the docker CLI inside the container. This gives you docker access inside the container, while at the same time you will be addressing your host's Docker engine.
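A minimal sketch (the image tag is just an example):

# mount the host's Docker socket into a container that ships the docker CLI
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
# 'docker ps' here lists the HOST's containers, because the CLI talks to the host's socket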
Docker has great documentation on linking containers - allowing one container to make use of the other container's environment variables.
However, how would one go about exposing command line aliases (and their respective programs) of the host machine to the Docker container?
Or, perhaps the better way to go about this is to simply configure the Docker container to build from an image that has these aliases / "dotfiles" built in?
I don't think you're approaching Docker the way you should. A Docker container's purpose is to run a network application and expose it to the outside world.
If you need aliases for your application running inside a container, then you first have to build an image that contains the whole environment your app needs...
Or specify them in the Dockerfile while building your image.
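A minimal sketch of that, assuming a hypothetical .bash_aliases file you want baked into the image:

FROM ubuntu:22.04
# bake the dotfiles into the image so every interactive shell in the container sees them
COPY .bash_aliases /root/.bash_aliases
RUN echo 'source ~/.bash_aliases' >> /root/.bashrc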
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch. I am confused as to how I can create a container that can be used in production and on my local machine (mac). How are people providing configuration like this these days?
So far I have come up with having my process take environment variables as arguments, which I can pass to the container with docker run -e. It seems unlikely that I would be doing this type of thing in production.
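For example (the variable and image names are placeholders):

docker run -e ELASTICSEARCH_HOST=10.0.0.5 -e ELASTICSEARCH_PORT=9200 my-service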
I have a container that runs a simple service that requires a connection to elasticsearch. For this I need to provide my service with the address of elasticsearch
If elasticsearch is running in its own container on the same host (managed by the same docker daemon), then you can link it to your own container (at the docker run stage) with the --link option (which sets environment variables)
docker run --link elasticsearch:elasticsearch --name <yourContainer> <yourImage>
See "Linking containers together"
In that case, your container config can be static and known/written in advance, as it will refer to the search machine as 'elasticsearch'.
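Concretely, with Elasticsearch on its default port 9200, the link injects variables along these lines into your container (the address is illustrative):

ELASTICSEARCH_PORT_9200_TCP_ADDR=172.17.0.2
ELASTICSEARCH_PORT_9200_TCP_PORT=9200
ELASTICSEARCH_PORT_9200_TCP=tcp://172.17.0.2:9200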
How about writing it into the configuration file of your application and mounting the configuration directory into your container with -v?
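For example (the paths and image name are hypothetical):

docker run -v /etc/myservice:/app/config:ro my-service-image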
To make it more organized, I use Ansible for orchestration. This way you could have a template of the configuration file for your application while the actual parameters are in the variables file of the corresponding Ansible playbook at a centralized location. Ansible will be in charge of copying the template over to the desired location and doing the variable substitution for you. It also recently enhanced its Docker support.
Environment variables are absolutely fine (we use them all the time for this sort of thing) as long as you're using service names, not IP addresses. Even with IP addresses you'd have no problem as long as you only have one ES and you're willing to restart your service every time the ES IP address changes.
You should really ask someone who knows for sure how you resolve these things in your production environments, because you're unlikely to be the only person in your org who has had this problem -- connecting to a database poses the same problem.
If you have no constraints at all then you should check out something like Consul from HashiCorp. It'll help you a lot with this problem, if you are allowed to use it.
I am trying to simulate Docker container linking using a simple use case, which is as follows:
1) A Docker container with a simple pub-sub Java application; there is a publisher and a subscriber, both within the same container. I used a Dockerfile to build this.
2) A Docker container running RabbitMQ, pulled from Docker Hub.
Now I link both containers, and I am able to see the RabbitMQ environment variables in container #1.
My question is: what is the best way to utilize these environment variables in my pub-sub container #1? I can always call Java's System.getenv and hard-code an environment variable name. Are there any better ways of doing it?
Hard-coding an environment variable seems OK here. The environment variables follow a standard format, like RABBITMQ_PORT_5672_TCP_ADDR and RABBITMQ_PORT_5672_TCP_PORT. The only bit of those names which would change is the label RABBITMQ, which is set based on the options to docker run. Whoever runs your container controls that bit, either with --link rabbitmq or --link someothercontainer:rabbitmq to set an alias. This just forms part of your container's "contract" with the outside world: the container must be run in a way that adds variables with the right alias.
Incidentally, this doesn't force you to use Docker links if you don't want to, as you can always just pass in the environment variables yourself if RabbitMQ were on a different machine (e.g. --env RABBITMQ_PORT_5672_TCP_ADDR=1.2.3.4).
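A minimal Java sketch of reading those link variables (the class name is just illustrative):

public class RabbitConfig {
    public static void main(String[] args) {
        // These variables are injected by --link rabbitmq (or --link other:rabbitmq)
        String host = System.getenv("RABBITMQ_PORT_5672_TCP_ADDR");
        String port = System.getenv("RABBITMQ_PORT_5672_TCP_PORT");
        System.out.println("Connecting to amqp://" + host + ":" + port);
    }
}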