docker-compose, Vagrant and an insecure registry - docker

I have set up docker-compose to pull my image from a custom registry.
Here is what the YAML file looks like:
my_service:
  image: d-myrepo:5000/mycompany/my_service:latest
  ports:
    - "8079:8079"
Now if I run vagrant up, it fails with errors:
==> default: File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
==> default: docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/usr/local/bin/docker-compose -f "/vagrant/docker-compose.yml" up -d
Stdout from the command:
Stderr from the command:
stdin: is not a tty
Creating vagrant_y2y_1...
Pulling image d-myrepo:5000/mycompany/my_service:latest...
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 464, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.project", line 208, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 214, in recreate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 197, in create_container
File "/code/build/docker-compose/out00-PYZ.pyz/docker.client", line 710, in pull
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 67, in resolve_repository_name
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
I read on the internet that it has to do with having an insecure registry.
It only works if I edit the file
/etc/default/docker
with the content
DOCKER_OPTS="-r=true --insecure-registry d-myrepo:5000 ${DOCKER_OPTS}"
then restart the Docker service and manually pull the image, i.e.
docker pull d-myrepo:5000/mycompany/my_service:latest
Is there a way to avoid this error and have the provisioning run smoothly? Maybe I am missing an option inside the docker-compose.yml file?

Thanks for your feedback. The best way to achieve this is to set up the Vagrant provisioning the following way:
config.vm.provision :docker
config.vm.provision :docker_compose
config.vm.provision "shell", path: "provision.sh", privileged: false
while the shell script provision.sh includes the following relevant lines:
sudo echo "DOCKER_OPTS=\"-r=true --insecure-registry my_repo:5000 \${DOCKER_OPTS}\"" | sudo tee /etc/default/docker
sudo service docker restart
sudo /usr/local/bin/docker-compose -f /vagrant/docker-compose.yml up -d --allow-insecure-ssl
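For what it's worth, on newer Docker engines the same setting lives in /etc/docker/daemon.json rather than DOCKER_OPTS; a minimal sketch, using the registry host from the question:

{
  "insecure-registries": ["d-myrepo:5000"]
}

followed by the same docker service restart.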

Related

Can't connect Dockerized project to RabbitMQ or Cassandra container

I have a project on my PC; the project uses RabbitMQ and Cassandra, both of which are installed on Docker. For the project to run successfully, the RabbitMQ and Cassandra containers must be started.
I want to dockerize my project. The Dockerfile is as follows:
FROM python
WORKDIR /docker/test/samt
COPY . /docker/test/samt
RUN pip install --no-cache-dir --upgrade -r /docker/test/samt/requirements.txt
CMD ["python3","app.py","runserver"]
When I create a container from the built image, I get the following error in my Python code:
/usr/local/lib/python3.10/site-packages/flask_sqlalchemy/__init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
warnings.warn(FSADeprecationWarning(
Traceback (most recent call last):
File "/docker/test/samt/app.py", line 16, in <module>
FLASK_APP = create_app()
File "/docker/test/samt/project/application.py", line 16, in create_app
rabbit.init_app(app)
File "/docker/test/samt/project/extentions.py", line 27, in init_app
self.connection = pika.BlockingConnection(params)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.10/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
This error means pika cannot connect to the RabbitMQ container.
I think my dockerized project cannot reach RabbitMQ, but the RabbitMQ container is started and everything is on the same bridge network. How can I connect the RabbitMQ and Cassandra containers to my dockerized project?
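One common cause of this symptom is connecting to localhost from inside the application container: on a shared bridge network, containers reach each other by container name or network alias, while localhost refers to the container itself. A minimal pika sketch, assuming (as a placeholder, not the poster's actual setup) that the RabbitMQ container is reachable under the name rabbitmq:

import pika

# Connect by the RabbitMQ container's name on the shared network;
# "localhost" inside the app container would point at the app container itself.
params = pika.ConnectionParameters(host="rabbitmq", port=5672)
connection = pika.BlockingConnection(params)
print(connection.is_open)  # True if the broker was reachable
connection.close()

The same applies to the Cassandra driver: use the Cassandra container's name as the contact point instead of localhost.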

Airflow: DockerOperator fails with Permission Denied error

I'm trying to run a Docker container via Airflow but am getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock, which is a questionable solution at best, but it still didn't work for me (even after restarting Docker). If anyone has managed to solve this problem, please let me know!
Here is my DAG:
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}
dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)
hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="/bin/bash echo HI!",
    auto_remove=True,
    dag=dag
)
And here is the error that I'm getting:
[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
Here is my setup:
Dockerfile:
FROM apache/airflow
RUN pip install --upgrade --user pip && \
    pip install --user psycopg2-binary && \
    pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/
docker-compose.yaml:
version: "3"
services:
postgres:
image: "postgres:9.6"
container_name: "postgres"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
initdb:
image: learning/airflow
entrypoint: airflow initdb
depends_on:
- postgres
webserver:
image: learning/airflow
restart: always
entrypoint: airflow webserver
environment:
- EXECUTOR=Local
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
interval: 1m
timeout: 5m
retries: 3
ports:
- "8080:8080"
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock
scheduler:
image: learning/airflow
restart: always
entrypoint: airflow scheduler
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
interval: 1m
timeout: 5m
retries: 3
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock
Even though this question is old, my answer can still help other people who are having this problem.
I've found an elegant (and functional) solution at the following link:
https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/
Quoting the article:
There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).
--
From the above link, the solution is to:
add an additional service docker-proxy that exposes the local Docker socket (/var/run/docker.sock) over TCP at tcp://docker-proxy:2375 using socat:
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Then replace the kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' for all DockerOperators, as in the sketch below.
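For illustration, the operator from the question would then look roughly like this (a sketch, not the article's exact code; only docker_url is the relevant change):

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="echo HI",  # simplified command; alpine does not ship /bin/bash
    auto_remove=True,
    docker_url="tcp://docker-proxy:2375",  # instead of the default unix://var/run/docker.sock
    dag=dag,
)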
If the volume is already mapped into the container, run chmod on the HOST:
chmod 777 /var/run/docker.sock
That solved it for me.
I ran into this issue on Windows (dev environment), using the puckel image.
Note that the file /var/run/docker.sock does not exist in this image; I created it and changed its owner to the airflow user that already exists in the puckel image:
RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock
You can try to run your image with:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name
I remember having issues similar to this, and what I did, on top of what you have already done, was to dynamically add the docker group in the container with the GID of docker.sock, using a startup script like this:
#!/usr/bin/env bash
ARGS=$*
# Check if docker sock is mounted
if [[ -S /var/run/docker.sock ]]; then
    GROUP=`stat -c %g /var/run/docker.sock`
    groupadd -g $GROUP docker
    usermod -aG docker airflow
else
    echo "Docker unix sock not found. DockerOperators will not run."
fi
su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"
That way you don't touch the socket's permissions and the airflow user is still able to interact with it.
Some other considerations:
- I had to redeclare the default user in the Dockerfile so that it starts as root
- Run airflow as the airflow user
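A rough sketch of that Dockerfile wiring, assuming the startup script above is saved as entrypoint-wrapper.sh (the name is illustrative):

FROM apache/airflow
USER root
# groupadd/usermod in the wrapper need root; the wrapper itself drops to the airflow user
COPY entrypoint-wrapper.sh /entrypoint-wrapper.sh
RUN chmod +x /entrypoint-wrapper.sh
ENTRYPOINT ["/entrypoint-wrapper.sh"]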
First things first: we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and the Docker server communicate; in this case, it is used to launch a separate Docker container with the DockerOperator() from inside the running Airflow container. The UNIX domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that you need to do the following:
1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)
sudo groupadd docker
sudo usermod -aG docker <your_user>
newgrp docker
1.2. Log out and log back in on your host machine
2.1. Get the Docker group id in the terminal on your host machine
cut -d: -f3 < <(getent group docker)
2.2. Add the Airflow user to this docker group (use the GID from the line above) in the Airflow's docker-compose.yaml
group_add:
  - <docker_gid>
3.1. Get your user id in the terminal on your host machine
id -u <your_user>
3.2. Set your AIRFLOW_UID to match your user id (use the UID from the line above) on the host machine and AIRFLOW_GID to 0 in Airflow's docker-compose.yaml:
user: "${AIRFLOW_UID:-50000}:0"
4.1. If you're creating your own Dockerfile for the separate container, add your user there
ARG UID=<your_uid>
ENV USER=<your_user>
RUN useradd -u $UID -ms /bin/bash $USER
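Putting steps 2.2 and 3.2 together, the relevant fragment of the Airflow docker-compose.yaml would look roughly like this (the service name and the GID 999 are placeholders; use the GID from step 2.1):

services:
  airflow-scheduler:
    image: apache/airflow
    user: "${AIRFLOW_UID:-50000}:0"
    group_add:
      - "999"  # the docker GID from step 2.1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock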
Add another leading / to /var/run/docker.sock (at the source, which is the part before the :) in volumes, as below:
volumes:
  - //var/run/docker.sock:/var/run/docker.sock
In my case, adding 'sudo' before the command helped. I ran
sudo docker-compose up -d --build dev
instead of
docker-compose up -d --build dev
and it worked. The issue was a lack of rights.
Try
$ sudo groupadd docker
$ sudo usermod -aG docker $USER

How to call docker-compose run non-interactively over SSH?

Description of the situation: I have an Ubuntu VM running on a remote computer B. I can access computer B via SSH from my host (computer A).
When I enter an interactive shell via SSH on computer B, I can execute all my docker commands without any trouble. The steps are the following:
ssh user@remote_IP (enters an interactive shell on the remote machine)
docker-compose run -d --name wfirst --user 1000:1000 container (creates a container and starts it)
This works perfectly; my container is created, up and running.
However, if I try to run that command in a non-interactive way by writing in my host terminal:
ssh user@remoteIP "cd /path_to_docker_file; docker-compose run -d --name wfirst --user 1000:1000 container"
the command does not succeed: my container is not created and does not run. I was able to get more information by using the --verbose flag on the docker-compose command.
Here is the interesting part of the output of the successful method:
compose.config.config.find: Using configuration files: ./docker-compose.yml
WARNING: compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
WARNING: compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
urllib3.connectionpool._new_conn: Starting new HTTP connection (1): localhost:2375
urllib3.connectionpool._make_request: http://localhost:2375 "GET /v1.25/version HTTP/1.1" 200 758
compose.cli.command.get_client: docker-compose version 1.21.0, build unknown
docker-py version: 3.4.1
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.1c 28 May 2019
compose.cli.command.get_client: Docker base_url: http://localhost:2375
...
We can see that an HTTP connection to Docker is successfully established. The command then continues its execution and is able to create the container from the Docker image.
Here is the output of the failing method:
compose.config.config.find: Using configuration files: ./docker-compose.yml
compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 384, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 380, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 267, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 367, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 386, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 306, in _raise_timeout
raise ReadTimeoutError(self, url, "Read timed out. (read timeout=%s)" % timeout_value)
urllib3.exceptions.ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 11, in <module>
load_entry_point('docker-compose==1.21.0', 'console_scripts', 'docker-compose')()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 71, in main
command()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 124, in perform_command
project = project_from_options('.', options)
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 41, in project_from_options
compatibility=options.get('--compatibility'),
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 121, in get_project
host=host, environment=environment
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 95, in get_client
version_info = six.iteritems(client.version())
File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 198, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
We can see that the HTTP connection cannot be established.
Why do I need to send my commands in a non-interactive way? Originally, I wanted to send those commands using Jenkins (I added an SSH plugin to Jenkins), and I noticed that the docker commands were not working (same output as shown in this post). After a couple of tests, I realised that when Jenkins uses SSH, it sends the commands in a non-interactive way:
- ssh user@remote_IP "commands_to_execute"
This non-interactive way is not an issue for simple commands, but it appears to be an issue for some docker commands which, I guess, require an interactive shell?
Has someone found a workaround to successfully execute docker commands in a non-interactive shell? Any help, hint, or alternative solution will be greatly appreciated, as I have tried a lot of things without any success so far.
Did you check whether the Docker engine is listening on TCP or on a Unix socket?
Maybe it isn't reading the right environment variables when you log in via ssh user@host "command".
If it's running on a Unix socket (the default), try:
ssh user@remoteIP "cd /path_to_docker_file; DOCKER_HOST=unix:///var/run/docker.sock docker-compose run -d --name wfirst --user 1000:1000 container"
or if it's running on TCP, you can try:
ssh user@remoteIP "cd /path_to_docker_file; DOCKER_HOST=tcp://127.0.0.1:2375 docker-compose run -d --name wfirst --user 1000:1000 container"
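If in doubt, a quick sanity check is to compare what the non-interactive environment actually contains, e.g.:

ssh user@remoteIP 'env | grep -i docker'

Non-interactive SSH commands often end up with a different environment: ~/.bashrc commonly returns early for non-interactive shells, so a DOCKER_HOST exported there may simply be missing.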

Odoo docker: how to create a database from the command line?

I want to create a database from the command line with Docker. I tried these lines:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
But I got these errors:
2017-05-17 05:58:37,441 1 INFO ? odoo: Odoo version 10.0-20170207
2017-05-17 05:58:37,442 1 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2017-05-17 05:58:37,442 1 INFO ? odoo: addons paths: ['/var/lib/odoo/addons/10.0', u'/mnt/extra-addons', u'/usr/lib/python2.7/dist-packages/odoo/addons']
2017-05-17 05:58:37,442 1 INFO ? odoo: database: odoo@172.17.0.2:5432
2017-05-17 05:58:37,443 1 INFO ? odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 9, in <module>
odoo.cli.main()
File "/usr/lib/python2.7/dist-packages/odoo/cli/command.py", line 64, in main
o.run(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 164, in run
main(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 138, in main
odoo.service.db._create_empty_database(db_name)
File "/usr/lib/python2.7/dist-packages/odoo/service/db.py", line 79, in _create_empty_database
with closing(db.cursor()) as cr:
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 622, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 164, in __init__
self._cnx = pool.borrow(dsn)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 505, in _locked
return fun(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 573, in borrow
**connection_info)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "172.17.0.2" and accepting
TCP/IP connections on port 5432?
What is the problem? Is there another solution to do this?
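One frequent cause of this exact error is a race: the Postgres container is still initializing when Odoo first tries to connect. A minimal sketch that simply waits between the two commands (the sleep duration is arbitrary):

$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ sleep 10   # give Postgres time to finish its first-run initialization
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test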

Unable to use variable substitution in docker-compose

I use a Mac to run Docker and hence I am dependent on boot2docker or docker-machine. My application running in the container needs the Docker host's IP.
I saw that the variable substitution feature of docker-compose is exactly what I need. When I try something like the below, it works fine:
extra_hosts:
  - "somehost:162.242.195.82"
BUT when I try something like:
export DOCKER_HOST="64.88.225.66"
extra_hosts:
  - "host:${DOCKER_HOST}"
I get:
$ docker-compose build --no-cache
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/compose/cli/main.py", line 54, in main
File "/code/compose/cli/docopt_command.py", line 23, in sys_dispatch
File "/code/compose/cli/docopt_command.py", line 26, in dispatch
File "/code/compose/cli/main.py", line 169, in perform_command
File "/code/compose/cli/command.py", line 53, in project_from_options
File "/code/compose/cli/command.py", line 89, in get_project
File "/code/compose/cli/command.py", line 70, in get_client
File "/code/compose/cli/docker_client.py", line 28, in docker_client
File "/code/.tox/py27/lib/python2.7/site-packages/docker/client.py", line 58, in __init__
File "/code/.tox/py27/lib/python2.7/site-packages/docker/utils/utils.py", line 362, in parse_host
docker.errors.DockerException: Bind address needs a port: 64.88.225.66
docker-compose returned -1
I'm not sure why it wants a port. It wasn't happy when I gave it a port number either.
So, how can I get my Docker host IP into my container? It would be nice if I could pass commands like boot2docker ip or docker-machine ip default in the extra_hosts section, or if docker-compose could execute a script to get this value.
The problem is that DOCKER_HOST is a variable used by docker-compose itself (https://docs.docker.com/compose/reference/overview/#docker-host) and the format is wrong (it needs a tcp:// scheme and a port, I believe).
You'll have to choose a different variable to include it in extra_hosts:.
Otherwise you could just pass it in as an environment variable (environment: [DOCKER_HOST=]) and parse the IP address out of it in the container.
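For example, implementing the first suggestion with a differently named variable (DOCKER_HOST_IP is an arbitrary name chosen to avoid the clash):

export DOCKER_HOST_IP=$(docker-machine ip default)
docker-compose build --no-cache

with the compose file referencing it:

extra_hosts:
  - "host:${DOCKER_HOST_IP}"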
