Airflow: DockerOperator fails with Permission Denied error - docker

I'm trying to run a Docker container via Airflow but I'm getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock, which is a questionable solution at best, but it still didn't work for me (even after restarting Docker). If anyone has managed to solve this problem, please let me know!
Here is my DAG:
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="/bin/bash echo HI!",
    auto_remove=True,
    dag=dag
)
And here is the error that I'm getting:
[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
Here is my setup:
Dockerfile:
FROM apache/airflow
RUN pip install --upgrade --user pip && \
    pip install --user psycopg2-binary && \
    pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/
docker-compose.yaml:
version: "3"
services:
postgres:
image: "postgres:9.6"
container_name: "postgres"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
initdb:
image: learning/airflow
entrypoint: airflow initdb
depends_on:
- postgres
webserver:
image: learning/airflow
restart: always
entrypoint: airflow webserver
environment:
- EXECUTOR=Local
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
interval: 1m
timeout: 5m
retries: 3
ports:
- "8080:8080"
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock
scheduler:
image: learning/airflow
restart: always
entrypoint: airflow scheduler
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
interval: 1m
timeout: 5m
retries: 3
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock

Even though this question is old, my answer may still help other people who run into this problem.
I've found an elegant (and functional) solution at the following link:
https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/
Quoting the article:
There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).
--
From the above link, the solution is to add an additional docker-proxy service that exposes the local Docker socket (/var/run/docker.sock) over TCP as tcp://docker-proxy:2375, using socat:
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Then replace the kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' for all DockerOperators, as in the sketch below.
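For illustration, a task wired to the proxy might look like this; a minimal sketch reusing the dag object and task names from the question, with the docker-proxy service name taken from the compose snippet above:

from airflow.operators.docker_operator import DockerOperator

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="echo HI",  # alpine has no /bin/bash, so plain echo is used here
    # Point at the socat proxy instead of the default unix://var/run/docker.sock
    docker_url="tcp://docker-proxy:2375",
    auto_remove=True,
    dag=dag,
)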

If the volume is already mapped into the container, run chmod on the HOST:
chmod 777 /var/run/docker.sock
That solved it for me.

I ran into this issue on Windows (dev environment), using the puckel image.
Note that the file /var/run/docker.sock does not exist in this image; I created it and changed its owner to the airflow user that already exists in the puckel image:
RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock

You can try to run your Docker image with:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name

I remember having issues similar to this. What I did, on top of what you have already done, was to dynamically add the docker group inside the container, with the GID of docker.sock, in a startup script like this:
#!/usr/bin/env bash
ARGS=$*

# Check if the docker sock is mounted
if [[ -S /var/run/docker.sock ]]; then
    GROUP=$(stat -c %g /var/run/docker.sock)
    groupadd -g "$GROUP" docker
    usermod -aG docker airflow
else
    echo "Docker unix sock not found. DockerOperators will not run."
fi

su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"
That way you don't touch the socket's permissions and the airflow user is still able to interact with it.
Some other considerations:
I had to redeclare the default user in the Dockerfile so the container starts as root.
Airflow itself is then run as the airflow user (via the su at the end of the script).
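To see what the script is reacting to, here is a rough, illustrative Python equivalent of its stat -c %g probe (not part of the original answer):

import grp
import os
import stat

SOCK = "/var/run/docker.sock"
if os.path.exists(SOCK) and stat.S_ISSOCK(os.stat(SOCK).st_mode):
    gid = os.stat(SOCK).st_gid  # same value that `stat -c %g` prints
    try:
        name = grp.getgrgid(gid).gr_name
    except KeyError:
        name = "<unnamed>"  # the script's groupadd creates a named group for it
    print("docker.sock gid:", gid, "group:", name)
else:
    print("Docker unix sock not found. DockerOperators will not run.")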

First things first: we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and the Docker server communicate, as happens here when the DockerOperator() launches a separate Docker container from inside the running Airflow container. The UNIX domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that you need to do the following:
1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)
sudo groupadd docker
sudo usermod -aG docker <your_user>
newgrp docker
1.2. Log out and log back in on your host machine
2.1. Get the Docker group id in the terminal on your host machine
cut -d: -f3 < <(getent group docker)
2.2. Add the Airflow user to this docker group (use the GID from the line above) in Airflow's docker-compose.yaml:
group_add:
  - <docker_gid>
3.1. Get your user id in the terminal on your host machine
id -u <your_user>
3.2. Set AIRFLOW_UID to match your user id (use the UID from the line above) on the host machine, and keep the GID at 0, in Airflow's docker-compose.yaml:
user: "${AIRFLOW_UID:-50000}:0"
4.1. If you're creating your own Dockerfile for the separate container, add your user there
ARG UID=<your_uid>
ENV USER=<your_user>
RUN useradd -u $UID -ms /bin/bash $USER
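Once all of that is in place, you can sanity-check from inside the Airflow container that the socket is actually usable. A quick sketch using the docker Python SDK (which the Dockerfile in the question installs); if the group setup didn't take effect, ping() fails with the same permission error as in the question:

import docker

# Talks to unix://var/run/docker.sock by default.
client = docker.from_env()
print(client.ping())                # True when the daemon is reachable
print(client.version()["Version"])  # daemon version string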

Add another leading / to /var/run/docker.sock (the source part, before the :) in volumes, as below (a workaround commonly needed on Windows hosts, where the single-slash path gets mangled):
volumes:
  - //var/run/docker.sock:/var/run/docker.sock

In my case, putting sudo before the command helped. I ran
sudo docker-compose up -d --build dev
instead of
docker-compose up -d --build dev
and it worked; the issue was a lack of permissions.

Try:
sudo groupadd docker
sudo usermod -aG docker $USER

Related

podman-compose failing to compose

I realize podman-compose is still under development. I'm going to replace my docker stack with podman once I replace Debian with CentOS 8 on my PowerEdge server as part of my homelab. Right now I'm just testing out and playing around with podman on my Fedora machine.
OS: Fedora 32
KERNEL: 5.6.12-300.fc32.x86_64
PODMAN: 1.9.2
PODMAN-COMPOSE: 0.1.5
PROBLEM: podman-compose is failing and I'm unable to ascertain why.
Here's my docker-compose.yml:
version: "2.1"
services:
deluge:
image: linuxserver/deluge
container_name: deluge
network_mode: host
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
# - UMASK_SET=022 #optional
# - DELUGE_LOGLEVEL=error #optional
volumes:
- /home/mike/test/deluge:/config
- /home/mike/Downloads:/downloads
restart: unless-stopped
When I run podman-compose up, here is the output:
[mike@localhost test]$ podman-compose up
podman pod create --name=test --share net
ce389be26589efe4433db15d875844b2047ea655c43dc84dbe49f69ffabe867e
0
podman create --name=deluge --pod=test -l io.podman.compose.config-hash=123 -l io.podman.compose.project=test -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=deluge --network host -e PUID=1000 -e PGID=1000 -e TZ=America/New_York --mount type=bind,source=/home/mike/test/deluge,destination=/config --mount type=bind,source=/home/mike/Downloads,destination=/downloads --add-host deluge:127.0.0.1 --add-host deluge:127.0.0.1 linuxserver/deluge
Trying to pull registry.fedoraproject.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/linuxserver/deluge...
name unknown: Repo not found
Trying to pull registry.centos.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull docker.io/linuxserver/deluge...
Getting image source signatures
Copying blob a54f3db92256 done
Copying blob c114dc480980 done
Copying blob d0d29aaded3d done
Copying blob fa1dff0a3a53 done
Copying blob 5076df76a29a done
Copying blob a40b999f3c1e done
Copying config 31fddfa799 done
Writing manifest to image destination
Storing signatures
Error: error checking path "/home/mike/test/deluge": stat /home/mike/test/deluge: no such file or directory
125
podman start -a deluge
Error: unable to find container deluge: no container with name or ID deluge found: no such container
125
Then, finally, when I quit via Ctrl-C:
^CTraceback (most recent call last):
File "/home/mike/.local/bin/podman-compose", line 8, in <module>
sys.exit(main())
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 1093, in main
podman_compose.run()
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 625, in run
cmd(self, args)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 782, in wrapped
return func(*args, **kw)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 914, in compose_up
thread.join(timeout=1.0)
File "/usr/lib64/python3.8/threading.py", line 1005, in join
if not self._started.is_set():
File "/usr/lib64/python3.8/threading.py", line 513, in is_set
def is_set(self):
KeyboardInterrupt
I'm not experienced enough to read through this and figure out what the problem is, so I'm hoping to learn from you all.
Thanks!
There is an error in your path:
volumes:
  - /home/mike/test/deluge:/config
/home/mike/test/deluge: no such file or directory
Check the folder path: the bind-mount source /home/mike/test/deluge does not exist on the host.
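Unlike docker, podman does not create a missing bind-mount source directory for you, so create it before running podman-compose up (mkdir -p /home/mike/test/deluge). A Python equivalent, for completeness:

from pathlib import Path

# Equivalent of `mkdir -p /home/mike/test/deluge`; podman errors out on
# bind-mount sources that do not exist instead of creating them.
Path("/home/mike/test/deluge").mkdir(parents=True, exist_ok=True)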

PyHive Thrift transport exception: read 0 bytes

I'm trying to connect to Hive Server 2 running inside a docker container (from outside the container) via Python (PyHive 0.5, Python 2.7), using the DB-API (asynchronous) example:
from pyhive import hive
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
However, I'm getting following error
Traceback (most recent call last):
File "py_2.py", line 4, in <module>
conn = hive.connect(host='172.17.0.2', port='10001', auth='NOSASL')
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 64, in connect
return Connection(*args, **kwargs)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/pyhive/hive.py", line 164, in __init__
response = self._client.OpenSession(open_session_req)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 187, in OpenSession
return self.recv_OpenSession()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/TCLIService/TCLIService.py", line 199, in recv_OpenSession
(fname, mtype, rseqid) = iprot.readMessageBegin()
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 148, in readMessageBegin
name = self.trans.readAll(sz)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 60, in readAll
chunk = self.read(sz - have)
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 161, in read
self.__rbuf = BufferIO(self.__trans.read(max(sz, self.__rbuf_size)))
File "/home/foodie/anaconda2/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 132, in read
message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes
The docker image that I'm using is this (tag: mysql_corrected).
It runs the following services (as output by the jps command):
992 Master
1810 RunJar
259 DataNode
2611 Jps
584 ResourceManager
1576 RunJar
681 NodeManager
137 NameNode
426 SecondaryNameNode
1690 RunJar
732 HistoryServer
I'm launching the container using
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -p 18080:18080 -p 10002:10002 -p 10000:10000 -e 3306 -e 9084 -h sandbox -v /home/foodie/docker/w1:/usr/tmp/test rohitbarnwal7/spark:mysql_corrected bash
Furthermore, I perform the following steps to launch the Hive server inside the docker container:
Start mysql service: service mysqld start
Switch to directory /usr/local/hive: cd $HIVE_HOME
Launch Hive metastore server: nohup bin/hive --service metastore &
Launch Hive Server 2: hive --service hiveserver2 (note that the thrift-server port is already changed to 10001 in /usr/local/hive/conf/hive-site.xml)
Launch beeline shell: beeline
Connect beeline shell with Hive server-2: !connect jdbc:hive2://localhost:10001/default;transportMode=http;httpPath=cliservice
I've already tried the following things, without any luck:
Making Python 2.7.3 the default Python version inside the docker container (the original default is Python 2.6.6; Python 2.7.3 is installed inside the container but isn't the default)
Changing the Hive server port back to its default value, 10000
Connecting to the Hive server by running the same Python script inside the container (it still gives the same error)

nvidia-docker can't talk to: http://localhost:3476/docker/cli/json

nvidia-docker can't talk to http://localhost:3476/docker/cli/json
Traceback (most recent call last):
File "/usr/local/bin/nvidia-docker-compose", line 43, in <module>
resp = request.urlopen('http://{0}/docker/cli/json'.format(args.nvidia_docker_host)).read().decode()
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>
A fresh install of nvidia-docker fixed this:
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
Then to test it:
Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
I encountered this also: a customer wasn't managing to run nvidia-docker-compose. It turned out that even after reinstallations of docker and nvidia-docker, the query made by nvidia-docker-compose to localhost:3476 was not getting any response (see the nvidia-docker-compose code here).
I managed to solve this by writing the docker-compose file by hand, as they turn out to be quite simple. Follow this example, replace 375.66 with your nvidia driver version, and add as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
version: '2'
services:
  exampleservice0:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.

Odoo docker, how to create a database from the command line?

I want to create a database from the command line with docker. I tried these lines:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
But I got these errors:
2017-05-17 05:58:37,441 1 INFO ? odoo: Odoo version 10.0-20170207
2017-05-17 05:58:37,442 1 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2017-05-17 05:58:37,442 1 INFO ? odoo: addons paths: ['/var/lib/odoo/addons/10.0', u'/mnt/extra-addons', u'/usr/lib/python2.7/dist-packages/odoo/addons']
2017-05-17 05:58:37,442 1 INFO ? odoo: database: odoo@172.17.0.2:5432
2017-05-17 05:58:37,443 1 INFO ? odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 9, in <module>
odoo.cli.main()
File "/usr/lib/python2.7/dist-packages/odoo/cli/command.py", line 64, in main
o.run(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 164, in run
main(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 138, in main
odoo.service.db._create_empty_database(db_name)
File "/usr/lib/python2.7/dist-packages/odoo/service/db.py", line 79, in _create_empty_database
with closing(db.cursor()) as cr:
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 622, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 164, in __init__
self._cnx = pool.borrow(dsn)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 505, in _locked
return fun(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 573, in borrow
**connection_info)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "172.17.0.2" and accepting
TCP/IP connections on port 5432?
What is the problem, and is there another way to do this?

docker compose, vagrant and an insecure registry

I have set up docker-compose to pull my image from a custom registry.
Here is what the yaml file looks like:
my_service:
  image: d-myrepo:5000/mycompany/my_service:latest
  ports:
    - "8079:8079"
Now if I run vagrant up, I get errors:
==> default: File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
==> default: docker.errors
==> default: .
==> default: DockerException
==> default: :
==> default: HTTPS endpoint unresponsive and insecure mode isn't enabled.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/usr/local/bin/docker-compose -f "/vagrant/docker-compose.yml" up -d
Stdout from the command:
Stderr from the command:
stdin: is not a tty
Creating vagrant_y2y_1...
Pulling image d-myrepo:5000/mycompany/my_service:latest...
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 31, in main
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 21, in sys_dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 27, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.docopt_command", line 24, in dispatch
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.command", line 59, in perform_command
File "/code/build/docker-compose/out00-PYZ.pyz/compose.cli.main", line 464, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.project", line 208, in up
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 214, in recreate_containers
File "/code/build/docker-compose/out00-PYZ.pyz/compose.service", line 197, in create_container
File "/code/build/docker-compose/out00-PYZ.pyz/docker.client", line 710, in pull
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 67, in resolve_repository_name
File "/code/build/docker-compose/out00-PYZ.pyz/docker.auth.auth", line 46, in expand_registry_url
docker.errors.DockerException: HTTPS endpoint unresponsive and insecure mode isn't enabled.
I read on the internet that it has to do with having an insecure registry.
It only works if I edit the file
/etc/default/docker
with content
DOCKER_OPTS="-r=true --insecure-registry d-myrepo:5000 ${DOCKER_OPTS}"
restart the docker service, and manually pull the image, i.e.
docker pull d-myrepo:5000/mycompany/my_service:latest
Is there a way to avoid this error and have the provisioning run smoothly? Maybe I am missing an option inside the docker-compose.yml file?
Thanks for your feedback. The best way to achieve this is to set up the Vagrant provisioning the following way:
config.vm.provision :docker
config.vm.provision :docker_compose
config.vm.provision "shell", path: "provision.sh", privileged: false
while the shell script provision.sh includes the following relevant lines:
sudo echo "DOCKER_OPTS=\"-r=true --insecure-registry my_repo:5000 \${DOCKER_OPTS}\"" | sudo tee /etc/default/docker
sudo service docker restart
sudo /usr/local/bin/docker-compose -f /vagrant/docker-compose.yml up -d --allow-insecure-ssl
