UPDATE:
I've tried implementing the accepted answer from here (Telegraf can not connect to Docker sock) in my docker-compose file, like this:
telegraf3:
  image: telegraf
  user: telegraf:$$(stat -c '%g' /var/run/docker.sock)
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock
I am getting this error:
Error response from daemon: unable to find group $(stat -c '%g' /var/run/docker.sock): no matching entries in group file
How can I fix this issue? :)
Background:
I'm trying to run Telegraf (https://github.com/influxdata/telegraf) with the docker input plugin. I'm running Telegraf via Docker Compose, and I've configured it roughly like this:
telegraf3:
  image: telegraf
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock:rw
  env_file:
    - ./telegraf/telegraf.env
  depends_on:
    - influxdb
The Telegraf configuration uses the docker input plugin to interact with docker.sock. It doesn't work; I get a permission-related error:
test-grafana-telegraf3-1 | 2022-12-29T12:09:10Z E! [inputs.docker] Error in plugin: Got permission denied while trying to connect to the Docker daemon socket at unix:///docker.sock: Get "http://%2Fdocker.sock/v1.24/info": dial unix /docker.sock: connect: permission denied
Basically, the entrypoint.sh script runs telegraf (the application) as the telegraf user, which can't access docker.sock.
There's a fix for this issue described here: Telegraf can not connect to Docker sock
Issue
As I am using Docker Compose, I would like this fix to be defined in the compose file, and not to depend on me starting the container with docker run.
I've tried this:
telegraf3:
  image: telegraf
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock:rw
  env_file:
    - ./telegraf/telegraf.env
  depends_on:
    - influxdb
  command: ["bash -c -u telegraf $$(stat -c '%g' /var/run/docker.sock)", "/entrypoint.sh"]
But then I get this error:
test-grafana-telegraf3-1 | setpriv: failed to execute bash -c -u telegraf $(stat -c '%g' /var/run/docker.sock): No such file or directory
test-grafana-telegraf3-1 exited with code 127
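The root cause of both errors is that Compose never performs shell command substitution: $$(stat -c '%g' /var/run/docker.sock) is passed through literally ($$ only un-escapes to $), so Docker looks for a group literally named $(stat ...). A minimal sketch of one common workaround, assuming the GID is computed outside the compose file and injected through ordinary variable substitution from a .env file (the DOCKER_GID variable name and the value 998 are placeholders of my own):

# .env - regenerate whenever the socket's group changes, e.g. with:
#   echo "DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)" > .env
DOCKER_GID=998

# docker-compose.yml
telegraf3:
  image: telegraf
  user: "telegraf:${DOCKER_GID}"  # plain variable substitution, no shell needed
  volumes:
    - ./telegraf/telegraf3.conf:/etc/telegraf/telegraf.conf
    - /var/run/docker.sock:/var/run/docker.sock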
Odoo 15, installed on an Ubuntu server in a Docker container.
I didn't install it myself, and the admin tells me he can't solve this problem.
site1@site1:/var/docker/odoo$ docker exec -it odoo bash
odoo@f6740a7479b8:/$ odoo
2022-07-22 20:23:53,743 59 INFO ? odoo: Odoo version 15.0-20220620
2022-07-22 20:23:53,743 59 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2022-07-22 20:23:53,743 59 INFO ? odoo: addons paths: ['/usr/lib/python3/dist-packages/odoo/addons', '/var/lib/odoo/.local/share/Odoo/addons/15.0', '/var/odoo/custom']
2022-07-22 20:23:53,743 59 INFO ? odoo: database: odoo@db:5432
2022-07-22 20:23:53,914 59 INFO ? odoo.addons.base.models.ir_actions_report: Will use the Wkhtmltopdf binary at /usr/local/bin/wkhtmltopdf
2022-07-22 20:23:54,150 59 INFO ? odoo.service.server: HTTP service (werkzeug) running on 0.0.0.0:8069
Traceback (most recent call last):
File "/usr/bin/odoo", line 8, in <module>
odoo.cli.main()
File "/usr/lib/python3/dist-packages/odoo/cli/command.py", line 61, in main
o.run(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 179, in run
main(args)
File "/usr/lib/python3/dist-packages/odoo/cli/server.py", line 173, in main
rc = odoo.service.server.start(preload=preload, stop=stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 1356, in start
rc = server.run(preload, stop)
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 907, in run
self.start()
File "/usr/lib/python3/dist-packages/odoo/service/server.py", line 877, in start
self.socket.bind((self.interface, self.port))
OSError: [Errno 98] Address already in use
This error occurs when I try to update a module or just run odoo to see its output. Otherwise, Odoo works: I can develop and update modules manually from the browser.
I tried the following solutions:
Adding xmlrpc_port = 7654 to the config file: it didn't work, the error still occurred, and the Odoo web interface wasn't available.
Changing the ports in the docker-compose file:
version: '3.1'
services:
  web:
    build: ./etc/odoo
    container_name: odoo
    depends_on:
      - db
    ports:
      - "8069:8069"
I tried all variations; none helped. How can I solve this problem?
Your container has probably already started Odoo, and you are trying to start it a second time; the running instance is already bound to port 8069. To execute odoo commands in the container, you can try the --no-http parameter.
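For example, to update a module from inside the container without binding the HTTP port a second time (the database and module names are placeholders), something like this should work:

odoo --no-http --stop-after-init -d <database> -u <module>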
Docker Desktop is using Linux containers.
(Yes, I tried this: Docker Error: failed to register layer: Error processing tar file(exit status 1): "...msader15.dll.mui: no such file or directory", but using Docker Desktop with Windows containers caused the docker-compose command to fail with Error response from daemon: operating system is not supported.)
Structure
- engine-load-tests
  |- Dockerfile
  |- docker-compose.yml
  |- engine_load_tester_locust\
     |- main.py
  |- WinPerfCounters\ [I know - the casing is inconsistent]
     |- main.py
     |- Dockerfile
     |- environment config files, README, other files
Dockerfile
FROM python:3.9.6-windowsservercore-1809
COPY . ./WinPerfCounters/
RUN pip install --no-cache-dir -r ./WinPerfCounters/requirements.txt
CMD [ "python", "WinPerfCounters/main.py", "WinPerfCounters/load_test.conf" ]
Docker-Compose
version: "3.3"
services:
win_perf_counters:
container_name: win_perf_counters
platform: windows
image: python:3.9.6-windowsservercore-1809
build: ./WinPerfCounters
depends_on:
- influxdb
links:
- influxdb
Then other containers for locust, influx, and grafana...
Output - Snippets
------
> [python:3.9.6-windowsservercore-1809 1/3] FROM docker.io/library/python:3.9.6-windowsservercore-1809@sha256:54b7eadfbbc3a983bf6ea80eb7478b68d46267bbbcc710569972c140247ccd5e:
-----
failed to solve: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): link /Files/Program Files/common files/Microsoft Shared/Ink/en-US/micaut.dll.mui /Files/Program Files (x86)/common files/Microsoft Shared/ink/en-US/micaut.dll.mui: no such file or directory
You can't run Windows containers (i.e. derived from some Windows base image like windowsservercore-1809) when Docker Desktop is set to Linux containers.
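Assuming a default Docker Desktop installation, you can switch engines either from the tray icon ("Switch to Windows containers...") or from an elevated PowerShell; the -SwitchDaemon flag toggles between the Linux and Windows engines:

& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon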
I'm trying to run a Docker container via Airflow, but I'm getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock (a questionable solution at best), but even that didn't work for me, even after restarting Docker. If anyone has managed to solve this problem, please let me know!
Here is my DAG:
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="/bin/bash echo HI!",
    auto_remove=True,
    dag=dag
)
And here is the error that I'm getting:
[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
Here is my setup:
Dockerfile:
FROM apache/airflow
RUN pip install --upgrade --user pip && \
    pip install --user psycopg2-binary && \
    pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/
docker-compose.yaml:
version: "3"
services:
postgres:
image: "postgres:9.6"
container_name: "postgres"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
initdb:
image: learning/airflow
entrypoint: airflow initdb
depends_on:
- postgres
webserver:
image: learning/airflow
restart: always
entrypoint: airflow webserver
environment:
- EXECUTOR=Local
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
interval: 1m
timeout: 5m
retries: 3
ports:
- "8080:8080"
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock
scheduler:
image: learning/airflow
restart: always
entrypoint: airflow scheduler
healthcheck:
test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
interval: 1m
timeout: 5m
retries: 3
depends_on:
- postgres
volumes:
- ./airflow/dags:/opt/airflow/dags
- ./airflow/plugins:/opt/airflow/plugins
- ./data/logs:/opt/airflow/logs
- /var/run/docker.sock:/var/run/docker.sock
Even though this question is old, my answer may still help other people who run into this problem.
I've found an elegant (and functional) solution at the following link:
https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/
Quoting the article:
There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).
--
From the above link, the solution is to:
Add an additional service, docker-proxy, which exposes the local Docker socket (/var/run/docker.sock) as tcp://docker-proxy:2375 using socat:
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Replace the kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' for all DockerOperators.
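Applied to the DAG from the question, the only relevant change is the docker_url kwarg; a rough sketch (I also simplified the command, since alpine does not ship bash):

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="echo HI",
    docker_url="tcp://docker-proxy:2375",  # instead of the default unix socket
    auto_remove=True,
    dag=dag,
)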
If the volume is already mapped into the container, run chmod on the HOST:
chmod 777 /var/run/docker.sock
That solved it for me.
I ran into this issue on Windows (dev environment), using the puckel image.
Note that the file /var/run/docker.sock does not exist in this image; I created it and changed its owner to the airflow user that already exists in the puckel image:
RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock
You can try running your image with:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name
I remember having issues similar to this. What I did, on top of what you have already done, was to dynamically add a docker group inside the container, with the GID of docker.sock, in a startup script like this:
#!/usr/bin/env bash
ARGS=$*
# Check if the docker socket is mounted
if [[ -S /var/run/docker.sock ]]; then
    # Create a docker group matching the socket's GID, then add airflow to it
    GROUP=$(stat -c %g /var/run/docker.sock)
    groupadd -g "$GROUP" docker
    usermod -aG docker airflow
else
    echo "Docker unix sock not found. DockerOperators will not run."
fi
# Drop privileges and hand off to the normal entrypoint
su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"
That way you don't touch the socket's permissions and the airflow user is still able to interact with it.
Some other considerations:
I had to redeclare the default user in the Dockerfile so the container starts as root.
The script then runs Airflow as the airflow user.
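A hypothetical sketch of that Dockerfile (startup.sh refers to the script above; apache/airflow is assumed as the base image, as in the question):

FROM apache/airflow
# Start as root so the startup script can run groupadd/usermod;
# the script itself drops privileges back to the airflow user via su.
USER root
COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
ENTRYPOINT ["/startup.sh"]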
First things first: we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and Docker server communicate; in this case, it is used to launch a separate Docker container, via the DockerOperator(), from inside the running Airflow container. The UNIX domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that, you need to do the following:
1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)
sudo groupadd docker
sudo usermod -aG docker <your_user>
newgrp docker
1.2. Log out and log back in on your host machine
2.1. Get the Docker group id in the terminal on your host machine
cut -d: -f3 < <(getent group docker)
2.2. Add the Airflow user to this docker group (use the GID from the line above) in Airflow's docker-compose.yaml
group_add:
  - <docker_gid>
3.1. Get your user id in the terminal on your host machine
id -u <your_user>
3.2. Set AIRFLOW_UID to match your user id (the UID from the line above) and AIRFLOW_GID to 0 on the host machine, so that the user entry in Airflow's docker-compose.yaml picks them up (a consolidated snippet follows step 4.1)
user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
4.1. If you're creating your own Dockerfile for the separate container, add your user there
ARG UID=<your_uid>
ENV USER=<your_user>
RUN useradd -u $UID -ms /bin/bash $USER
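Putting steps 2.2 and 3.2 together, the relevant part of the Airflow service in docker-compose.yaml would look roughly like this (999 stands for the docker GID found in step 2.1):

webserver:
  image: learning/airflow
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
  group_add:
    - 999  # docker group GID from step 2.1
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock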
Add another leading / to /var/run/docker.sock (at the source, the part before the :) in volumes, as below:
volumes:
  - //var/run/docker.sock:/var/run/docker.sock
In my case, prefixing the command with sudo helped. I ran
sudo docker-compose up -d --build dev
instead of
docker-compose up -d --build dev
and it worked; the issue was a lack of permissions.
Try
sudo groupadd docker
sudo usermod -aG docker $USER
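The new group membership only takes effect in a fresh session, so log out and back in (or run newgrp docker); you can then check that the daemon is reachable without sudo:

newgrp docker
docker run hello-world  # should pull and run without sudo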
I pulled a Docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but none of them work; they all fail with the same error message.
Can you give me some advice on avoiding this error?
I found the solution, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7.
What we have to do is create a volume on the host mapped to a path inside the container:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
Then pass this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happened because /opt/bitnami/spark is not writable, so we have to mount a volume to work around that.
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occurred because the location /opt/bitnami/spark/ is not writable. To resolve this issue, modify the Spark master service: run it as the root user and mount a volume for the required jars.
See the working block of the spark service from my docker-compose file:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z
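With that service up, the spark-shell invocation from the question should work inside the container once it points the Ivy cache at the mounted path (package coordinates taken from the question; make sure the artifact's Scala version matches your Spark build):

docker exec -it spark spark-shell \
  --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0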