Invalid arguments were passed to DockerOperator (retrieve_output) - docker

In trying to create the DockerOperator, I got this error:
Invalid arguments were passed to DockerOperator (task_id: t2). Invalid arguments were:
**kwargs: {'retrieve_output': True, 'retrieve_output_path': '/tmp/script.out'}
Here is my code:
from airflow.decorators import task, dag
from airflow.providers.docker.operators.docker import DockerOperator
from datetime import datetime

@dag(start_date=datetime(2023, 1, 1), schedule="@daily", catchup=False)
def docker_dag():
    @task()
    def t1():
        pass

    t2 = DockerOperator(
        task_id='t2',
        container_name="task_t2",
        image='stock_image:v1.0.2',
        command='python3 stock_data.py',
        docker_url="tcp://docker-proxy:2375",  # I have to use this on macOS or I'll get a Permission Denied error
        network_mode='bridge',
        xcom_all=True,
        retrieve_output=True,
        retrieve_output_path="/tmp/script.out",
        auto_remove=True,
        mount_tmp_dir=False
    )

    t1() >> t2

dag = docker_dag()
Note: Here is the link to the documentation, which shows that my arguments do exist. So why would I be getting an invalid-argument error for just these two specific arguments?

retrieve_output and retrieve_output_path were added to the DockerOperator in Docker provider version 2.2.0, so these arguments are only recognized from that provider version onward. :)
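If you're unsure which provider version your environment actually has, one quick way to check from Python (a generic check, assuming Python 3.8+ for importlib.metadata):

# Print the installed Docker provider version; the two arguments above
# require apache-airflow-providers-docker >= 2.2.0.
from importlib.metadata import version

print(version("apache-airflow-providers-docker"))

If this prints something below 2.2.0, upgrade the provider package in your Airflow image or environment.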

Related

Error: Invalid constructor input for SearchGoogleAdsRequest

I am trying to execute an Airflow DAG using the GoogleAdsToGcsOperator, and I am getting this error:
Failed to execute job 1874 for task run_operator (Invalid constructor input for SearchGoogleAdsRequest: customer_id: "xxxxxxx"
query: "SELECT campaign.id, ad_group.id, ad_group.name FROM ad_group"
page_size: 10000
; 13968)
I am using this image version "composer-2.1.2-airflow-2.3.4"
Does anyone know how to solve this error?

Airflow DockerOperator volumes and mounts

We have Airflow running (using Docker Compose) with several DAGs active. Last week we updated our Airflow to version 2.1.3.
This resulted in an error for a DAG where we use DockerOperator:
airflow.exceptions.AirflowException: Invalid arguments were passed to DockerOperator (task_id: t_docker). Invalid arguments were:
**kwargs: {'volumes':
I found this release note telling me that
The volumes parameter in airflow.providers.docker.operators.docker.DockerOperator and airflow.providers.docker.operators.docker_swarm.DockerSwarmOperator was replaced by the mounts parameter
So I changed our DAG from
t_docker = DockerOperator(
    task_id='t_docker',
    image='customimage:latest',
    container_name='custom_1',
    api_version='auto',
    auto_remove=True,
    volumes=['/home/airflow/scripts:/opt/airflow/scripts', '/home/airflow/data:/opt/airflow/data'],
    docker_url='unix://var/run/docker.sock',
    network_mode='bridge',
    dag=dag
)
to this
t_docker = DockerOperator(
    task_id='t_docker',
    image='customimage:latest',
    container_name='custom_1',
    api_version='auto',
    auto_remove=True,
    mounts=['/home/airflow/scripts:/opt/airflow/scripts', '/home/airflow/data:/opt/airflow/data'],
    docker_url='unix://var/run/docker.sock',
    network_mode='bridge',
    dag=dag
)
But now I get this error:
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/create?name=custom_1: Internal Server Error ("json: cannot unmarshal string into Go struct field HostConfig.HostConfig.Mounts of type mount.Mount")
What am I doing wrong?
The change isn't only in the parameter name; it's also a change to the Mount syntax.
You should replace
volumes=['/home/airflow/scripts:/opt/airflow/scripts','/home/airflow/data:/opt/airflow/data']
with:
mounts=[
    Mount(source="/home/airflow/scripts", target="/opt/airflow/scripts", type="bind"),
    Mount(source="/home/airflow/data", target="/opt/airflow/data", type="bind"),
]
So your code will be:
from docker.types import Mount

t_docker = DockerOperator(
    task_id='t_docker',
    image='customimage:latest',
    container_name='custom_1',
    api_version='auto',
    auto_remove=True,
    mounts=[
        Mount(source="/home/airflow/scripts", target="/opt/airflow/scripts", type="bind"),
        Mount(source="/home/airflow/data", target="/opt/airflow/data", type="bind"),
    ],
    docker_url='unix://var/run/docker.sock',
    network_mode='bridge',
    dag=dag
)
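As an aside, type="bind" is what reproduces the old host-path volumes behavior. If you want to mount an existing named Docker volume instead, only the source and type change; a minimal sketch, where my_data is a hypothetical volume name:

from docker.types import Mount

# "volume" refers to a named Docker volume rather than a host path
data_mount = Mount(source="my_data", target="/opt/airflow/data", type="volume")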

Airflow DockerOperator unable to mount tmp directory correctly

I am trying to run a simple python script within a docker run command scheduled with Airflow.
I have followed the instructions here: Airflow init.
My .env file:
AIRFLOW_UID=1000
AIRFLOW_GID=0
And the docker-compose.yaml is based on the default docker-compose.yaml. I had to add - /var/run/docker.sock:/var/run/docker.sock as an additional volume to run Docker inside of Docker.
My DAG is configured as follows:
""" this is an example dag """
from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago
from docker.types import Mount
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['info#foo.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 10,
'retry_delay': timedelta(minutes=5),
}
with DAG(
'msg_europe_etl',
default_args=default_args,
description='Process MSG_EUROPE ETL',
schedule_interval=timedelta(minutes=15),
start_date=days_ago(0),
tags=['satellite_data'],
) as dag:
download_and_store = DockerOperator(
task_id='download_and_store',
image='satellite_image:latest',
auto_remove=True,
api_version='1.41',
mounts=[Mount(source='/home/archive_1/archive/satellite_data',
target='/app/data'),
Mount(source='/home/dlassahn/projects/forecast-system/meteoIntelligence-satellite',
target='/app')],
command="python3 src/scripts.py download_satellite_images "
"{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
"'msg_europe' ",
)
download_and_store
The Airflow log:
[2021-08-03 17:23:58,691] {docker.py:231} INFO - Starting docker container from image satellite_image:latest
[2021-08-03 17:23:58,702] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 258, in _run_image
tty=self.tty,
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp037k87u6")
Trying to set mount_tmp_dir=False yields a DAG ImportError because of the unknown keyword argument mount_tmp_dir. (This might be an issue with the documentation.)
Nevertheless, I do not know how to configure the tmp directory correctly.
My Airflow Version: 2.1.2
There was a bug in Docker provider 2.0.0 which prevented the DockerOperator from running with a docker-in-docker solution.
You need to upgrade to the latest Docker provider, 2.1.0:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html#id1
You can do it by extending the image as described in https://airflow.apache.org/docs/docker-stack/build.html#extending-the-image with, for example, this Dockerfile:
FROM apache/airflow
RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
The operator will work out of the box in this case with a "fallback" mode (and a warning message), but you can also disable the mount that causes the problem. More explanation from https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html:
By default, a temporary directory is created on the host and mounted into the container to allow storing files that together exceed the default disk size of 10GB in a container. In this case, the path to the mounted directory can be accessed via the environment variable AIRFLOW_TMP_DIR.
If the volume cannot be mounted, a warning is printed and an attempt is made to execute the docker command without the temporary folder mounted. This is to make it work by default with a remote docker engine or when you run a docker-in-docker solution and the temporary directory is not shared with the docker engine. The warning is printed in the logs in this case.
If you know you run DockerOperator with a remote engine or via docker-in-docker, you should set the mount_tmp_dir parameter to False. In this case, you can still use the mounts parameter to mount already existing named volumes in your Docker engine to achieve a similar capability, where you can store files exceeding the default disk size of the container.
I had the same issue, and all the "recommended" ways of solving it here, including setting the mount_tmp_dir parameter as described, just led to other errors. The one solution that helped me was wrapping the code invoked by Docker in a VPN connection (this hack was actually taken from another Docker-powered DAG that used a VPN and worked well).
So the final solution looks like:
#!/bin/bash
connect_to_vpn.sh &   # bring the VPN up in the background
sleep 10              # give the tunnel time to establish
python3 my_func.py
sleep 10
stop_vpn.sh
wait -n
exit $?
To connect to the VPN I used openconnect. The tool can be installed with apt install and supports the AnyConnect protocol (which was my crucial requirement).
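For context, a rough sketch of how such a wrapper could be hooked into the DockerOperator; the image name and script path are hypothetical and assume the wrapper is baked into the image:

from airflow.providers.docker.operators.docker import DockerOperator

run_with_vpn = DockerOperator(
    task_id='run_with_vpn',
    image='my_vpn_image:latest',          # hypothetical image containing the wrapper
    command='bash /app/run_with_vpn.sh',  # hypothetical path to the script above
    mount_tmp_dir=False,
)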

No module named 'twilio'

I am trying to use Twilio to send a text message. When I put the demo script in a text editor, save it as something.py, and run it from the terminal, I receive the text message.
However, when I copy and paste the same code into Spyder (Anaconda 3 environment, Python 3.8), I get the following error.
Here is the code:
from twilio.rest import Client

account_sid = 'xxx'
auth_token = 'xxx'
client = Client(account_sid, auth_token)

message = client.messages \
    .create(
        body="Test.",
        from_='x',
        to='xxx'
    )

print(message.sid)
And here is the error:
File "xxx", line 14, in <module>
import twilio
ModuleNotFoundError: No module named 'twilio'
I have installed it with pip, pip3, conda, all that.
I'm not really used to this Anaconda environment thing, but when I search for the package "twilio", the only thing that shows up is r-twilio. I imagine this has something to do with the problem, but I've no idea. I tried creating a new environment with Python 2.7 and again only saw the r-twilio package.
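One quick diagnostic (generic, not Twilio-specific) is to run this inside Spyder to see which interpreter it actually uses; twilio has to be installed into that environment, not just into whatever environment your terminal uses:

import sys

# Spyder may run a different interpreter than your terminal; install
# twilio for whichever one this prints.
print(sys.executable)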

Python 3: running a script from one user works but from the other doesn't?

When I run this script as the jenkins user (Linux Mint) I get this error, but when running it as my own user it works. The jenkins user was created by the jenkins service. I have virtualenv installed.
import unittest
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

DRIVER = None

def getOrCreateWebdriver():
    global DRIVER
    DRIVER = DRIVER or webdriver.Firefox()
    return DRIVER

class LoginTest(unittest.TestCase):
    def setUp(self):
        self.browser = getOrCreateWebdriver()

    def test_Loggin(self):
        browser = self.browser

    def tearDown(self):
        self.browser.close()

if __name__ == '__main__':
    unittest.main(verbosity=2)
When I run this script as the jenkins user, I get this error:
test_Loggin (__main__.LoginTest) ... ERROR
/usr/lib/python3.4/unittest/case.py:602: ResourceWarning: unclosed file <_io.BufferedWriter name='/dev/null'>
outcome.errors.clear()
======================================================================
ERROR: test_Loggin (__main__.LoginTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "Test.py", line 16, in setUp
self.browser = getOrCreateWebdriver()
File "Test.py", line 10, in getOrCreateWebdriver
DRIVER = DRIVER or webdriver.Firefox()
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/webdriver.py", line 64, in __init__
self.binary, timeout),
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
self.binary.launch_browser(self.profile)
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 70, in launch_browser
self._wait_until_connectable()
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 100, in _wait_until_connectable
raise WebDriverException("The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.
When you're logged in as yourself, run echo $DISPLAY and note the display info it prints. Then, when you log in as the jenkins service user, run xhost + followed by DISPLAY=[display-info]; export DISPLAY (where display-info is what echo $DISPLAY printed; don't include the square brackets in the command).
Hopefully this should work. I don't have a similar environment to test with; I'm just mentioning what I recollect having done quite some time back.
