I have installed Airflow in Docker. I want to know how to change the Airflow home path from the container to my local system.
For example:
airflow home (now): /usr/local/airflow
want to change to: mysystempath
docker run -d -p 8080:8080 -v /path/to/dags/on/your/local/machine/:/usr/local/airflow/dags puckel/docker-airflow webserver
I tried the above but it is not working. Error message:
docker: Error response from daemon: driver failed programming external connectivity on endpoint gallant_pasteur (6f5e5a820b81847758c4e3e23a826b3bc5d4d7d67743cf55d6b01893cf427a1e): Bind for 0.0.0.0:8080 failed: port is already allocated.
It looks like you want to mount a local directory as the DAGs folder for Airflow running inside a local Docker container. The "port is already allocated" error is a separate issue: something on your host (most likely an earlier container) is already bound to port 8080, so either stop that container or publish to a different host port.
Here's one example:
Suppose you have a local directory ~/Downloads/airflow_dags containing a DAG named tutorial.py, copied from here.
Then run an Airflow container from the image puckel/docker-airflow:latest (note that -p 8080 without a host part publishes container port 8080 on a random free host port; use docker port to find it):
docker run -d -p 8080 -v ~/Downloads/airflow_dags:/usr/local/airflow/dags --name airflow-webserver puckel/docker-airflow:latest webserver
Then you can run the following commands to work with the DAG tutorial.py:
docker exec -it airflow-webserver airflow initdb
docker exec -it airflow-webserver airflow list_dags
docker exec -it airflow-webserver airflow list_tasks tutorial
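If you find yourself retyping these flags, the same setup can be written as a compose file. This is a hypothetical sketch using the image and paths above; the service name is an assumption:

```yaml
version: "3"
services:
  webserver:
    image: puckel/docker-airflow:latest
    command: webserver
    ports:
      - "8080"            # no host part: a random free host port, avoiding "port is already allocated"
    volumes:
      - ~/Downloads/airflow_dags:/usr/local/airflow/dags
```

Start it with docker-compose up -d and find the assigned host port with docker-compose port webserver 8080.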
I'm trying to run a Zipkin server with Docker, but how can I store the log files? Or how can I store the volume? There is no clear documentation for that.
Docker command:
docker run -d -p 9411:9411 openzipkin/zipkin
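By default Zipkin keeps traces in memory and writes its logs to stdout, so docker logs shows the log output and nothing survives a container restart; there is no data directory to mount. To persist traces, Zipkin is pointed at an external store via the STORAGE_TYPE environment variable. A hypothetical compose sketch with Elasticsearch (the Elasticsearch version and local data path are assumptions):

```yaml
version: "3"
services:
  storage:
    image: elasticsearch:7.17.0        # assumed version
    environment:
      - discovery.type=single-node
    volumes:
      - ./zipkin-data:/usr/share/elasticsearch/data   # traces persisted on the host here
  zipkin:
    image: openzipkin/zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=http://storage:9200
    ports:
      - "9411:9411"
```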
I have a server running TrueNAS SCALE. I tried to follow this tutorial.
The concept goes along these lines:
Server: TrueNAS SCALE, a VM (Ubuntu Server) in TrueNAS, Docker inside the VM.
The goal is to create Docker containers in the VM but use the NFS shared folders to save the data from the containers. Although the process is intimidating, with a number of nonsense steps here and there, I managed to deploy the NFS share and the VM, and to make the VM "talk" to the host machine (TrueNAS) following the guide above.
TrueNAS:
ip: 192.168.2.144
user: docker (1000)
group: docker (1000)
pool: main
dataset: docker-vm
shared path: ":/mnt/main/docker-vm/docker"
VM (Ubuntu):
ip: 192.168.2.143
uid: docker (1000)
gid: docker (1000)
The process to mount the shared path in the VM is:
$ sudo mkdir /nfs
$ sudo mount 192.168.2.144:/mnt/main/docker-vm/docker /nfs
$ sudo touch /nfs/hello_world // Output: Permission denied.
To solve this you have to go to the TrueNAS UI and add the user "docker" as the Maproot User, the group "docker" as the Maproot Group, and the host IP (in this case 192.168.2.143) to the UNIX (NFS) share.
After that I am able to:
$ touch /nfs/hello_world
$ ls -l /nfs // Output: -rw-rw-rw- 1 docker docker 0 Oct 9 12:36 hello_world
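As a side note, a mount issued by hand disappears on reboot; the share from the question could be made persistent with an /etc/fstab entry along these lines (paths taken from the question):

```
192.168.2.144:/mnt/main/docker-vm/docker  /nfs  nfs  defaults  0  0
```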
The next step is to create the Portainer container and store its files on the NFS share:
$ mkdir /nfs/portainer_data
$ docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /nfs/portainer_data:/data \
    portainer/portainer-ce:latest
The docker run command returns this error:
docker: Error response from daemon: error while creating mount source path '/nfs/portainer_data': mkdir /nfs: read-only file system.
I am frustrated because I am able to create files and folders in /nfs as a user, but Docker can't? I hope I covered the problem well enough that someone can help me.
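One common cause of "mkdir /nfs: read-only file system" is a Docker daemon installed as a snap, whose sandbox cannot see arbitrary host paths even though your shell can. A workaround (assuming the daemon itself can reach the NFS server) is to let Docker mount the share directly via the local driver's NFS options instead of bind-mounting /nfs. A hypothetical compose sketch, reusing the addresses from the question and assuming a portainer_data directory exists under the exported dataset:

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer-ce:latest
    restart: always
    ports:
      - "8000:8000"
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.2.144,rw
      device: ":/mnt/main/docker-vm/docker/portainer_data"
```

With this, the Docker daemon performs the NFS mount itself when the container starts, bypassing the /nfs bind mount entirely.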
As per the gramex-install-doc, Gramex can be started by running:
# Run Gramex on port 9988
docker run --name gramex-instance -p 9988:9988 gramener/gramex
Is it possible to start multiple Gramex instances by changing the --name parameter and using different port numbers with the -p parameter?
When I tried to start Gramex with:
docker run --name gramex-test-port -p 9998:9998 gramener/gramex
the console still printed:
INFO 13-Apr 18:21:41 __init__ PORT Listening on port 9988
Can multiple Gramex instances be started using the Gramex Docker install?
Adding an entry like the one below to your application's gramex.yaml:
app:
  listen:
    port: 9998
and then starting the Docker container in the application directory with the params below starts Gramex on the required port:
docker run --name gramex-agri-prod -i -t -p 9998:9998 -v "$(pwd)":"$(pwd)" -w "$(pwd)" gramener/gramex
Note: include the -d param to run it as a daemon process.
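To run several instances side by side, the pattern above generalizes: each instance needs its own container name, its own host port, and its own gramex.yaml setting the listen port inside the container. A hypothetical compose sketch with two apps (directory names are assumptions; each directory is assumed to contain a gramex.yaml like the one above):

```yaml
version: "3"
services:
  gramex-prod:
    image: gramener/gramex
    working_dir: /app/prod
    volumes:
      - ./prod:/app/prod      # gramex.yaml here sets app.listen.port: 9988
    ports:
      - "9988:9988"
  gramex-test:
    image: gramener/gramex
    working_dir: /app/test
    volumes:
      - ./test:/app/test      # gramex.yaml here sets app.listen.port: 9998
    ports:
      - "9998:9998"
```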
I pulled the latest version of the Airflow image from Docker Hub:
apache/airflow.
And I tried to run a container based on this image:
docker run -d -p 127.0.0.1:5000:5000 apache/airflow webserver
The container is running and the port status is fine, but I still can't access the Airflow webserver from my browser:
This site can’t be reached. 127.0.0.1 refused to connect.
After a few minutes, the container stops automatically.
Could anyone advise?
I don't have experience with Airflow, but this is how you get this image to run:
First of all, you have to override the entrypoint, because the existing one doesn't help much. From what I understand, this image needs two steps in order to run: initializing the database and starting the webserver. For this reason the existing entrypoint is not useful.
Run:
docker run -p 5000:8080 --entrypoint /bin/bash -ti apache/airflow
This will open a shell inside a running container. Also note that host port 5000 is mapped to port 8080 inside the container.
Then inside the container run:
airflow db init
airflow webserver -p 8080
Note that in older versions of airflow, the command to initialize the database is airflow initdb, instead of airflow db init.
Open a browser and navigate to http://localhost:5000
When you close the container your work is gone, though ;)
Another thing you can do is put the two Airflow commands in a bash script, map that script into the container, and use it as the entrypoint. Something like this:
docker run -p 5000:8080 -v $(pwd)/startup.sh:/opt/airflow/startup.sh --entrypoint /opt/airflow/startup.sh -d --name airflow apache/airflow
You should make startup.sh executable before running this.
Let me know if you run into issues.
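The answer above references a startup.sh without showing its contents; a minimal sketch of what it might contain, assuming Airflow 2.x command names (older versions use airflow initdb, as noted earlier):

```shell
# Create the startup.sh that the docker run above mounts as the entrypoint
cat > startup.sh <<'EOF'
#!/usr/bin/env bash
# Initialize the Airflow metadata database, then start the webserver
# in the foreground so the container keeps running.
airflow db init
airflow webserver -p 8080
EOF
chmod +x startup.sh
```

The webserver runs in the foreground on purpose: if the last command exits, the container stops.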
I am quite new to the world of Docker and I am trying to set this up:
running a SolarWinds WHD container and trying to mount a local volume on the host using this command:
docker run -d -p 8081:8081 --name=whdinstance -v pwd:/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
This starts the container and the volume is mounted, but as soon as I go to localhost:8081 to log in to the Web Help Desk portal, it asks me to select the database and then says "Connection refused".
Can someone please help? Might this be an issue with the way I am mounting the volume?
Here are examples of how to use volumes.
To use a directory as a volume:
docker run -itd -p 80:80 --name wordpress -v /path/on/your/host:/path/in/your/container wordpress
After -v you put the path of the shared directory on your host, then a colon, then the path inside your container. Once you have done this you can choose your image!
So for you it should be something like:
docker run -itd -p 8081:8081 --name=whdinstance -v "$(pwd)":/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
(Note that a bare pwd in the -v flag is taken literally as a path; use "$(pwd)" so the shell expands it to the current directory.)