I installed the official Airflow Docker image, but I need to find the airflow.cfg file. Where did Docker put this file?
I am using a Mac.
Use docker-compose to get the entire setup; you need more than just Airflow itself. You will also need the web UI, Flower, etc. Get the Docker Compose setup from:
https://github.com/puckel/docker-airflow
Clone the repo and look at the volumes in the compose file; they map to the config directory:
https://github.com/puckel/docker-airflow/tree/master/config
Simply put your files (including airflow.cfg) into config and run the compose file. This will save you so much time.
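For reference, the relevant volume mappings in that repo's compose file look roughly like this (treat the paths as an illustration, since they depend on the version you clone):
volumes:
  # host paths on the left, container paths on the right
  - ./dags:/usr/local/airflow/dags
  - ./config/airflow.cfg:/usr/local/airflow/airflow.cfg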
I have pushed a Docker image to GitHub Packages and now I would like to pull it and use it.
To run the image locally, I used to go to the related folder and run it with the command docker-compose up.
However, by pulling from GitHub Packages I just get the Docker image, without any folder, and I don't know how I can run it.
Inspecting the image shows that it has all the files from the original folder, but when I try to run the docker-compose up ghcr.io/giuliomat/bdt-project command, I get an error saying that there is no docker-compose.yml in the directory. If I just use the command docker run ghcr.io/giuliomat/bdt-project, it runs only one of the two services specified in the docker-compose.yml file. How can I run the Docker Compose setup correctly? Thanks in advance!
Update: I'll try to explain myself better. The image contains a Dockerfile (which I've now added to the question) that is used to build the web service. I developed the image locally and have no problem running it with docker-compose up, but now I want to see what has to be done to run it when a user pulls it from my GitHub Packages, and this is my problem. The pulled image should have all the elements needed to run, but I don't know what command to use to tell Docker to run both services specified in the docker-compose.yml file, since a user who pulls from GitHub Packages only gets the image and has no folder in which to run docker-compose up.
Dockerfile:
docker-compose.yml:
content of the pulled docker image:
Update:
Docker image registries do not store yml files, so either you provide a README.md for the user in the image registry (with the yml copied into it verbatim) and/or you also provide a link to the version control repository where the rest of the files reside, so the user can clone it and use docker-compose up.
docker-compose up [options] [--scale SERVICE=NUM...] [SERVICE...] means: find the listed services (if specified, otherwise all of them) in the docker-compose.yml in the current working directory and run them.
So if you move out of the folder containing docker-compose.yml, Compose won't pick up the compose file and therefore won't work.
Also, to use the published image you need to specify the image property of the service instead of build, because build works with a local Dockerfile and attempts to build an image instead of pulling it from the GitHub Docker image registry:
web:
  image: "ghcr.io/giuliomat/bdt-project:latest"
It'd be the same way you already have it for the redis service.
Also make sure you can pull the image locally first (otherwise docker login would be necessary before the compose commands):
docker pull ghcr.io/giuliomat/bdt-project
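Putting it together, the docker-compose.yml you publish for users (for example, pasted into the README) might look something like the sketch below; the port mapping and the redis image tag are assumptions, since only the two service names appear in the question:
# Hypothetical compose file a user could copy and run with docker-compose up
version: "3.8"
services:
  web:
    image: "ghcr.io/giuliomat/bdt-project:latest"
    ports:
      - "5000:5000"   # illustrative port, adjust to whatever the web service exposes
    depends_on:
      - redis
  redis:
    image: "redis:alpine"   # assumed tag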
I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows (not on a Unix subsystem - I could not install Docker on Ubuntu 20.04). "astro dev start" breaks with an error, but in Docker Desktop I can see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one can only use files in the cloud.)
In general, you can declare a volume at container run time in Docker using the -v switch with your docker run command to mount a local folder on your host to a mount point in your container, and you can then access that path from inside the container.
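For example (the paths and image name here are placeholders, not taken from your setup):
# Mount a local host folder into the container at /usr/local/airflow/include
docker run -v /Users/me/airflow-data:/usr/local/airflow/include my-airflow-image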
If you go on to use docker-compose up to orchestrate your containers, you instead specify the volumes in the docker-compose.yml file, which configures the volumes for the containers that run.
In your case, the Astronomer docs here suggest it is possible to add a custom directive in the Astronomer docker-compose.override.yml file to mount volumes into the Airflow containers created as part of your astro commands, which should then be visible from your DAGs.
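A minimal sketch of such a docker-compose.override.yml, assuming a scheduler service name and the usual /usr/local/airflow home directory (both assumptions - check the docs linked above for the exact layout your Astronomer version uses):
# docker-compose.override.yml - sketch only
version: "3.1"
services:
  scheduler:
    volumes:
      - /path/on/your/host:/usr/local/airflow/mounted_data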
I'm new to Google Cloud and Docker and I can't for the life of me figure out how to copy directories from the Docker container (pushed to the Container Registry) to the Google Compute Engine instance. I think I need to mount the volume but I don't really know how. In the docker container the main directory is /app which has my files. Basically I want to do this to see the docker container's files in Google Cloud.
I assumed that if I did docker pull [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG] inside the Cloud Shell, the files would show up somewhere in the Cloud Shell, i.e. in /var/lib/docker, but when I cd to /var/lib/docker and type ls I get
ls: cannot open directory '.': Permission denied
Just to add, I've tried following the "Connecting to Cloud Storage buckets" tutorial at https://cloud.google.com/compute/docs/disks/gcs-buckets
But I realised that this is for single files. Is it possible to copy over the whole root directory of the Docker image using gsutil? Do I need to use something else instead, like persistent disks?
You need to have Docker installed in order to run your images and, of course, to be able to copy anything from inside an image to your host filesystem.
Use docker cp CONTAINER:SRC_PATH DEST_PATH to copy files.
Have a look at the official Docker Documentation on how to use this command.
A similar topic was also discussed here on Stack Overflow and has a very good answer.
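As a concrete sketch (the container name is a placeholder; note that docker cp also works on a container that has only been created, not started):
# Create a container from the pulled image without starting it, copy /app out, then clean up
docker create --name tmp-copy [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
docker cp tmp-copy:/app ./app
docker rm tmp-copy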
I want to make some changes to the config file of the VerneMQ image running on docker. Is there any way to reach the config file so that changes could be made?
If you exec into the container with docker exec -it <containerID> bash, you'll see that the vernemq.conf file is located under /etc/vernemq/. It's just a matter of replacing this default conf with your own config file. Keep your vernemq.conf in the same directory as your Dockerfile and then add the
following line to the Dockerfile:
COPY vernemq.conf /etc/vernemq/vernemq.conf
The above line copies your config file into the container at the given location and replaces the existing one. Finally, build the image. For more advanced stuff, do check out this!
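Building and running it might then look like this (the image and container names are just placeholders):
# Build an image with the custom config baked in, then run it
docker build -t my-vernemq .
docker run -d --name vernemq my-vernemq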
Another approach could be to simply set your options as environment variables for the docker image.
From the official docker hub page:
VerneMQ Configuration
All configuration parameters that are available in vernemq.conf can be defined using the DOCKER_VERNEMQ prefix followed by the configuration parameter name. E.g.: allow_anonymous=on is -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on", and allow_register_during_netsplit=on is -e "DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on". All available configuration parameters can be found at https://vernemq.com/docs/configuration/.
This is especially useful for compose-like yml-based deployments.
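For instance, the same setting from the quote above could be passed in a compose file like this (the service block is only a sketch; the image name is taken from the other answer below):
services:
  vernemq:
    image: erlio/docker-vernemq
    environment:
      # mirrors -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" from the docker run form
      - DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on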
You can create a new Dockerfile to modify the image contents:
FROM erlio/docker-vernemq
RUN <your modification command>
Use the new Dockerfile to build a new image and run a container from it.
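A hypothetical concrete version, assuming the config lives at /etc/vernemq/vernemq.conf (as in the earlier answer) and that sed is available in the base image - both assumptions:
FROM erlio/docker-vernemq
# Example modification: enable anonymous access in the default config (path and setting assumed)
RUN sed -i 's/allow_anonymous = off/allow_anonymous = on/' /etc/vernemq/vernemq.conf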
So I am using gitlab-ci to deploy my websites in Docker containers; because the gitlab-ci Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my Docker container: ./:/var/www/html/ (the contents of my git repo, i.e. the files I want to replace on build) and a mount that sits "inside" of that mount, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
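In the compose file that corresponds to roughly the following (a sketch reconstructed from the description above; the service name is made up):
services:
  web:
    # image/build details omitted
    volumes:
      - ./:/var/www/html/
      - /srv/data:/var/www/html/software/permdata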
When the gitlab-ci runner starts, it tries to remove all files while the container is running, but because of this mount-inside-a-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not possible)
add the git data to my docker container and only mount my permdata (seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with permissions on the files.
Maybe someone has gone through the same problem and could give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy in the git files.
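A minimal sketch of what that could look like, assuming a PHP/Apache base image and that you drop the ./:/var/www/html/ bind mount in favour of baking the files in (both assumptions based on the paths in your question):
FROM php:7-apache
# Bake the repo contents into the image instead of bind-mounting ./ at run time
COPY . /var/www/html/
In docker-compose.yml you would then keep only the /srv/data:/var/www/html/software/permdata volume.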