Using the docker-compose.yml from here and running docker-compose up I am able to run Airflow without problems (Ubuntu). But this only works if the parent folder is named /airflow. If it is named something else, the airflow-init service will fail:
ERROR: for airflow-init Container "f28089f55f79" is unhealthy.
ERROR: Encountered errors while bringing up the project.
I want to be able to run Airflow using docker-compose up when my project lives in a different folder, e.g.: myproject/docker-compose.yaml
What should I do to make this work?
Checking the Apache Airflow page and the docs, $AIRFLOW_HOME defaults to ~/airflow.
Just change this variable.
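As a hedged sketch (assuming the docker-compose.yaml you are using substitutes ${AIRFLOW_HOME} in its paths), you could set the variable in a .env file next to the compose file, or export it before running docker-compose up; the path below is just an example:
# .env placed next to docker-compose.yaml
AIRFLOW_HOME=/home/youruser/myproject/airflow
# or, equivalently, on the command line
export AIRFLOW_HOME=/home/youruser/myproject/airflow
docker-compose up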
I am trying to run the Puppet pupperware suite (all 3 servers: puppet server, puppet DB, DB server).
I am using the official YAML file provided by puppetlabs for Docker Compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that Yaml file in docker compose however, I am running into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
And as a result, the build fails (only the puppet server comes up, but not the other ones).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The Docker images that the Pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
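As a hedged sketch, pinning those tags means editing the image: lines in the compose file roughly like this (the service names follow the upstream pupperware file and may differ in your copy):
services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1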
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You can put a Dockerfile in the root of your project. It contains the instructions Docker uses to build an image, including commands that are run inside the container's Linux environment. docker-compose can then build and run that image if you point a service's build: key at the Dockerfile instead of (or in addition to) an image: key.
So to solve the permission problem you can add a RUN instruction, which executes a Linux command during the build (for example chown or chmod) to fix the ownership or permissions of the folder.
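A hedged sketch of that approach, assuming the PostgreSQL service is the one that needs fixing (the base image tag and service name are assumptions; adjust them to your compose file):
Dockerfile:
# Build on top of the image the compose file already uses
FROM postgres:9.6
# Make the init-scripts directory owned by (and readable to) the postgres user
RUN chown -R postgres:postgres /docker-entrypoint-initdb.d
docker-compose.yml (only the changed service shown):
services:
  postgres:
    build:
      context: .
      dockerfile: Dockerfile
Note that if /docker-entrypoint-initdb.d is bind-mounted from the host, the mount hides whatever the image sets, and you would instead have to fix ownership of the host directory; rootless Docker remaps UIDs, which is a common cause of this kind of "Permission denied" error.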
Also look at this answer
I ran into this problem when opening the project in a container.
Setting up container for folder or workspace: c:\Work\playground\moodle\lms_administrace
Run: docker-compose -f c:\Work\playground\moodle\lms_administrace\docker\docker-compose-dev.yml config --services
app
redis
db
phpmyadmin
Run: docker-compose --project-name docker -f c:\Work\playground\moodle\lms_administrace\docker\docker-compose-dev.yml up -d --build
Creating volume "docker_mysql_data_volume" with default driver
Pulling app (nodejs:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
The problem is that I cannot press y or N. I know why I'm having this problem: I have used that docker-compose file before, and the containers and volumes were created with the directory prefix (docker).
There's a way to change the compose project name through an .env file, but it does not work (I put the file in the root directory, in the directory where the compose file is, and in the .devcontainer folder). There is also the -p parameter, but the MS GitHub page does not provide any information about it.
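For reference, the two mechanisms mentioned above are normally used like this (a sketch; COMPOSE_PROJECT_NAME is the standard variable, and the project name below is just an example):
# .env next to the compose file
COMPOSE_PROJECT_NAME=lms_administrace
# or the equivalent -p flag on the command line
docker-compose -p lms_administrace -f docker\docker-compose-dev.yml up -d --build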
I can probably fix it by renaming everything, but this may be a serious issue since you can't continue with the process ...
Has anybody experienced a similar problem and fixed it?
Thanks,
Karel
You probably mistyped the service's Docker image name in docker-compose.yml.
You are trying to pull the nodejs image instead of node.
Also, the same error could happen with postgresql vs. postgres.
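A hedged sketch of the fix in docker-compose-dev.yml (the tag is an example; your file may pin a specific version):
services:
  app:
    image: node:latest   # was nodejs, which does not exist as an official image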
I had the same problem; mine was caused by using the wrong image name.
I have a Docker container with a Python 3 environment and various libraries installed.
I'm trying to develop a simple Python program against this environment.
So what I have is a volume with my source code outside the container which is ADDed and set as WORKDIR in the Dockerfile.
I'm then shelling into the container and trying to run the program on the command-line.
When I hit an error, I want to simply change the source in my editor which is outside the container, and run again.
However, when I do this, the executing code in the container doesn't seem to be taking any notice of the changes I made.
If I do
docker-compose up --build
and rebuild the container then it does.
Obviously this is very slow.
Surely it should be possible for the container to see changes to the code I'm working on without being rebuilt? If so, how do I make this happen?
Using ADD bakes files into a container image, so as you've noticed, updating files in a running application requires an entire container rebuild and restart. To get around this, you can mount a directory on your host machine over the path you've copied into your container using ADD.
To do this with Docker, you can use -v or --volume. Using Docker Compose, you can list the directory to be mounted under volumes:. For example, if you had the following in your Dockerfile:
# Copy app code into the container working directory
ADD /my/app/code /usr/app/src
You can then mount your live code over the baked-in files at container start time (note that directory paths must be absolute - you can use $PWD for this):
$ docker run -v /my/live/app/code:/usr/app/src python:latest
$ docker run -v "$PWD"/app/code:/usr/app/src python:latest
The docker-compose.yml equivalent is as follows:
my-service:
  image: python:latest
  volumes:
    - /my/live/app/code:/usr/app/src
    - ./relative/paths:/work/too
There's more about bind mounts in the documentation.
I am currently trying to deploy a basic task queue and frontend using celery, rabbitmq and flower on Kubernetes (and minikube). I am following the example here:
https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq
I can get everything to work following the instructions; however, when I run docker build on the Dockerfile in ./celery-app-add, push the image to my own repository, and replace endocode/celery-app-add with <mine>/celery-app-add, I can't get the example to run anymore. I am assuming that the Dockerfile in source control is wrong, because if I pull the endocode/celery-app-add image and run bash in it, it starts as the root user (as opposed to a non-root user with the image built from the <mine>/celery-app-add Dockerfile).
After booting up all of the containers and services, I can see the following in the logs:
2016-08-18T21:05:44.846591547Z AttributeError: 'ChannelPromise' object has no attribute '__value__'
The celery logs show:
2016-08-19T01:38:49.933659218Z [2016-08-19 01:38:49,933: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbit:5672//: [Errno -2] Name or service not known.
If I echo RABBITMQ_SERVICE_SERVICE_HOST within the container, it appears as the same host as indicated in the rabbitmq-service after running kubectl get services.
I am not really sure where to go from here. Any suggestions are appreciated. Also, I added USER root (won't run this in production, don't worry) to my Dockerfile and still ran into the same issues above. docker history endocode/celery-app-add hasn't been too helpful either.
Turns out the problem is based around this celery issue. Celery prefers to use CELERY_BROKER_URL over anything that can be set in the app configuration. To fix this, I unset CELERY_BROKER_URL in the Dockerfile and it picked up my configuration correctly.
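For illustration, one hedged way to clear that variable before the worker starts (the exact Dockerfile change isn't shown above, and the app name and celery options below are placeholders):
# Dockerfile: unset the variable in the shell that launches the worker,
# so celery falls back to the broker configured in the app itself
CMD ["sh", "-c", "unset CELERY_BROKER_URL && exec celery worker --app=my_app --loglevel=info"]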
I'm trying to build a Dockerfile for a webapp that uses a file-based database. I would like to be able to mount the file from the host*
The file is in the root of the complete software install, so it's not really ideal to mount that complete dir.
Another problem is that before the first use, the database-file isn't created yet. A first time user won't have a database, but another user might. I can't 'mount' anything during a build** I believe.
It could probably work like this:
First/new database start:
Start the container (without mount).
The webapp creates a database.
Stop the container.
Subsequent starts:
Start the container using a -v to mount the file
It would be better if that extra start/stop isn't needed for a user. Even if it is, I'm still looking for a way to do this in a user-friendly way, possibly having 2 'methods' of starting it (maybe I can define a first-boot thing in docker-compose as well as a 'normal' method?).
How can I do this in a simple way, so that it's clear for any first-time users?
* The reason is that you can copy your Dockerfile and the database file as a backup, and be up and running with just those 2 elements.
** How to mount host volumes into docker containers in Dockerfile during build
One approach that may work is:
Start the database during the image build (in the Dockerfile) in such a way that it has time to create the default file before exiting; see the sketch after these steps.
Declare a VOLUME in the Dockerfile for the directory containing the file, after the above instruction. This will cause the directory's contents to be copied into the volume when a container is started, assuming you don't explicitly provide a host path.
Use data-containers rather than volumes. So the normal usage would be:
docker run --name data_con my_db echo "my_db data container"
docker run -d --volumes-from data_con my_db
...
The first container should exit immediately but set up the volume that is used in the second container.
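A hedged Dockerfile sketch of the first approach (the base image, init command, and data path are placeholders, not taken from the question):
FROM my_webapp_base_image
# Run the app once during the build so it creates its default database file
# under /app/data (hypothetical command and path)
RUN /app/bin/webapp --init-db
# Declare the data directory as a volume; its baked-in contents are copied
# into a fresh anonymous volume at container start, unless a host path is
# explicitly mounted over it
VOLUME /app/data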
I was trying to achieve something similar and managed to do it by mounting a folder, instead of the file, and creating a symlink in the Dockerfile, initially pointing to a non-existing file:
docker-compose.yml
version: '3.0'
services:
  bash:
    build: .
    volumes:
      - ./data:/data
    command: ['bash']
Dockerfile
FROM bash:latest
RUN ln -s /data/.bash_history /root/.bash_history
Then you can run the container with:
docker-compose run --rm bash
With this setup, you can commit an empty "data" folder to the repository, for example (and exclude its content with .gitignore). On the first run, /root/.bash_history inside the container will be a "broken" symlink, pointing to a file that does not exist. When you exit the shell, bash will write the history to /root/.bash_history, which will end up in /data/.bash_history.
This is probably not the correct approach.
If you have multiple containers that are trying to share some information through the file-system, you should probably let them share some directory.
That way, the flow is simple and very hard to get wrong.
You simply mount the same directory, say /data (from the host's perspective) into all the containers that are trying to use it.
When an application starts and it can't find anything inside that directory, it can gracefully stop and exit with a code that says: "Cannot start, DB not initialized yet".
You can then configure some mechanism with a growing timeout to try and restart that container until you're successful.
On the other hand, the app that creates the DB can start and create it inside the directory or find an existing file to use.
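A minimal compose sketch of this pattern (service and image names are placeholders; the shared host directory is ./data):
version: '3.0'
services:
  db-init:
    image: my_webapp:latest       # hypothetical image that creates the DB file on first start
    volumes:
      - ./data:/data
  webapp:
    image: my_webapp:latest
    volumes:
      - ./data:/data
    restart: on-failure           # retried until the DB file exists and the app can start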