My HAProxy container exits every time I try to run it.
I have tried running it without -d to see why it exits:
$ sudo docker run --name=hapr -p 80:80 -v /haproxy/:/usr/local/etc/haproxy/ haproxy
I get this output:
HA-Proxy version 2.1.4 2020/04/02 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.4.html
Usage : haproxy [-f <cfgfile|cfgdir>]* [ -vdVD ] [ -n <maxconn> ] [ -N <maxpconn> ]
        [ -p <pidfile> ] [ -m <max megs> ] [ -C <dir> ] [-- <cfgfile>*]
        -v displays version ; -vv shows known build options.
        -d enters debug mode ; -db only disables background mode.
        -dM[<byte>] poisons memory with <byte> (defaults to 0x50)
        -V enters verbose mode (disables quiet mode)
        -D goes daemon ; -C changes to <dir> before loading files.
        -W master-worker mode.
        -q quiet mode : don't display messages
        -c check mode : only check config files and exit
        -n sets the maximum total # of connections (uses ulimit -n)
        -m limits the usable amount of memory (in MB)
        -N sets the default, per-proxy maximum # of connections (0)
        -L set local peer name (default to hostname)
        -p writes pids of all children to this file
        -de disables epoll() usage even when available
        -dp disables poll() usage even when available
        -dS disables splice usage (broken on old kernels)
        -dG disables getaddrinfo() usage
        -dR disables SO_REUSEPORT usage
        -dr ignores server address resolution failures
        -dV disables SSL verify on servers side
        -sf/-st [pid ]* finishes/terminates old pids.
        -x <unix_socket> get listening sockets from a unix socket
        -S <bind>[,<bind options>...] new master CLI
If I list the containers, I get the following status:
$ docker container ls -a
Exited (1) 3 minutes ago
I have fixed my problem. In case someone runs into the same problem:
You should use the full path in your command.
Instead of
$ sudo docker run --name=hapr -p 80:80 -v /haproxy/:/usr/local/etc/haproxy/ haproxy
use
$ sudo docker run --name=hapr -p 80:80 -v /home/ubuntu/haproxy/:/usr/local/etc/haproxy/ haproxy
Also, you should already have haproxy.cfg on your host.
If you check the official HAProxy page on Docker Hub, you can see that you need haproxy.cfg in the mounted path. If it is missing, HAProxy cannot start.
Note that your host's /path/to/etc/haproxy folder should be populated with a file named haproxy.cfg. If this configuration file refers to any other files within that folder then you should ensure that they also exist (e.g. template files such as 400.http, 404.http, and so forth).
Here is the official HAProxy documentation about haproxy.cfg.
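For reference, here is a minimal haproxy.cfg sketch that is enough for the container to start (my own example, not from the original post; the backend address is a placeholder for your real service, and the frontend bind matches the -p 80:80 mapping above):
global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend servers

backend servers
    # Placeholder backend; point this at your real service.
    server app1 127.0.0.1:8000 maxconn 32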
To continue, you need to stop and delete the current container:
$ docker stop CONTAINER
$ docker rm CONTAINER
And create it again.
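Putting it together, using the container name and corrected path from the commands above:
$ docker stop hapr
$ docker rm hapr
$ sudo docker run --name=hapr -p 80:80 -v /home/ubuntu/haproxy/:/usr/local/etc/haproxy/ haproxy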
I have tried to install Docker on Google Colab in the following ways:
(1)https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
(2)https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
(3)https://colab.research.google.com/drive/10OinT5ZNGtdLLQ9K399jlKgNgidxUbGP
I started the Docker service and checked its status, but it showed 'Docker is not running'. Maybe Docker cannot work on Colab.
I feel confused and want to know the reason.
Thanks
It's possible to run Docker in Colab, but with limited functionality.
There are two methods of running Docker service, a regular one (more restrictive), and in rootless mode (dockerd inside RootlessKit).
dockerd
Install it with:
!apt-get -qq install docker.io
Use the following shell script:
%%shell
set -x
# Start the Docker daemon in the background (no bridge network, no iptables rules):
dockerd -b none --iptables=0 -l warn &
# Wait up to ~10 seconds for the Docker socket to appear:
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker info
docker network ls
docker pull hello-world
docker pull ubuntu
# docker build -t myimage .
docker images
# Stop the background dockerd:
kill $(jobs -p)
As shown above, before each docker command you have to start the Docker service (dockerd) in the background and then kill it. Unfortunately you have to start dockerd in every cell where you want to run docker commands; a wrapper sketch follows after the notes below.
Notes on dockerd arguments:
-b none/--bridge none - Disables a network bridge to avoid errors.
--iptables=0 - Disables addition of iptables rules to avoid errors.
-D - Add to enable debug mode.
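To avoid repeating this boilerplate, one option (my own sketch, not from the original answer; the script name and location are arbitrary) is to write the start/run/stop cycle into a small wrapper script once:
%%writefile /usr/local/bin/dockerrun
#!/usr/bin/env bash
# Start dockerd, run the given docker subcommand, then stop dockerd again.
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker "$@"
kill $(jobs -p)
After a one-time !chmod +x /usr/local/bin/dockerrun, a cell can then simply run, for example, !dockerrun run hello-world.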
However, in this mode most containers will fail with errors related to the read-only file system.
Additional notes:
To disable cpuset support, run: !umount -vl /sys/fs/cgroup/cpuset.
Related issue: https://github.com/docker/for-linux/issues/1124.
Here are some notebooks demonstrating the above:
https://colab.research.google.com/drive/1Lmbkc7v7XjSWK64E3NY1cw7iJ0sF1brl
https://colab.research.google.com/drive/1RVS5EngPybRZ45PQRmz56PPdz9nWStIb (without cpuset support)
Rootless dockerd
Rootless mode allows running the Docker daemon and containers as a non-root user.
To install, use the following code:
%%shell
useradd -md /opt/docker docker
apt-get -qq install iproute2 uidmap
sudo -Hu docker SKIP_IPTABLES=1 bash < <(curl -fsSL https://get.docker.com/rootless)
To run the dockerd service, there are two methods: using the dockerd-rootless.sh script, or running rootlesskit directly.
Here is the script which uses dockerd-rootless.sh to run a hello-world container:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
# Start the rootless Docker daemon in the background:
/opt/docker/bin/dockerd-rootless.sh --experimental --iptables=false --storage-driver vfs &
# Wait up to ~10 seconds for the Docker socket to appear:
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker run "$@"
jobs -p
# Stop the background daemon:
kill $(jobs -p)
To run the above script:
!sudo -Hu docker bash -x docker-run.sh hello-world
The above may generate the following warnings:
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
To remount some folders with write access, you can try:
!mount -vt sysfs sysfs /sys -o rw,remount
!mount -vt tmpfs tmpfs /sys/fs/cgroup -o rw,remount
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1
The above error is related to the dockerd-rootless.sh script, which adds extra network parameters to rootlesskit, such as:
--net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin
This has been reported at https://github.com/rootless-containers/rootlesskit/issues/181 (however, it was ignored).
To work around the above problem, we can pass our own arguments to rootlesskit, using the following file instead:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
# Run dockerd under rootlesskit directly, with our own network arguments:
rootlesskit --debug --disable-host-loopback --copy-up=/etc --copy-up=/run /opt/docker/bin/dockerd -b none --experimental --iptables=false --storage-driver vfs &
# Wait up to ~10 seconds for the Docker socket to appear:
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker "$@"
jobs -p
# Stop the background daemon:
kill $(jobs -p)
Then run as:
!sudo -Hu docker bash docker-run.sh run --cap-add SYS_ADMIN hello-world
Depending on your image, this may generate the following error:
process_linux.go:449: container init caused "join session keyring: create session key: operation not permitted": unknown.
This could be solved by !sysctl -w kernel.keys.maxkeys=500, however Colab doesn't allow it. Related: Error response from daemon: join session keyring: create session key: disk quota exceeded.
Notepad showing the above:
https://colab.research.google.com/drive/1oRja4v-PtY6lFMJIIF79No4s3s-vbqd4
Suggested further reading:
Finding the minimal set of privileges for a docker container.
I had the same issue as you, and apparently Docker is not supported in Google Colab, according to the answers on this issue from its GitHub repository: https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778.
I know it is an old question, but this is an old answer (2020) by a member of the Google Colaboratory team:
this isn't possible, and we currently have no plans to support this.
The virtualization/isolation provided by Docker is partly available in Colab already: each Colab session is an isolated environment in itself, where you can install the required libraries and choose the hardware (Colab offers a free GPU, selectable at runtime). That said, I have used conda, and when I switched to Docker there was a distinct difference in performance: Docker never had GPU memory fragmentation, whereas conda (bare metal) did. I have been using single Colab sessions for training in TF2 and will soon add testing and monitoring sessions (using TensorBoard), so I can appreciate the question of whether having Docker in Colab is worthwhile or not. I will come back and post my feedback soon.
I created a fresh Digital Ocean server with Docker on it (using Laradock) and got my Laravel website working well.
Now I want to automate my deployments using Deployer.
I think my only problem is that I can't get Deployer to run docker exec -it $(docker-compose ps -q php-fpm) bash;, which is the command I successfully use by hand (after SSHing from my local machine to the Digital Ocean server) to enter the appropriate Docker container.
When Deployer tries to run it, I get this error message:
➤ Executing task execphpfpm
[1.5.6.6] > cd /root/laradock && (pwd;)
[1.5.6.6] < /root/laradock
[1.5.6.6] > cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)
[1.5.6.6] < the input device is not a TTY
➤ Executing task deploy:failed
• done on [1.5.6.6]
✔ Ok [3ms]
➤ Executing task deploy:unlock
[1.5.6.6] > rm -f ~/daily/.dep/deploy.lock
• done on [1.5.6.6]
✔ Ok [188ms]
In Client.php line 99:
[Deployer\Exception\RuntimeException (1)]
The command "cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)" failed.
Exit Code: 1 (General error)
Host Name: 1.5.6.6
================
the input device is not a TTY
Here are the relevant parts of my deploy.php:
host('1.5.6.6')
->user('root')
->identityFile('~/.ssh/id_rsa2018-07-09')
->forwardAgent(true)
->stage('production')
->set('deploy_path', '~/{{application}}');
before('deploy:prepare', 'execphpfpm');
task('execphpfpm', function () {
cd('/root/laradock');
run('pwd;');
run('docker exec -it $(docker-compose ps -q php-fpm) bash;');
run('pwd');
});
I've already spent a day and a half reading countless articles and trying many different variations: replacing the -it flag with -i, setting export COMPOSE_INTERACTIVE_NO_CLI=1, or replacing the whole docker exec command with docker-compose exec php-fpm bash;.
I expect that I'm missing something fairly simple. Docker is widely used, and Deployer seems popular too.
To use Laravel Deployer you should connect via ssh directly to the workspace container.
You can expose the container's ssh port:
https://laradock.io/documentation/#access-workspace-via-ssh
Let's say you've forwarded the container's SSH port 22 to VM port 2222. In that case you need to configure Deployer to use port 2222, as sketched below.
Also remember to set proper secure SSH keys, not the default ones.
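For illustration, a sketch of the adjusted host definition in deploy.php, reusing the asker's values (the user name is a placeholder to adapt; ->port() is a standard Deployer host option):
host('1.5.6.6')
    ->user('laradock')
    ->port(2222)
    ->identityFile('~/.ssh/id_rsa2018-07-09')
    ->stage('production')
    ->set('deploy_path', '~/{{application}}');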
You should try
docker-compose exec -T php-fpm bash;
The -T option will
Disable pseudo-tty allocation. By default docker-compose exec allocates a TTY.
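Applied to the task from the question, a sketch (an interactive bash makes no sense without a TTY, so you would run your actual commands directly; the artisan call is only an example of mine):
task('execphpfpm', function () {
    cd('/root/laradock');
    // Run the real deployment commands non-interactively inside the container:
    run('docker-compose exec -T php-fpm php artisan migrate --force');
});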
In my particular case I had separate containers for PHP and Composer. That is why I could not connect to the container via SSH while deploying.
So I configured the bin/php and bin/composer parameters like this:
set('bin/php', 'docker exec php php');
set('bin/composer', 'docker run --volume={{release_path}}:/app composer');
Notice that here we use exec for the persistent php container, which is already running, and run to start a new instance of the composer container, which stops after installing the dependencies.
Consider the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y apache2 && \
    apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
When running the container with the command docker run -p 8080:80 <image-id>, the container starts and remains running, and the default Apache web page can be accessed at http://localhost:8080 from the host, as expected. With this run command, however, I am not able to quit the container using Ctrl+C, also as expected, since the container was not launched with the -it option. Now, if the -it option is added to the run command, the container exits immediately after startup. Why is that? Is there an elegant way to have Apache run in the foreground while exiting on Ctrl+C?
This behaviour is caused by Apache and it is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.
It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:
# Run container interactively:
docker run -it <image-id>
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp <container-id>:/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully
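You can also trigger the shutdown deliberately to confirm the behaviour (my own addition; docker kill's --signal option is standard Docker CLI):
# Start the container detached:
docker run -d -p 8080:80 <image-id>
# Send SIGWINCH; Apache shuts down gracefully and the container exits:
docker kill --signal=WINCH <container-id>
# The container should now be listed as exited:
docker ps -a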
Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9
All that you need to do is pass the -d option to the run command:
docker run -d -p 8080:80 <image-id>
As yamenk mentioned, daemonizing works because you send it to the background and decouple the window resizing.
Since the follow-up post mentioned that running in the foreground may have been desirable, there is a good way to simulate that experience after daemonizing:
docker logs -f container-name
This will drop the usual stdout like "GET / HTTP..." connection messages back onto the console so you can watch them flow.
Now you can resize the window and stuff and still see your troubleshooting info.
I am also experiencing this problem on WSL2 under Windows 10, with Docker Engine v20.10.7.
Workaround:
# start bash in httpd container:
docker run --rm -ti -p 80:80 httpd:2.4.48 /bin/bash
# inside container execute:
httpd -D FOREGROUND
Now Apache httpd keeps running until you press CTRL-C or resize(?!) the terminal window.
After closing httpd, type:
exit
to leave the container
A workaround is to pipe the output to cat:
docker run -it -p 8080:80 <image-id> | cat
NOTE: It is important to use -i and -t.
Ctrl+C will work and resizing the terminal will not shut down Apache.
I read my Docker container log output using
docker logs -f <container_name>
I log lots of data to the log in my node.js app via calls to console.log(). I need to clean the log, because it's gotten too long and the docker logs command first runs through the existing lines of the log before getting to the end. How do I clean it to make it short again? I'd like to see a command like:
docker logs clean <container_name>
But it doesn't seem to exist.
First, if you just need to see less output, you can have docker only show you the more recent lines:
docker logs --since 30s -f <container_name_or_id>
Or you can put a number of lines to limit:
docker logs --tail 20 -f <container_name_or_id>
To delete the logs on a Docker for Linux install, you can run the following for a single container:
echo "" > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
Note that this requires root, and I do not recommend this. You could potentially corrupt the logfile if you null the file in the middle of docker writing a log to the same file. Instead you should configure docker to rotate the logs.
Lastly, you can configure docker to automatically rotate logs with the following in an /etc/docker/daemon.json file:
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
That allows docker to keep up to 3 log files per container, with each file limited to 10 megs (so a limit between 20 and 30 megs of logs per container). You will need to run a systemctl reload docker to apply those changes. And these changes are the defaults for any newly created container, they do not apply to already created containers. You will need to remove and recreate any existing containers to have these settings apply.
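If you prefer to set this for a single container instead of as a daemon-wide default, the same limits can be passed when creating the container (standard docker run logging flags; the image name is a placeholder):
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  <image>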
The best script I found is
sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'
It cleans all logs and you don't need to stop the containers.
Credit goes to https://bytefreaks.net/applications/docker/horrible-solution-how-to-delete-all-docker-logs
If you want to remove all log files, not only for a specific container's log, you can use:
docker system prune
But, note that this does not clear logs for running containers.
This is not the ideal solution, but until Docker builds in a command to do it, this is a good workaround.
Create a script file docker-clean-logs.sh with this content:
#!/bin/bash
# Delete the JSON log file of the container whose name or ID is passed as $1:
rm "$(docker inspect -f '{{.LogPath}}' "$1")"
Grant the execute permission to it:
chmod +x ./docker-clean-logs.sh
Stop the Docker container that you want to clean:
docker stop <container_name>
Then run the above script:
./docker-clean-logs.sh <container_name>
And finally run your container again:
docker start ...
Credit goes to the user sgarbesi on this page: https://github.com/docker/compose/issues/1083
You can use logrotate as explained in this article
https://sandro-keil.de/blog/2015/03/11/logrotate-for-docker-container/
This needs to be done before launching the container.
Run:
docker inspect {containerId}
Copy the LogPath value, then truncate the file:
truncate -s 0 {LogPath}
Solution for a docker swarm service: add a logging section under the service definition:
logging:
  options:
    max-size: "10m"
    max-file: "10"
In order to do this on OSX, you need to get to the virtual machine the Docker containers are running in.
You can use the walkerlee/nsenter image to run commands inside the VM like so:
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
Combining that with a simplified version of the accepted answer you get:
#!/bin/sh
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n \
cp /dev/null $(docker inspect -f '{{.LogPath}}' $1)
Save it, chmod +x it, run it.
As far as I can tell this doesn't require the container to be stopped. Also, it clears out the log file (instead of deleting it) avoiding errors when doing docker logs right after cleanup.
On Windows 10 none of the solutions worked for me; I kept getting 'No such file or directory'.
This worked
Get container ID (inspect the container)
In file explorer open docker-desktop-data (in WSL)
Navigate to version-pack-data\community\docker\containers\CONTAINER_ID
Stop the container
Open the CONTAINER_ID-json.log file and trim it, or just create a blank file with the same name
source
I'm currently learning Docker and have made a nice and simple Docker Compose setup: 3 containers, each with its own Dockerfile. How could I go about converting this to work on CoreOS so I can set up a cluster later on?
web:
  build: ./app
  ports:
    - "3030:3000"
  links:
    - "redis"
newrelic:
  build: ./newrelic
  links:
    - "redis"
redis:
  build: ./redis
  ports:
    - "6379:6379"
  volumes:
    - /data/redis:/data
Taken from https://docs.docker.com/compose/install/.
The only thing is that /usr is read-only on CoreOS, but /opt/bin is writable and in the path, so:
sd-xx~ # mkdir /opt/
sd-xx~ # mkdir /opt/bin
sd-xx~ # curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 403 0 403 0 0 1076 0 --:--:-- --:--:-- --:--:-- 1080
100 7990k 100 7990k 0 0 2137k 0 0:00:03 0:00:03 --:--:-- 3176k
sd-xx~ # chmod +x /opt/bin/docker-compose
sd-xx~ # docker-compose
Define and run multi-container applications with Docker.
Usage:
docker-compose [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
-v, --version Print version and exit
Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
up Create and start containers
migrate-to-labels Recreate containers to add labels
I've created a simple script for installing the latest Docker Compose on CoreOS:
https://gist.github.com/marszall87/ee7c5ea6f6da9f8968dd
#!/bin/bash
mkdir -p /opt/bin
# Resolve the latest Linux x86_64 release asset via the GitHub API (requires jq):
curl -L `curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[].browser_download_url | select(contains("Linux") and contains("x86_64"))'` > /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose
Just run it with sudo
The proper way to install or run really anything on CoreOS is either:
Install it as a unit
Run it in a separate Docker container
For docker-compose you probably want to install it as a unit, just like you have docker as a unit. See Digital Ocean's excellent guides on CoreOS and the systemd units chapter to learn more.
Locate your cloud config based on your cloud provider or custom installation, see https://coreos.com/os/docs/latest/cloud-config-locations.html for locations.
Install docker-compose by adding it as a unit:
#cloud-config

coreos:
  units:
    - name: install-docker-compose.service
      command: start
      content: |
        [Unit]
        Description=Install docker-compose
        ConditionPathExists=!/opt/bin/docker-compose

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/mkdir -p /opt/bin/
        ExecStart=/usr/bin/curl -o /opt/bin/docker-compose -sL "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-linux-x86_64"
        ExecStart=/usr/bin/chmod +x /opt/bin/docker-compose
Note that I couldn't get the uname -s and uname -m expansions to work in the curl statement so I just replaced them with their expanded values.
Validate your config file with
coreos-cloudinit -validate --from-file path-to-cloud-config
It should output something like
myhost core # coreos-cloudinit -validate --from-file path-to-cloudconfig
2016/12/12 12:45:03 Checking availability of "local-file"
2016/12/12 12:45:03 Fetching user-data from datasource of type "local-file"
myhost core #
Note that coreos-cloudinit doesn't validate the contents-blocks in your cloud-config. Restart CoreOS when you're finished, and you're ready to go.
Update: As @Wolfgang comments, you can run coreos-cloudinit --from-file path-to-cloud-config instead of restarting CoreOS.
I would also suggest running docker-compose in a Docker container, like the one from dduportal.
For the sake of usability I extended my cloud-config.yml as follows:
write_files:
  - path: "/etc/profile.d/aliases.sh"
    content: |
      alias docker-compose="docker run -v \"\$(pwd)\":\"\$(pwd)\" -v /var/run/docker.sock:/var/run/docker.sock -e COMPOSE_PROJECT_NAME=\$(basename \"\$(pwd)\") -ti --rm --workdir=\"\$(pwd)\" dduportal/docker-compose:latest"
After updating the cloud-config via sudo coreos-cloudinit -from-url http-path-to/cloud-config.yml and a system reboot, you are able to use the docker-compose command like you are used to on every other machine.
CenturyLink Labs created a Ruby gem called fig2coreos.
It translates fig.yml to .service files.
fig has been deprecated since docker-compose was created, but the syntax seems to be the same, so it could probably work.
Simple 3 Steps:
sudo mkdir -p /opt/bin
Grab the command from the official website https://docs.docker.com/compose/install/ and change the output path from /usr/local/bin/docker-compose to /opt/bin:
sudo curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
Make executable:
sudo chmod +x /opt/bin/docker-compose
Now you have docker-compose :)
Here it is, the best way I found:
core@london-1 ~ $ docker pull dduportal/docker-compose
core@london-1 ~ $ cd /dir/containing/your/docker-compose.yml
core@london-1 ~ $ docker run -v "$(pwd)":/app \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e COMPOSE_PROJECT_NAME=$(basename "$(pwd)") \
    -ti --rm \
    dduportal/docker-compose:latest up
done!
Well, CoreOS supports Docker, but it is a bare-bones Linux with clustering support, so you need to pick a base image for all your containers (use FROM, and in the Dockerfile you might also need to RUN yum -y install bzip2 gnupg, etc.) that has the bins and libs needed by your app and Redis (better to take some Ubuntu base image).
You can put all of them in one container, or keep them separate; if you keep them separate, you need to link the containers and optionally volume-mount. Docker has some good notes about it (https://docs.docker.com/userguide/dockervolumes/).
At last, you need to write a cloud-config that specifies the systemd units. In your case you will have 3 units started by systemd (systemd replaces the good old init system in CoreOS); feed it to coreos-cloudinit (tip: coreos-cloudinit -from-file=./cloud-config -validate=false). You also need to provide this cloud-config on the Linux bootcmd for persistency.
Currently, the easiest way is to use docker-compose against a CoreOS Vagrant VM. You just need to make sure to forward the Docker port.
If you are not particularly attached to using docker-compose, you can try CoreOS running Kubernetes. There are multiple options and I have implemented one of those for Azure.
For using docker-compose with Fedora CoreOS you may run into issues with Python; however, running docker-compose from a container works perfectly.
There is a handy bash wrapper script, documented in the official documentation at https://docs.docker.com/compose/install/#alternative-install-options under the "Install as a container" section.