I followed the instructions on this page to build and deploy Mesos. I did this on an Ubuntu Trusty VM with 1 Mesos master and 1 slave. The following commands are what I used to run Mesos.
$ mesos-master --ip=10.0.2.15 --work_dir=/var/lib/mesos --log_dir=/var/log/mesos
$ mesos-slave --master=10.0.2.15:5050 --containerizers=docker,mesos
All three tests finished without any error messages.
Then I followed this page to deploy Kubernetes. After building Kubernetes-Mesos, I used the following commands to deploy Kubernetes.
$ export KUBERNETES_MASTER_IP=10.0.2.15
$ export KUBERNETES_MASTER=http://${KUBERNETES_MASTER_IP}:8888
$ docker run -d --hostname $(uname -n) --name etcd \
-p 4001:4001 -p 7001:7001 quay.io/coreos/etcd:v2.0.12 \
--listen-client-urls http://0.0.0.0:4001 \
--advertise-client-urls http://${KUBERNETES_MASTER_IP}:4001
The etcd container is running.
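A quick sanity check that etcd is reachable (assuming etcd's v2 HTTP API is exposed on port 4001):
$ curl -s http://${KUBERNETES_MASTER_IP}:4001/version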
$ export PATH="$(pwd)/_output/local/go/bin:$PATH"
$ export MESOS_MASTER=10.0.2.15:5050
$ cat <<EOF >mesos-cloud.conf
[mesos-cloud]
mesos-master = ${MESOS_MASTER}
EOF
$ km apiserver \
--address=${KUBERNETES_MASTER_IP} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--service-cluster-ip-range=10.10.10.0/24 \
--port=8888 \
--cloud-provider=mesos \
--cloud-config=mesos-cloud.conf \
--secure-port=0 \
--v=1 >apiserver.log 2>&1 &
$ km controller-manager \
--master=${KUBERNETES_MASTER_IP}:8888 \
--cloud-provider=mesos \
--cloud-config=./mesos-cloud.conf \
--v=1 >controller.log 2>&1 &
$ km scheduler \
--address=${KUBERNETES_MASTER_IP} \
--mesos-master=${MESOS_MASTER} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--mesos-user=root \
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=2 >scheduler.log 2>&1 &
The logs look correct, with no error messages.
kubectl get services shows:
NAME             CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
k8sm-scheduler   10.10.10.50   <none>        10251/TCP   1m
kubernetes       10.10.10.1    <none>        443/TCP     2m
Then I created a simple nginx pod, but kubectl get pods always shows it as Pending. kubectl get events shows:
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
2m 47s 9 nginx Pod Warning FailedScheduling {default-scheduler } Error scheduling: No suitable offers for pod/task
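For reference, the pod was created from a minimal spec along these lines (illustrative, not the exact manifest):
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF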
What does "No suitable offers for pod/task" mean? In Mesos' log, I see that Mesos keeps sending offers to the Kubernetes framework, but they keep being DECLINED. If I run mesos-execute --master=10.0.2.15:5050 --name=echo --command="echo 'hello world'" --containerizer=docker --docker_image=ubuntu:14.04, it deploys a Docker image with the "mesos-" prefix and runs the command, so the Docker containerizer seems to work properly.
Kubernetes-Mesos will decline offers for several reasons:
- The resources in the offer don't satisfy the minimum required to launch the pod-task. The first pod-task launched on a given slave requires executor resources in addition to the pod-task resources.
- The resources in the offer aren't compatible with the scheduler. This happens if you start the framework, launch a task, kill the scheduler process, then restart the scheduler with different flags; some scheduler flags affect the command line used to launch the executor. The quickest way to remedy this is to delete any running pods and manually kill the incompatible executor process(es) already running on the slave(s).
- There is a problem with the node info in the apiserver registry.
What version of k8sm are you running? master? You might try increasing the verbosity of the scheduler logs (--v=3) and then dumping a copy of your scheduler logs up on pastebin or some such so that they can be analyzed. It's often difficult to troubleshoot situations like this without the logs.
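For example, you could stop the scheduler and relaunch it with the same flags as in the question, raising only the verbosity (as far as I know, --v does not affect the executor command line, so this should not trigger the incompatible-offer situation described above):
$ pkill -f "km scheduler"   # stop the old scheduler process first
$ km scheduler \
--address=${KUBERNETES_MASTER_IP} \
--mesos-master=${MESOS_MASTER} \
--etcd-servers=http://${KUBERNETES_MASTER_IP}:4001 \
--mesos-user=root \
--api-servers=${KUBERNETES_MASTER_IP}:8888 \
--cluster-dns=10.10.10.10 \
--cluster-domain=cluster.local \
--v=3 >scheduler.log 2>&1 &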
It sounds like the offers that are coming in do not satisfy the needs of Kubernetes. You have to find out what your framework needs, and then compare that to what the rejected offers look like.
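One way to do that comparison is to dump the Mesos master's state, which includes the slaves' resources and current offers (state.json is the endpoint on Mesos masters of this vintage):
$ curl -s http://10.0.2.15:5050/master/state.json | python -m json.tool | less
Then compare the cpus/mem/ports in the slave's resources against what the scheduler needs for the pod-task plus the executor overhead.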
Related
How can a Docker container that runs on a Docker host, rather than in a k8s pod, operate on the k8s cluster? For example, I need to do something like this inside a container:
kubectl get pods
In my Dockerfile, I installed kubectl:
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl
When I run kubectl get pods, the result is as follows:
kubectl get pod
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
So I mounted the config into the Docker container with the docker run command:
docker run -v /root/.kube/config:/root/.kube/config my-images
The result is as follows:
kubectl get pod
Error in configuration:
* unable to read client-cert /root/.minikube/profiles/minikube/client.crt for minikube due to open /root/.minikube/profiles/minikube/client.crt: no such file or directory
* unable to read client-key /root/.minikube/profiles/minikube/client.key for minikube due to open /root/.minikube/profiles/minikube/client.key: no such file or directory
* unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: no such file or directory
This seems to be due to the current-context: minikube in the k8s config file
After mounting the authentication files as well, it ran successfully.
Now I can call kubectl get pods or otherwise manipulate the cluster outside the container when I mount -v /root/.kube/config:/root/.kube/config -v /root/.minikube/:/root/.minikube/. However, these mounts do not apply to clusters created by kubeadm or other tools.
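For minikube, the full working invocation therefore looks like this (assuming my-images, the image built above, runs kubectl as its command):
docker run -v /root/.kube/config:/root/.kube/config \
-v /root/.minikube/:/root/.minikube/ \
my-images kubectl get pods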
But I want to be able to mount the required configuration files and so on into the container in a uniform way, so that I can use the same command to manipulate a k8s cluster that may have been created by minikube, rancher k3s, or kubeadm.
In summary, I want to mount a uniform set of files or directories (such as -v file:file -v dir:dir) that works for a k8s cluster created in any way, so that I can get pod status, create and delete various types of resources, and so on.
I need maximum permissions to operate on k8s.
Can someone please tell me what I need to do?
I think you can set the Docker user when running your container.
You can run (in this example, the ubuntu image) with an explicit user id and group id:
$ docker run -it --rm \
--mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
--workdir /opt/shared \
--user "$(id -u):$(id -g)" \
ubuntu bash
The difference is --user "$(id -u):$(id -g)" - it tells the container to run with the current user id and group id, which are obtained dynamically through bash command substitution by running id -u and id -g and passing on their values.
This can be good enough already. The problem here is that the user and group don't really exist in the container. This approach works for the terminal command, but the session looks broken and you'll see some ugly error messages like:
"groups: cannot find name for group ID"
"I have no name!"
- your container, complaining
While bash works, some apps might refuse to run if those configs look fishy.
Next, you have to configure and run your Docker containers correctly, so that you don't have to fight permission errors and can access your files easily.
As you should create a non-root user in your Dockerfile in any case, this is a nice thing to do. You might as well set the user id and group id explicitly.
Below is a minimal Dockerfile which expects to receive build-time arguments, and creates a new user called “user”:
FROM ubuntu
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
USER user
Take a look: add-user-to-container.
You can use this Dockerfile to build a fresh image with the host uid and gid. This image needs to be built specifically for each machine it will run on, to make sure everything is in order.
Then you can use this image for your command. The user id and group id are correct without having to specify them when running the container.
$ docker build -t your-image \
--build-arg USER_ID=$(id -u) \
--build-arg GROUP_ID=$(id -g) .
$ docker run -it --rm \
--mount "type=bind,src=$(pwd)/shared,dst=/opt/shared" \
--workdir /opt/shared \
your-image bash
There is no need to use chown, and you will be rid of annoying permission errors.
Please take a look at these very interesting articles: kubernetes-management-docker, docker-shared-permissions.
I have tried to install Docker on Google Colab in the following ways:
(1)https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
(2)https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-18-04
(3)https://colab.research.google.com/drive/10OinT5ZNGtdLLQ9K399jlKgNgidxUbGP
I started the Docker service and checked its status, but it showed 'Docker is not running'. Maybe Docker cannot work on Colab.
I am confused and want to know the reason.
Thanks
It's possible to run Docker in Colab, but with limited functionality.
There are two methods of running the Docker service: a regular one (more restrictive), and rootless mode (dockerd inside RootlessKit).
dockerd
Install by:
!apt-get -qq install docker.io
Use the following shell script:
%%shell
set -x
dockerd -b none --iptables=0 -l warn &
for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
docker info
docker network ls
docker pull hello-world
docker pull ubuntu
# docker build -t myimage .
docker images
kill $(jobs -p)
As shown above, you have to run the Docker service (dockerd) in the background before your docker commands, then kill it. Unfortunately, you have to start dockerd in each cell where you want to run docker commands.
Notes on dockerd arguments:
-b none/--bridge none - Disables a network bridge to avoid errors.
--iptables=0 - Disables addition of iptables rules to avoid errors.
-D - Add to enable debug mode.
However, in this mode most containers will generate errors related to the read-only file system.
Additional notes:
To disable cpuset support, run: !umount -vl /sys/fs/cgroup/cpuset.
Related issue: https://github.com/docker/for-linux/issues/1124.
Here are some notebooks:
https://colab.research.google.com/drive/1Lmbkc7v7XjSWK64E3NY1cw7iJ0sF1brl
https://colab.research.google.com/drive/1RVS5EngPybRZ45PQRmz56PPdz9nWStIb (without cpuset support)
Rootless dockerd
Rootless mode allows running the Docker daemon and containers as a non-root user.
To install, use the following code:
%%shell
useradd -md /opt/docker docker
apt-get -qq install iproute2 uidmap
sudo -Hu docker SKIP_IPTABLES=1 bash < <(curl -fsSL https://get.docker.com/rootless)
To run dockerd service, there are two methods: using a script (dockerd-rootless.sh) or running rootlesskit directly.
Here is the script which uses dockerd-rootless.sh to run a hello-world container:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
/opt/docker/bin/dockerd-rootless.sh --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker run "$@"
jobs -p
kill $(jobs -p)
To run the above script, run:
!sudo -Hu docker bash -x docker-run.sh hello-world
The above may generate the following warnings:
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
To remount some folders with write access, you can try:
!mount -vt sysfs sysfs /sys -o rw,remount
!mount -vt tmpfs tmpfs /sys/fs/cgroup -o rw,remount
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1
The above error is related to the dockerd-rootless.sh script, which adds extra network parameters to rootlesskit, such as:
--net=vpnkit --mtu=1500 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin
This has been reported at https://github.com/rootless-containers/rootlesskit/issues/181 (however, it has gone unaddressed).
To work around the above problem, we can pass our own arguments to rootlesskit using the following file instead:
%%writefile docker-run.sh
#!/usr/bin/env bash
set -e
export DOCKER_SOCK=/opt/docker/.docker/run/docker.sock
export DOCKER_HOST=unix://$DOCKER_SOCK
export PATH=/opt/docker/bin:$PATH
export XDG_RUNTIME_DIR=/opt/docker/.docker/run
rootlesskit --debug --disable-host-loopback --copy-up=/etc --copy-up=/run /opt/docker/bin/dockerd -b none --experimental --iptables=false --storage-driver vfs &
for i in $(seq 5); do [ ! -S "$DOCKER_SOCK" ] && sleep 2 || break; done
docker "$@"
jobs -p
kill $(jobs -p)
Then run as:
!sudo -Hu docker bash docker-run.sh run --cap-add SYS_ADMIN hello-world
Depending on your image, this may generate the following error:
process_linux.go:449: container init caused "join session keyring: create session key: operation not permitted": unknown.
This could be solved by !sysctl -w kernel.keys.maxkeys=500; however, Colab doesn't allow it. Related: Error response from daemon: join session keyring: create session key: disk quota exceeded.
Notebook showing the above:
https://colab.research.google.com/drive/1oRja4v-PtY6lFMJIIF79No4s3s-vbqd4
Suggested further reading:
Finding the minimal set of privileges for a docker container.
I had the same issue, and apparently Docker is not supported in Google Colab, according to the answers on this issue in its GitHub repository: https://github.com/googlecolab/colabtools/issues/299#issuecomment-615308778.
I know it is an old question, but here is an old answer (2020) by a member of the Google Colaboratory team:
this isn't possible, and we currently have no plans to support this.
The virtualization/isolation that Docker provides is already available in Colab, since each Colab session is an isolated environment in itself: you install the required libraries into it, and it offers hardware abstraction (Colab provides a free GPU, which you can choose at run time). I have used conda, and when I switched to Docker there was a distinct difference in performance: Docker never had GPU memory fragmentation, while conda (bare metal) did. I have been trying single Colab sessions for training in TF2 and will soon add testing and monitoring sessions (using TensorBoard), so I can fully judge whether having Docker in Colab is good or not. I will come back and post my feedback soon.
I am trying to submit a Spark job via Kubernetes. I went through https://spark.apache.org/docs/latest/running-on-kubernetes.html and successfully submitted a job with the command below:
$ bin/spark-submit \
--master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=<spark-image> \
local:///path/to/examples.jar
Now I am trying to submit my own job, which involves Kafka and PostgreSQL; access to them is available over the VPN.
The job works locally via IntelliJ, but the same job fails when I submit it to Kubernetes.
The exception is:
Caused by: java.net.UnknownHostException: db-host-name
How can I resolve the DNS name over the VPN?
Try configuring the DNS options for the Docker image. Either of these two options has worked for DNS/VPN issues I've experienced in the past:
--dns=<IP_ADDRESS>
--dns-search=<DOMAIN>
Here are the more detailed docs.
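As a sketch, either per container or daemon-wide (10.8.0.1 and corp.example.com are placeholders for your VPN's DNS server and search domain):
# Per container:
docker run --dns=10.8.0.1 --dns-search=corp.example.com <spark-image>
# Daemon-wide on each node (then restart Docker):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "dns": ["10.8.0.1"],
  "dns-search": ["corp.example.com"]
}
EOF
sudo systemctl restart docker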
I recently found out about Podman (https://podman.io). Having a way to use Linux fork processes instead of a daemon, and not having to run as root, got my attention.
But I'm very used to orchestrating the containers running on my machine with docker-compose (in production we use Kubernetes), and I truly like it.
So I'm trying to replace Docker. I will keep docker-compose and use podman as an alias for docker, as Podman uses the same syntax as Docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
Yes, that is doable now. Check podman-compose; that is one way of doing it. Another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post from Jérôme Petazzoni (@jpetazzo) about it: from docker-compose to kubernetes deployment.
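A minimal Kompose sketch (assuming your docker-compose.yml is in the current directory; it writes Kubernetes manifests next to it):
$ kompose convert -f docker-compose.yml
# then apply the generated *.yaml manifests with kubectl apply -f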
Update 6 May 2022 : Podman now supports Docker Compose v2.2 and higher (see Podman 4.1.0 release notes)
Old answer:
Running docker-compose with Podman as a normal user (rootless)
Requirement: Podman version >= 3.2.1 (released in June 2021)
Install the executable docker-compose
curl -sL -o ~/docker-compose https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
chmod 755 ~/docker-compose
Alternatively you could also run docker-compose in a container image (see below).
Run
systemctl --user start podman.socket
Set the environment variable DOCKER_HOST
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
Run
~/docker-compose up -d
Running docker-compose with Podman as root
Requirement: Podman version >= 3.0 (released in February 2021)
Follow the same procedure but remove the flag --user
systemctl start podman.socket
Running docker-compose in a container image
Use the container image docker.io/docker/compose to run docker-compose:
podman \
run \
--rm \
--detach \
--env DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock \
--security-opt label=disable \
--volume $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock \
--volume $(pwd):$(pwd) \
--workdir $(pwd) \
docker.io/docker/compose \
--verbose \
up -d
(the flag --verbose is optional)
The same command with short command-line options on a single line:
podman run --rm -d -e DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock --security-opt label=disable -v $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock -v $(pwd):$(pwd) -w $(pwd) docker.io/docker/compose --verbose up -d
Regarding SELinux: running Podman with SELinux is preferable from a security point of view, but I didn't get it to work on a Fedora 34 computer, so I disabled SELinux by adding the command-line option
--security-opt label=disable
Troubleshooting tips
Test the Docker REST API
A minimal check to see that the Docker REST API is working:
$ curl -H "Content-Type: application/json" \
--unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
http://localhost/_ping
OK$
Avoid short container image names
If any of your docker-compose.yaml or Dockerfile files contain a short container image name, for instance
$ grep image: docker-compose.yaml
image: mysql:8.0.19
$
$ grep FROM Dockerfile
FROM python:3.9
$
edit the files to use the whole container image name instead
$ grep image: docker-compose.yaml
image: docker.io/library/mysql:8.0.19
$
$ grep FROM Dockerfile
FROM docker.io/library/python:3.9
$
Most often, short names have been used to reference Docker Hub Official Images (a catalogue), so a good guess is to prepend docker.io/library/ to the container image name.
There are currently many different container image registries, not just DockerHub (docker.io). Writing the whole container image name is thus a good practice. Podman might complain otherwise depending on how Podman is configured.
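If you prefer that short names keep working, the search registries can instead be configured; a minimal sketch (this is the newer TOML registries.conf v2 syntax, and the exact format depends on your Podman version):
# /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]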
Rootless users can't bind to ports below 1024
If for instance
$ grep -A1 ports: docker-compose.yml
ports:
- 80:80
$
edit docker-compose.yaml so that the host port number is >= 1024, for instance 8080:
$ grep -A1 ports: docker-compose.yml
ports:
- 8080:80
$
An alternative solution is to adjust net.ipv4.ip_unprivileged_port_start with sysctl (see Shortcomings of Rootless Podman)
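For example (note that lowering this limit has security implications):
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# persist across reboots:
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf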
In case Systemd is missing
Most Linux distributions use Systemd, where you would preferably start the Podman service (which provides the REST API) by "starting" the Podman socket
systemctl --user start podman.socket
or
systemctl start podman.socket
but in case Systemd is missing you could also start the Podman service directly
podman system service --time 0 unix:/some/path/podman.sock
Systemd gives the extra benefit that the Podman service is started on demand with Systemd socket activation and stops after some time of inactivity.
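To get that on-demand behaviour across reboots as well, enable the socket rather than just starting it:
systemctl --user enable --now podman.socket
systemctl --user status podman.socket   # verify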
Caveat: Swarm functionality is missing
A difference to Docker is that the functionality relating to Swarm is not supported when using docker-compose with Podman.
References:
https://www.redhat.com/sysadmin/podman-docker-compose
https://github.com/containers/podman/discussions/10644#discussioncomment-857897
Ensure Podman is installed on your machine.
You can install Podman Compose in a terminal with the following command:
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
cd into the directory your docker-compose file is located in
Run podman-compose up
See the following link for a decent introduction.
I'm currently learning Docker, and have made a nice and simple Docker Compose setup: 3 containers, all with their own Dockerfile setup. How could I go about converting this to work on CoreOS so I can set up a cluster later on?
web:
  build: ./app
  ports:
    - "3030:3000"
  links:
    - "redis"
newrelic:
  build: ./newrelic
  links:
    - "redis"
redis:
  build: ./redis
  ports:
    - "6379:6379"
  volumes:
    - /data/redis:/data
Taken from https://docs.docker.com/compose/install/.
The only thing is that /usr is read-only, but /opt/bin is writable and in the PATH, so:
sd-xx~ # mkdir /opt/
sd-xx~ # mkdir /opt/bin
sd-xx~ # curl -L https://github.com/docker/compose/releases/download/1.3.3/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 403 0 403 0 0 1076 0 --:--:-- --:--:-- --:--:-- 1080
100 7990k 100 7990k 0 0 2137k 0 0:00:03 0:00:03 --:--:-- 3176k
sd-xx~ # chmod +x /opt/bin/docker-compose
sd-xx~ # docker-compose
Define and run multi-container applications with Docker.
Usage:
docker-compose [options] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name (default: directory name)
--verbose Show more output
-v, --version Print version and exit
Commands:
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pulls service images
restart Restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
up Create and start containers
migrate-to-labels Recreate containers to add labels
I've created a simple script for installing the latest Docker Compose on CoreOS:
https://gist.github.com/marszall87/ee7c5ea6f6da9f8968dd
#!/bin/bash
mkdir -p /opt/bin
curl -L `curl -s https://api.github.com/repos/docker/compose/releases/latest | jq -r '.assets[].browser_download_url | select(contains("Linux") and contains("x86_64"))'` > /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose
Just run it with sudo
The proper way to install or run really anything on CoreOS is either
Install it as a unit
Run in a separate docker container
For docker-compose you probably want to install it as a unit, just like you have docker as a unit. See Digital Ocean's excellent guides on CoreOS and the systemd units chapter to learn more.
Locate your cloud config based on your cloud provider or custom installation, see https://coreos.com/os/docs/latest/cloud-config-locations.html for locations.
Install docker-compose by adding it as a unit
#cloud-config
coreos:
  units:
    - name: install-docker-compose.service
      command: start
      content: |
        [Unit]
        Description=Install docker-compose
        ConditionPathExists=!/opt/bin/docker-compose
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/mkdir -p /opt/bin/
        ExecStart=/usr/bin/curl -o /opt/bin/docker-compose -sL "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-Linux-x86_64"
        ExecStart=/usr/bin/chmod +x /opt/bin/docker-compose
Note that I couldn't get the uname -s and uname -m expansions to work in the curl statement so I just replaced them with their expanded values.
Validate your config file with
coreos-cloudinit -validate --from-file path-to-cloud-config
It should output something like
myhost core # coreos-cloudinit -validate --from-file path-to-cloudconfig
2016/12/12 12:45:03 Checking availability of "local-file"
2016/12/12 12:45:03 Fetching user-data from datasource of type "local-file"
myhost core #
Note that coreos-cloudinit doesn't validate the contents-blocks in your cloud-config. Restart CoreOS when you're finished, and you're ready to go.
Update: As @Wolfgang comments, you can run coreos-cloudinit --from-file path-to-cloud-config instead of restarting CoreOS.
I would also suggest running docker-compose in a Docker container, like the one from dduportal.
For the sake of usability I extended my cloud-config.yml as follows:
write_files:
  - path: "/etc/profile.d/aliases.sh"
    content: |
      alias docker-compose="docker run -v \"\$(pwd)\":\"\$(pwd)\" -v /var/run/docker.sock:/var/run/docker.sock -e COMPOSE_PROJECT_NAME=\$(basename \"\$(pwd)\") -ti --rm --workdir=\"\$(pwd)\" dduportal/docker-compose:latest"
After updating the cloud-config via sudo coreos-cloudinit -from-url http-path-to/cloud-config.yml and a system reboot, you are able to use the docker-compose command like you are used to on every other machine.
CenturyLink Labs created a rubygem called fig2coreos.
It translates fig.yml to .service files.
Fig is deprecated, since docker-compose replaced it, but the syntax seems to be the same, so it could probably work.
Three simple steps:
sudo mkdir -p /opt/bin
Grab the command from the official website https://docs.docker.com/compose/install/ and change the output path from /usr/local/bin/docker-compose to /opt/bin:
sudo curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
Make executable:
sudo chmod +x /opt/bin/docker-compose
Now you have docker-compose :)
Here it is, the best way I found:
core@london-1 ~ $ docker pull dduportal/docker-compose
core@london-1 ~ $ cd /dir/where-it-is-your/docker-compose.yml
core@london-1 ~ $ docker run -v "$(pwd)":/app \
-v /var/run/docker.sock:/var/run/docker.sock \
-e COMPOSE_PROJECT_NAME=$(basename "$(pwd)") \
-ti --rm \
dduportal/docker-compose:latest up
done!
Well, CoreOS supports Docker, but it is bare-bones Linux with clustering support, so you need to include a base image for all your containers (use FROM, and in the Dockerfile you might also need to RUN yum -y install bzip2 gnupg, etc.) that has the bins and libs needed by your app and redis (better to take some Ubuntu base image).
You can put all of them in one container or in separate ones; if you keep them separate, you need to link the containers and optionally mount volumes. Docker has some good notes about this (https://docs.docker.com/userguide/dockervolumes/).
Lastly, you need to write a cloud-config that specifies the systemd units. In your case you will have 3 units started by systemd (systemd replaces the good old init system in CoreOS), and you feed it to coreos-cloudinit (tip: coreos-cloudinit -from-file=./cloud-config -validate=false). You also need to provide this cloud-config on the Linux bootcmd for persistence.
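A sketch of what one of those three units could look like in the cloud-config (the image name my/redis is a placeholder for whatever you build from ./redis):
#cloud-config
coreos:
  units:
    - name: redis.service
      command: start
      content: |
        [Unit]
        Description=Redis container
        After=docker.service
        Requires=docker.service
        [Service]
        ExecStartPre=-/usr/bin/docker rm -f redis
        ExecStart=/usr/bin/docker run --name redis -p 6379:6379 -v /data/redis:/data my/redis
        ExecStop=/usr/bin/docker stop redis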
Currently, the easiest way is to use docker-compose against a CoreOS Vagrant VM. You just need to make sure to forward the Docker port.
If you are not particularly attached to using docker-compose, you can try CoreOS running Kubernetes. There are multiple options and I have implemented one of those for Azure.
When using docker-compose with Fedora CoreOS, you may run into issues with Python; however, running docker-compose from a container works perfectly.
There is a handy bash wrapper script and it is documented in the official documentation here: https://docs.docker.com/compose/install/#alternative-install-options under the "Install as a container" section.