I'm a bit new to Mesos / Marathon and I'm trying to integrate it with my Docker images.
So far: Mesos 0.21 for slave and master, Marathon 0.7.5 and, of course, ZooKeeper.
I've succeeded in adding my Docker images with curl but, unfortunately, I have two main issues:
Even though I've built my image locally (in this case a Tomcat 7 Docker image) and can see in the Marathon config that it is taken into account, the Docker image that gets started is not the one expected: it is always an ubuntu:latest image.
How do I manage Docker port forwarding? Are we forced to use a solution such as HAProxy? I see that my Mesos slave always uses the same port range (31000-32000) for started containers.
Thank you everyone for your support.
Here is an answer from ConnorDoyle found on IRC in #mesos:
ConnorDoyle: Mesos comes with a Docker containerizer that always pulls from a docker registry.
You can configure the registry that dockerd pulls from in the usual way (via a .dockercfg file)
Alex: So even if, for instance, everything is local?
ConnorDoyle: Yeah. You can use any image up on Dockerhub (the default registry for dockerd) or you can set your own.
AlexFR: So I should define a private registry?
AlexFR: or push it to Dockerhub
ConnorDoyle: Yes, because it assumes you're on a big cluster and you want to fetch the image from somewhere when the job starts :)
ConnorDoyle: Yeah, pushing to dockerhub is easier probably.
This answers the first question.
Concerning the second, it seems that HAProxy is the "standard approach": https://mesosphere.github.io/marathon/docs/service-discovery-load-balancing.html
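For the port question specifically, Docker port mappings are declared in the Marathon app definition itself: a hostPort of 0 lets Marathon pick a port from the slave's offered range (hence the 31000-32000 ports you see), and HAProxy or another service-discovery layer then routes traffic to it. As a rough sketch, with the Marathon host and image name as placeholders to adapt:

# Sketch of a Marathon (0.7.x) Docker app definition posted with curl
curl -X POST http://marathon.example:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "tomcat7",
    "cpus": 0.5,
    "mem": 512,
    "instances": 1,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "mydockerhubuser/tomcat7",
        "network": "BRIDGE",
        "portMappings": [
          { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
        ]
      }
    }
  }'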
I've set up a Docker Swarm consisting of two VMs on my local network (1 manager, 1 worker). In the manager node, I created a private registry service, and I want to deploy a number of locally built images from my local dev machine (which is not in the swarm) to that registry. The Swarm docs and the dozens of examples I've read on the Internet don't seem to go beyond the basics: running commands inside the manager node, building, tagging and pushing images from the manager's local cache to the registry on that same node. I have that uneasy feeling that I'm missing something right in front of my face.
I realize that my machine could simply join the swarm as a manager, owning the registry. The other nodes would automagically receive the updates and my problem would go away. But does this make sense for a production swarm setting, a cluster of nodes serving production code, depending on my dev's home machine, even as a non-worker, manager-only node?
Things I've tried:
Retagging my local image to <my.node.manager.ip>/my_app:1.0.0, followed by docker-compose push. I can see this does push the image to the manager's registry, but the service fails to start with the message "No such image: <my.node.manager.ip>/my_app:1.0.0"
Creating a context and, from my machine, running docker-compose --context my_context up --no-start. This (re)creates the image in the manager node's local cache, which I can then push to the registry, but it feels very unwieldy as a deploy process.
Should I run a remote script in the manager node to git pull my code and then do the build/push/docker stack deploy?
TL;DR What are the expected steps to deploy an image/app to a Docker Swarm from a local dev machine outside the swarm? Is this possible? Is this supported by Docker Swarm?
After reading a bit more on private registries and tags, I could finally wrap my head around the tagging necessary for my use case to work. My first approach was halfway right, but I had to change my deploy script (sketched below) so as to:
Extract the image field from my docker-compose.yml (in the form localhost:5000/my_app:${MY_APP_VERSION-latest}, which circumvents the "No such image" error)
Create a second tag for pushing to the remote registry, replacing "localhost" with my manager node's address (where the registry is)
Tag my locally built image with that tag and docker-compose push it
Deploy the app with docker --context <staging|production> stack deploy my_app
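Putting those steps together, a minimal sketch of such a deploy script could look like this (the manager address, registry port and image name are placeholders to replace with your own):

#!/bin/sh
# Hypothetical deploy script for the workflow above
MANAGER=my.node.manager.ip                     # manager node hosting the registry
VERSION=${MY_APP_VERSION-latest}
IMAGE=localhost:5000/my_app:${VERSION}         # the image field from docker-compose.yml

# Build locally, add a second tag pointing at the manager's registry, and push it there
docker-compose build
docker tag ${IMAGE} ${MANAGER}:5000/my_app:${VERSION}
docker push ${MANAGER}:5000/my_app:${VERSION}

# Deploy the stack through the remote context
docker --context staging stack deploy -c docker-compose.yml my_app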
I'm answering myself since I did solve my original problem, but would love to see other DevOps implementations for similar scenarios.
I've researched around before asking here, but all answers lead me to the same conclusion:
Build your Docker Compose stack locally
Tag and push the images to a registry (either a private or public one like Docker Hub)
Push the stack to the swarm using docker stack deploy --compose-file docker-compose.yml stackdemo
From here, the stack picks up and "pulls" the images from the registry and runs the containers
Is there no straightforward way to make the following (I think common sense) scenario work seamlessly?
Docker swarm manager has access (SSH keys) to pull the project from Git.
It periodically pulls the project and builds it "locally" using docker-compose up
When the build succeeds (containers are ready), it pushes the stack to the swarm using docker stack deploy, propagating the images to all worker nodes.
That way, the original "source code" is only known to the manager node, and only it has direct access to the Git repository.
Maintaining a registry (or paying for a cloud one) seems like a huge disadvantage of using Docker in Swarm Mode.
Side note: I've tried the approach of deploying a registry as a service within the stack and tagging + pushing the images to 127.0.0.1/myimage, but that led to a different set of problems of its own, e.g. worker nodes that do not have an instance of the registry container running cannot pull the image (the registry would need to be replicated to all nodes).
Use the docker save and docker load commands to transfer images from your dev machine to all of your swarm machines.
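For example (the image name and host names are placeholders), something along these lines works from the dev machine:

# Export the image to a tar archive, copy it to a swarm node, and load it there
docker save -o my_app.tar my_app:1.0.0
scp my_app.tar user@swarm-node-1:/tmp/
ssh user@swarm-node-1 "docker load -i /tmp/my_app.tar"

# Or stream it directly without an intermediate file; repeat for each node in the swarm
docker save my_app:1.0.0 | ssh user@swarm-node-1 "docker load"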
Docker Swarm is an orchestrator intended for managing a cluster of Docker nodes. When a service is deployed, the Docker engine can start it on any node in the cluster (any node that satisfies the placement constraints, if provided). Now, if you don't have a registry and the images are only available locally on some nodes, Docker can't verify that all nodes will point to the same version of the image. Hence, swarm pulls the image from a registry and then deploys it to the nodes.
Having a registry also helps keep copies of images: the registry retains all versions even if images are pruned on one or all of the Docker nodes. You can also back up the registry, so there is little risk of losing any image that was built and pushed to it.
Starting a registry (at least on localhost) is very easy, but that's not what your question is about.
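(For completeness, spinning up a basic local registry is a single command using the official registry image:)

# Run a local registry container listening on port 5000
docker run -d -p 5000:5000 --restart=always --name registry registry:2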
Coming to your question: you can keep the stack compose file in the same directory as your Dockerfile, and in the compose file define a service that gets built when you deploy the stack:
version: "3.9"
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
Here, the line
build: .
builds the image with the given name, tagged for the registry on localhost; the image is not pushed automatically, which needs to be done manually.
So, in the Dockerfile you can run git pull or git clone and then run the commands to build your code, and so on.
Here's a link which provides simple steps to start a registry and build the image on the fly while deploying the stack:
https://docs.docker.com/engine/swarm/stack-deploy/
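Roughly, the workflow from that guide looks like this (a sketch; the registry and stack names follow the tutorial and may differ in your setup):

# Start a throwaway registry as a swarm service (as in the linked tutorial)
docker service create --name registry --publish published=5000,target=5000 registry:2

# Build the image defined in the compose file, then push it to that registry
docker-compose build
docker-compose push

# Deploy the stack; the nodes pull 127.0.0.1:5000/stackdemo from the registry
docker stack deploy --compose-file docker-compose.yml stackdemo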
Also, swarm mode does not work without a registry, so it's not possible to just save and load an image and use it with swarm orchestration.
I am currently working on the following scenario:
I am trying to set up a container in OpenShift that runs Jenkins and is itself able to run Docker, so I can use declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I have been working on it for quite some time now and have checked dozens of posts and threads online, but I have not been able to accomplish it. Basically this is how far I got:
I can install Docker in my container (from the base image openshift/jenkins-2-centos7:latest)
I can't get Docker to run, as starting it relies on systemctl. I've read that systemctl does not work inside Docker containers, or is at least strongly discouraged, as it interferes with PID 1 in the container. Without
systemctl start docker
I'm left with Docker being unable to connect to the daemon (as expected) and the error message
Can't connect to docker daemon. Is 'docker -d' running on this host?
So I tried to set up the daemon myself, using the following in my Dockerfile:
RUN usermod -aG docker $(whoami)
RUN dockerd -H unix:///var/run/docker.sock
This also does not work, telling me that cgroups cannot be mounted. After some more research I found that this could be handled with the cgroupfs-mount script from
https://github.com/tianon/cgroupfs-mount/tree/master
But here too I had no luck, ending up with the following error:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.4.21: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
Now, after hours, I am out of ideas. Does anyone have an idea how to make Docker work inside of OpenShift? I would be really grateful.
I am trying to set up a container in OpenShift that runs Jenkins and is itself able to run Docker, to make use of declarative pipelines where the build runs in its own Docker container. This basically makes it necessary to install and run Docker inside this container.
I don't think your conclusion here is the only possibility, and what I'll describe below is an easier approach to get what (I think) you want! :) If you have any other use cases than the three I'll describe, let me know and I'll try to update to cover them:
Pipelines running in their own containers
Running additional containers from Pipelines
Building container images from Pipelines
Pipelines running in their own containers
For this case, there's the excellent Kubernetes plugin.
With this plugin, you add a Kubernetes/OpenShift cloud to the Jenkins global config. This can either be the one in which Jenkins is running (if you use the Jenkins image provided by OpenShift, this gets added by default at least), or an external cluster.
Inside that configuration, you can define PodTemplates (again, there are a couple of examples provided in the Jenkins image from OpenShift), or I believe you can also specify them directly in your pipeline. When your pipeline requests a node/agent with a label that matches one of these (and there are no long-running agents that match), a pod will be created from that template, and your pipeline execution will happen inside a container in it. Once it's no longer needed, it will be deprovisioned again.
Here are the pipeline steps exposed by this plugin: https://jenkins.io/doc/pipeline/steps/kubernetes/
Running additional containers from Pipelines
As part of your pipeline, you may want to run some tests, and those may expect to be able to interact with e.g. a database. You can create resources for that in your OpenShift project (e.g. a Deployment, exposed with a Service) and tear them down afterwards. The openshift-client plugin is very useful here and has docs on how to interact with OpenShift.
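For illustration, the same thing done by hand with the oc CLI might look roughly like this (the database image, credentials and names are made up for the example; in a pipeline the openshift-client plugin drives the equivalent calls):

# Stand up a throwaway PostgreSQL instance for the tests
oc new-app postgresql --name=test-db \
  -e POSTGRESQL_USER=test -e POSTGRESQL_PASSWORD=test -e POSTGRESQL_DATABASE=testdb

# ...run the tests against the test-db service...

# Tear everything down again afterwards
oc delete all -l app=test-db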
Building container images from Pipelines
If your goal is to build container images from pipelines, remember that OpenShift also exposes this capability (depending on the security configuration) through Builds. Just like in the previous section, you can use the openshift-client plugin to create and trigger builds.
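As a rough sketch of that approach with the oc CLI directly (the repository URL and names are hypothetical; inside a pipeline the openshift-client plugin wraps the same calls):

# Create a Docker-strategy BuildConfig from a Git repo that contains a Dockerfile
oc new-build https://github.com/example/my-app.git --strategy=docker --name=my-app

# Trigger a build and stream its logs
oc start-build my-app --follow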
For more information on the Jenkins image that's maintained by OpenShift (and generally how to do useful things in Jenkins on OpenShift), there's this dedicated page in the OpenShift docs.
You have this article by @jpetazzo, from the Docker team, about Docker-in-Docker (DinD):
From the article:
The primary purpose of Docker-in-Docker was to help with the development of Docker itself. Many people use it to run CI (e.g. with Jenkins), which seems fine at first, but they run into many “interesting” problems that can be avoided by bind-mounting the Docker socket into your Jenkins container instead.
From the DinD repo:
This work is now obsolete, thanks to the combined efforts of some amazing people like @jfrazelle and @tianon, who also are black belts in the art of putting IKEA furniture together.
If you want to run Docker-in-Docker today, all you need to do is:
docker run --privileged -d docker:dind
And here is an article using another approach to build Docker containers with Jenkins running inside a Docker container, by bind-mounting the host's Docker socket:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
So you may want to adapt one of these solutions to your OpenShift scenario. I hope it solves your issue.
You'll need a privileged pod running Jenkins which mounts the OpenShift node's Docker socket. This is a bad idea, as Jenkins will launch containers outside of Kubernetes' semantics and control.
Why not use the S2I (source-to-image) build service shipped with OpenShift?
Regards.
I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker Hub to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall, and push images to or pull them from that private registry. This doc describes how to deploy a container registry. Alternatively, you can choose a registry service provided by a third party, such as GitLab's container registry.
When using Docker repositories, you only push/pull the layers which have been changed.
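In practice that flow looks something like the following (the registry address and image name are placeholders):

# On Server A (build host): tag the freshly built image for the private registry and push it
docker tag myapp:1.0 registry.internal.example:5000/myapp:1.0
docker push registry.internal.example:5000/myapp:1.0

# On Server B (production host): pull the image and run the container
docker pull registry.internal.example:5000/myapp:1.0
docker run -d --name myapp registry.internal.example:5000/myapp:1.0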
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make HTTP requests. Alternatively, you can run Docker commands directly against a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
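Once that variable is exported, any plain docker command in that shell runs against the remote daemon on Server B, for example (the image name is a placeholder):

# Both commands are executed by the remote daemon pointed to by DOCKER_HOST
docker ps
docker run -d --name myapp myapp:1.0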
Another method is to use SSH Agent Plugin in Jenkins.
A colleague found out about Docker and wants to use it for our project. I started to use Docker for testing. After reading an article about Docker Swarm I wanted to test it.
I have installed 3 VMs (Ubuntu Server 14.04) with Docker and Swarm. I followed some how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works. I can launch, for example, a basic Apache container (the image was pulled from the Docker Hub), but I want to use my own image (an Apache server with my web site).
I tried to load an image (after saving it to a .tar), but this option isn't supported by the clustering mode; the same goes for the import option.
So my question is: can I use my own image without pushing it to the Docker Hub, and how do I do this?
If your own image is based on a Dockerfile that you build, you can execute the build command on your project while targeting the swarm.
However, if the image wasn't built from a Dockerfile but created manually, you need a registry in between that you can push to, either Docker Hub or some other registry solution like https://github.com/docker/docker-registry
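As a rough sketch of "targeting the swarm" with the classic standalone Swarm of that era (the manager address and image name are placeholders):

# Point the Docker client at the Swarm manager instead of the local daemon
export DOCKER_HOST=tcp://swarm-manager.example:2375

# Build the image from the project's Dockerfile; the build is executed on one of the nodes
docker build -t my-apache-site .

# Run a container from it (the image only exists on the node that built it)
docker run -d -p 80:80 my-apache-site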