Kubernetes Guestbook Example Not Loading Page

New question:
I've followed the guestbook tutorial here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/README.md
And the output of my commands matches theirs exactly. When I try to access the guestbook web server, the page does not load.
Specifically, I have the frontend on port 80, I have enabled HTTP/S connections in the console for all instances, and I have run the command:
gcloud compute firewall-rules create --allow=tcp:<PortNumberHere> --target-tags=<TagNameHere> <TagNameHere>-<PortNumberHere>
and also
cluster/kubectl.sh get services guestbook -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
But when I run curl -v http://<ExternalIP>:<PortNumberHere>, the connection simply times out.
What am I missing?
Old Question - Ignore:
Edit: Specifically, I have 3 separate Docker images. How can I tell Kubernetes to run these three images?
I have 3 Docker images which depend on one another to perform their tasks. One is InfluxDB, another is a web app, and the third is an engine that does data processing.
I have managed to get them working locally on my machine with docker-compose, and now I want to deploy them on Google Compute Engine so that I can access them over the web. I also want to be able to scale the software. I am completely, 100% new to cloud computing, and have never used GCE before.
I have looked at Kubernetes and followed the docs, but I cannot get it to work on a GCE instance. What am I missing/not understanding? I have searched and read all the docs I could find, but I still don't feel any closer to getting it than before.
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/gce.md

To get the best results on SO, you need to ask specific questions.
But to answer a general question with a general answer: Google Cloud Platform's Kubernetes wrapper is Container Engine. I suggest you run through the Container Engine tutorials, paying careful attention to the configuration files, before you attempt to implement your own solution.
See the guestbook to get started: https://cloud.google.com/container-engine/docs/tutorials/guestbook

To echo what rdc said, you should definitely go through the tutorial, which will help you understand the system better. But the short answer to your question is that you want to create a ReplicationController and specify the containers' information in the pod template.
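To make that concrete, here is a rough sketch of a ReplicationController manifest with all three containers in one pod template (the image names and ports are placeholders, not anything from your actual setup):

cat > my-app-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: influxdb
        image: influxdb          # placeholder image names
        ports:
        - containerPort: 8086
      - name: webapp
        image: yourrepo/webapp   # hypothetical
        ports:
        - containerPort: 80
      - name: engine
        image: yourrepo/engine   # hypothetical
EOF
kubectl create -f my-app-rc.yaml

Note that containers in one pod always scale together; if the three pieces should scale independently, give each its own ReplicationController and connect them with services.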

How to expose Docker and/or Kubernetes ports on DigitalOcean

First off, I want to say I am in no way inexperienced; I am a professional, and I have been Googling this issue for a week. I've followed tutorials and also largely found threads on this site that tell people they're asking for free labor and that the answer is on Google. The answer is not on Google, so please bear with me. I have been working on my "homework," as people like to say here, and I am missing something significant.
My use case: I want to run code-server and JupyterLab as browser-accessible services on a DigitalOcean droplet OR Kubernetes cluster. I would like to do this in a way that allows as much of my budget for hosting as possible to be used for processing software (I write Python machine learning/natural language code). My ideal setup is that I have a subdomain, with SSL (LetsEncrypt is fine), for code-server and another for JupyterLab. Ideally they can access the same storage, but that's a secondary concern for the moment. I'd be okay with not having a domain and just passing traffic through OpenVPN to an IP and ports, but code-server just won't run full featured without SSL.
The actual problem: on nearly every attempt to implement this, I have found that I cannot access ports. On a good attempt, I manage to get one service (often something like Python http.server) where going to my domain or IP/port gets me anything other than "connection refused" instantly. I've checked firewall settings (I don't use DigitalOcean's, and I have consistently opened the ports that my native services and/or Docker containers are listening on or being forwarded to). The best I pulled off was using Kubernetes, following a tutorial: I got code-server and two example sites running in separate subdomains (pointed using a load balancer, and yes, I have a fully registered domain on DO's name servers).
There was a problem however: I couldn't get LetsEncrypt to issue a certificate on Kubernetes and I didn't know how to get it into the container for code-server.
That gets me to my next problem, which is relevant because I'm not sure this is entirely a Kubernetes problem: I have not successfully exposed a port on any Linux distro in the past four years. I used to administer multiple sites on a single Linode from 2012-16 or so, and it was no problem (although probably quite insecure), but now I'm talking about not even being able to expose ports on IP addresses. Something in how cloud providers handle things has changed. I know AWS, GCloud, etc. isolate their VMs on private networks, but that's not what DO, Linode, or Vultr do, and yet I can't so much as expose a port successfully, even if I follow port-exposing tutorials for the distro in question. I've literally used Rancher to launch a Docker container on a port, managed by the OS, and verified that the port is exposed, and it just doesn't work. With Kubernetes, SOMETIMES the load balancer helps here. I was also able to get a full server up on FreeBSD, but too much of what I need to run depends on Docker and Node, which sadly haven't been ported well to that system.
I want to note that I've also searched StackOverflow and found other people with similar issues, but their questions were all closed and they were told to Google; Googling turns up DO tutorials and the closed StackOverflow threads. I should note I've also tried to do this on Google Cloud and Linode with similar results.
ALSO: I'm aware Docker containers are isolated by default from the OS network and have followed guidelines for deployment to make sure their OS-native ports are forwarded.
tl;dr: I'm having trouble exposing ports despite following OS procedures; I am not sure whether my personal development server, for just me to use, should be a Kubernetes cluster or a single server with a Docker deployment; and I don't know how to route ports to subdomains for the two apps I want to expose if I'm not using a Kubernetes load balancer. Please don't close this as somehow "too broad" when it's an incredibly narrow situation, other people have had it, and I've been doing my research for a week.
You can find how to do it here:
https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#ssl-certificates
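As a sketch of what that page describes, a Service of type LoadBalancer can terminate SSL at the DigitalOcean load balancer via annotations (check the annotation names against that page, and the certificate ID is a placeholder for one you have created in your DO account):

cat > code-server-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: code-server
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # Placeholder: ID of a certificate created/uploaded in your DO account
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
spec:
  type: LoadBalancer
  selector:
    app: code-server        # assumes your pods carry this label
  ports:
  - name: https
    port: 443
    targetPort: 8080        # code-server's default port
EOF
kubectl apply -f code-server-lb.yaml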

How to find a Docker image on Docker Hub?

I am new to Docker. Using Kitematic, how can I set up a Docker container containing the following?
Apache, Memcached, MySQL, Nginx, PHP FPM
Should I find one single image with all of these? If so, how do I find that on https://hub.docker.com? It doesn't seem possible to filter by the above requirements.
Or should I install these as separate containers?
Bart,
I don't know anything about Kitematic, but I can give you some general information to clear things up.
The general consensus is to run only a single process per container. There are lots of discussions and plenty of information around why this would be good or bad; one such discussion, for example: https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container.
That said, these are the images I would choose for an environment with the software you described above:
Memcached: https://hub.docker.com/_/memcached
MySQL: https://hub.docker.com/_/mysql
Nginx: https://hub.docker.com/_/nginx
PHP FPM: https://hub.docker.com/_/php
How do I get these images? I go to hub.docker.com and search for the software I want. I then start with the official images and see if they suit my needs. If they do, great! Otherwise, I look for non-official images, and if I still don't find what I want, I extend the existing images by creating a custom image based on one from hub.docker.com.
Some more explanation about the last one, PHP. PHP is distributed with multiple tags. By going to the Docker Hub page (the 'description' tab) you can see the supported tags. Clicking the tag you are interested in will lead you to a GitHub repo where the Dockerfile is hosted. This file contains the commands used to construct the image you are researching. You can check all the tags to see which one installs the software you need. For example, there are PHP tags where Apache is installed (e.g. 7-apache) and there are tags where FPM is installed (e.g. 7-fpm).
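For instance, pulling the two variants mentioned above:

docker pull php:7-fpm
docker pull php:7-apache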
Hope this helps you with your research about which images to use!
You need to run those images within the same Docker network, through docker-compose (and its associated docker-compose.yml), such as the sketch below.
Docker-compose support in the Kitematic UI, though, is still an open issue.
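A minimal sketch for the stack in the question (the nginx config file and the ./app web root are assumptions to make the example concrete; adjust both to your project):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # assumed local config
      - ./app:/var/www/html
  php:
    image: php:7-fpm
    volumes:
      - ./app:/var/www/html
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # example value only
  memcached:
    image: memcached
EOF
docker-compose up -d

Each service name (nginx, php, mysql, memcached) doubles as a hostname on the compose network, which is how the containers reach one another (e.g. nginx passes PHP requests to php:9000).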
You can't find all of these as one image. What you can do is create a docker-compose file and add all those independent images to the compose file.
This way you can handle all your containers as services in a single place, with their dependencies too.
For further info refer to https://docs.docker.com/compose/

Dockerizing composer-playground with deployed (embedded) business network archive

I found out there is hyperledger/composer-playground as a docker image. It's easily startable using
docker run --name composer-playground --publish 8080:8080 --detach hyperledger/composer-playground
Now I want to make a Dockerfile out of it that can serve an existing Business Network Definition as demo application. It should be embedded, so no real Fabric network is required. What possibilities do I have to accomplish that?
First idea: Card file structures could be copied into /home/composer/.composer/cards, but as far as I understand, these cards could only have the embedded connection type; otherwise a real Fabric network is required.
Second idea: Is there some API endpoint that could be queried to create an embedded network for a .bna file?
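To make the first idea concrete, a Dockerfile sketch (untested; the local cards/ directory is an assumption, and whether the playground accepts pre-seeded cards without a real Fabric is exactly the open question):

cat > Dockerfile <<'EOF'
# Sketch: bake exported card files into the playground image
FROM hyperledger/composer-playground
COPY cards/ /home/composer/.composer/cards/
EOF
docker build -t composer-playground-demo .
docker run --name composer-playground --publish 8080:8080 --detach composer-playground-demo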
Interesting idea, and with the direction of Composer Playground cropping up a bit recently, it would be a good one to discuss on a Composer community call.
As for how things are now, I think you'll have to set everything up with a real Fabric. I haven't seen a Dockerfile that does that, but it seems doable. The hosted playground does everything in local storage and PouchDB (IndexedDB), so I don't think you would be able to get a demo .bna in there without changes to the playground.
One thing that I had pondered in the past was making it possible to configure where the playground looks for sample networks, and that could even include the primary 'get started' network.
Might that help in this case? It could be worth opening a GitHub issue to explore the use cases if that does sound useful (pull requests gratefully accepted!).

Docker, Jenkins and Rails - Setup for running specs on a typical Rails stack

I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit; a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable Docker/Jenkins/Rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, docker-compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
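Following the pattern from that guide, a starting Dockerfile might look like this (the Ruby version and package list are assumptions; match them to your app):

cat > Dockerfile <<'EOF'
FROM ruby:2.5
# Build tools, Postgres client headers, and a JS runtime for the asset pipeline
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
WORKDIR /myapp
# Copy the Gemfile first so the bundle layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
CMD ["bin/rails", "server", "-b", "0.0.0.0"]
EOF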
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
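Concretely (the repository name is a placeholder):

docker build -t your-hub-user/myapp:latest .
docker login
docker push your-hub-user/myapp:latest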
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname in the image. As @tomwj suggested, you can search on Docker Hub for common images like postgres and Redis and find them pretty easily. You'll probably need to configure a new Rails environment with new hostnames and so on in order to get all the service hostnames configured correctly.
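A sketch of such a compose file (the image tags, hostname variables, and the ci environment name are all assumptions; wire them up to match your database.yml and Rails config):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: your-hub-user/myapp:latest    # the image you pushed earlier
    environment:
      RAILS_ENV: ci                      # assumed custom environment
      DATABASE_HOST: db                  # service names double as hostnames
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - db
      - redis
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example         # example value only
  redis:
    image: redis
EOF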
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
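For example, something along these lines before the specs run (standard Rails tasks; adjust for your other data stores):

docker-compose run --rm app bundle exec rake db:create db:migrate db:seed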
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
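If you go the xvfb route, the usual trick is to wrap the test command in xvfb-run (this assumes the xvfb package is installed in your application image):

docker-compose run --rm app xvfb-run -a bundle exec rspec spec/features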
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up; hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, then test a Rails application in a Dockerised environment:
Provision the stationary machines; I'd suggest using Ansible Galaxy roles to:
Install Jenkins
Install Docker
Set up a local Docker registry
Set up the Docker environment; the way to bring up multiple containers is to use docker-compose. This will allow you to bring up the DB, Redis, Rails, etc. using the public Docker Hub images.
Create a Jenkins pipeline (a sketch of the stages follows this list).
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker swarm from the local Docker registry.
Test; run the tests against the now-running application.
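A rough sketch of what those pipeline stages might run (the registry address, image name, and service name are all placeholders; this assumes the local registry from the provisioning step and a Docker swarm service):

# Build: produce the image and store it in the local registry
docker build -t localhost:5000/myapp:$BUILD_NUMBER .
docker push localhost:5000/myapp:$BUILD_NUMBER
# Deploy: roll the swarm service onto the new image
docker service update --image localhost:5000/myapp:$BUILD_NUMBER myapp
# Test: run the specs against the new build
docker run --rm localhost:5000/myapp:$BUILD_NUMBER bundle exec rspec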
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.

Setup commands for Mesos and Kubernetes on Docker?

When trying to move a web container (Tomcat) to the latest technologies for better growth and support, I came across this blog. This part seems ideal for my needs:
... we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.
Now, how do I set up a local test environment to try this out? All these technologies seem interchangeable! I can run Docker on Mesos, Mesos on Docker, etc. Prepackaged instances allow me to run on other clouds. Other videos also make this seem great! Running out on the cloud is not a viable (allowed) option for me. Unfortunately, I cannot find instructions on how to set up the configuration described/marketed/advertised.
If I am new to these technologies, and know there will be a learning curve, is there a way to get initialized for doing such a "simple task": running a Tomcat container on a Docker machine that is running Mesos/Kubernetes? That is, without spending days trying to learn and figure out each individual part! (The blog post referenced above includes an architecture diagram of this setup.)
Assume that I "only" know how to create Docker containers (for, say, centos-7). What commands, in what order (i.e. the secret 'code'), do I need to use to configure a small (2- or 3-node) local environment to try out running Tomcat?
Although I searched quite a bit, apparently not enough! Someone pointed me to this:
https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/mesos-docker.md
which is pretty close to exactly what I was looking for.
