I have managed to run the latest Elasticsearch on Kubernetes with only ONE pod. I would like to extend this to a full-blown Elasticsearch cluster on Kubernetes. I have checked out https://github.com/pires/kubernetes-elasticsearch-cluster, but it is not maintained anymore and does not use the latest ES Docker image. I tried to use the .yaml files from that repo with the latest ES image from Docker Hub, but have not been able to get the cluster up. Any advice and insight is appreciated.
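For context, here is roughly the direction I have been attempting, as a minimal sketch rather than a tested setup (the names, image tag, replica count, and discovery values are my assumptions; persistent storage and the usual vm.max_map_count init container are omitted for brevity):

```yaml
# Headless service so each pod gets a stable DNS name for discovery.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - name: transport
      port: 9300
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # assumed tag
          env:
            # ES 7+ discovery settings; the old pires manifests predate
            # these, which may be why they fail with current images.
            - name: discovery.seed_hosts
              value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
          ports:
            - name: http
              containerPort: 9200
            - name: transport
              containerPort: 9300
```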
See this. I've answered that question and it might be what you are looking for.
I have a 2-node NiFi cluster running in a Kubernetes cloud environment, along with a registry. Currently the cluster and the registry are running version 1.16.2. I want to upgrade them to 1.17.0, and was wondering whether there are any steps I need to take.
I have a YAML file for both NiFi and the registry, configured to our needs, and use them to deploy to the cloud. Am I overthinking this? Could I just point them at the new image versions?
I tried a simple image version change, and everything works fine on a local instance, but I'm not sure this is best practice.
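Concretely, the only change I tried was the image tag in each of our two manifests:

```yaml
# Resource names here are placeholders for our actual deployment.
spec:
  template:
    spec:
      containers:
        - name: nifi
          image: apache/nifi:1.17.0   # was apache/nifi:1.16.2
# ...and the same bump to apache/nifi-registry:1.17.0 in the registry manifest.
```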
I'm new to Docker and am currently working on dockerizing a simple ELK stack application at work. I've seen several tutorials on how to do this; however, my biggest issue is that I can't use just any existing Docker image, as this is corporate code. So, from my understanding, I'll need to dockerize/create three separate images of the ELK components from artifacts that we currently have available internally. My approach so far has been to get the RPMs (we're on RHEL7) and write a Dockerfile to install/expose them, etc.
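Roughly what I have in mind, using Elasticsearch as the example (the base image name, rpm filename, and paths are placeholders for our internal artifacts):

```dockerfile
# Placeholder internal RHEL7 base image -- we cannot pull public images.
FROM registry.example.internal/rhel7-base:latest

# Copy the internally built rpm into the image and install it from the
# local file, so nothing needs to be downloaded at build time.
COPY artifacts/elasticsearch-x.y.z.rpm /tmp/
RUN yum install -y /tmp/elasticsearch-x.y.z.rpm && \
    rm -f /tmp/elasticsearch-x.y.z.rpm && \
    yum clean all

EXPOSE 9200 9300
USER elasticsearch
CMD ["/usr/share/elasticsearch/bin/elasticsearch"]
```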
Reason for my approach: I am working behind a corporate firewall and proxy, and I don't know whether downloading an official Docker image is possible, or whether it would be compliant.
So far I've been unsuccessful; does anyone have experience doing this?
Thanks in advance!
It seems your environment cannot reach a Docker registry over the internet to download images, right? If you just need to get the Docker images for the ELK stack into your environment, see How to copy Docker images from one host to another without using a repository.
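In short, something like this (Elasticsearch shown as an example; the image tag is arbitrary):

```sh
# On a machine that can reach the registry:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.0
docker save -o elasticsearch.tar docker.elastic.co/elasticsearch/elasticsearch:7.17.0

# Move the tarball into your environment (scp, shared drive, ...), then:
docker load -i elasticsearch.tar
```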
Docker could not pull Kubernetes-related images (gcr.io/google_containers/kube-apiserver-amd64, gcr.io/google_containers/kube-controller-manager-amd64, gcr.io/google_containers/kube-scheduler-amd64, gcr.io/google_containers/kube-proxy-amd64, gcr.io/google_containers/etcd-amd64, etc.) while installing the Kubernetes master (kubeadm init) and workers (kubeadm join).
Unsetting the proxy didn't work.
Can anyone suggest what might be the issue?
Where are you located? If you are in mainland China, there are many mirrors you can use, e.g. AliCloud's.
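The usual trick is to pull each image from a mirror and re-tag it to the name kubeadm expects, along these lines (the mirror path and version tag here are assumptions; check what your kubeadm version actually requires):

```sh
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.9.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.9.0 \
           gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
# ...repeat for kube-controller-manager-amd64, kube-scheduler-amd64,
# kube-proxy-amd64, etcd-amd64, etc.
```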
I have deployed the cloudera/quickstart image for a single-node deployment with Docker. However, I would like to have a multi-node CDH deployment across 4 nodes using Docker. I am new to this, so if anyone has done the same, please let me know how that can be achieved.
Instructions and script now available:
http://blog.cloudera.com/blog/2016/08/multi-node-clusters-with-cloudera-quickstart-for-docker/
I had the same question (running a CDH cluster deployment in Docker), but I didn't find tools to do that for the latest CDH releases. That is why I prepared Docker images myself. I hope they will be useful to someone else.
To run a CDH cluster easily, you can use docker-compose. Just create a configuration file based on https://github.com/ipogudin/cloudera-cluster-docker/blob/master/docker-compose.yml. Please remember that you need to remove the build sections from each service's definition.
Note that you don't need to build the images locally (unless you want to customize them). You can use the pre-built images from Docker Hub (https://hub.docker.com/r/ipogudin/cloudera-cluster-gateway/).
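The shape is roughly the following; apart from the gateway, the service and image names here are illustrative, so take the real ones from the linked docker-compose.yml:

```yaml
version: "2"
services:
  gateway:
    image: ipogudin/cloudera-cluster-gateway   # prebuilt image from Docker Hub
    # build: ./gateway                         # <-- remove sections like this
    hostname: gateway
  master:
    image: ipogudin/cloudera-cluster-master    # illustrative name
    hostname: master
```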
We're thinking about using Mesos and Mesosphere to host our Docker containers. Reading the docs, it says that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
Is this a good way to solve it or does Mesosphere/DCOS or any of Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) node once it has booted? Or is this something we should avoid, instead using a "pre-baked image"?
In your own datacenter, using your favorite configuration-management tool such as Ansible, Salt, etc. is probably a good choice.
In the cloud it might be easier to use virtual machine images that already provide Docker; for example, DC/OS on AWS uses CoreOS, which comes with Docker out of the box. It shouldn't be too difficult with Ubuntu either...
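For the Ansible route, a minimal sketch (it assumes Ubuntu slaves in an inventory group I'm calling mesos_slaves; package and repo details vary by distro and Docker version):

```yaml
- hosts: mesos_slaves
  become: yes
  tasks:
    - name: Install Docker from the distro repositories
      apt:
        name: docker.io
        state: present
        update_cache: yes

    - name: Ensure the Docker daemon is running and enabled at boot
      service:
        name: docker
        state: started
        enabled: yes
```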