I have two questions, as I am currently trying to learn Minikube and now want to install it:
1- Which driver is preferable for Minikube (KVM or Docker)? Does one have some sort of advantage over the other?
2- Is it possible to install and run Minikube inside a VM managed by KVM?
1 - There is no "better" or "worse". Docker is the default driver and, as such, the best-supported option.
2 - Yes, it is possible to run Minikube inside a VM.
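As a quick sketch, the driver is selected when starting the cluster (driver names per the Minikube drivers documentation; the kvm2 driver additionally requires libvirt on the host):

```shell
# With the Docker driver (the default on most setups):
minikube start --driver=docker

# With the KVM driver (requires libvirt and the kvm2 driver installed):
minikube start --driver=kvm2

# Optionally make one driver the default for future starts:
minikube config set driver kvm2
```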
Found more answers to my question after some digging in the documentation.
https://minikube.sigs.k8s.io/docs/drivers/
There are several drivers available, but when running Minikube inside a VM the preferred driver is either none or ssh. I initially ran into some networking issues with the docker driver, but resolved them after digging further into the documentation. Otherwise the docker driver is fine, provided you are comfortable resolving networking issues between the host and the guest (VM).
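For illustration, a minimal sketch of both in-VM options (the IP and user in the ssh example are assumptions — 192.168.122.x is just the default libvirt NAT subnet; the none driver requires root and an existing container runtime in the guest):

```shell
# Inside the KVM guest: run the Kubernetes components directly on the
# VM's OS, with no extra isolation layer (requires root):
sudo minikube start --driver=none

# Or, from the host, treat the VM as a remote machine via the ssh driver
# (assumes key-based SSH access to the guest):
minikube start --driver=ssh --ssh-ip-address=192.168.122.10 --ssh-user=ubuntu
```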
I am working on a platform which my company can use to host containerized applications for our own purposes.
We use the following solution: PXE server -> PXE boot CoreOS -> Docker -> Ceph cluster in Docker containers.
Everything works great, we have built our own provisioning-service which uses Ignition-files to configure the host. The last step (Mounting Ceph Block Device) is the biggest issue for me.
When I mount it in CentOS 7 it's pretty simple: I only need to install ceph-common and everything works like a charm. But now I need to be able to mount it inside a Docker container on CoreOS.
What is really the best practice to achieve this? I would really appreciate an example or a link to an article about it, as every guide I come across is 3 or 4 years old and the solutions no longer work.
CoreOS is specifically designed not to have packages installed on it directly, but instead to have systems composed on top of it using containers.
To use Ceph on CoreOS then, you need to use containers to run the Ceph applications on the hosts and mount the required devices and host paths into the container. There is a basic overview (though somewhat out-of-date, being from 2015) in the Ceph blog.
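As an illustrative sketch of that pattern (the image name, pool/image names and paths are assumptions, not taken from the question): run the Ceph client tooling in a privileged container that gets the host's devices and cluster configuration, then map and mount the RBD device:

```shell
# Run a container that has the Ceph client tools installed; it needs the
# host's /dev and the cluster config/keyring mounted in. The rbd kernel
# module must be loaded on the CoreOS host first (modprobe rbd).
docker run --rm -it \
  --privileged \
  --net=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph \
  --entrypoint /bin/bash \
  ceph/daemon

# Inside the container: map the block device, then mount it.
rbd map mypool/myimage      # creates e.g. /dev/rbd0 on the shared /dev
mkdir -p /mnt/ceph
mount /dev/rbd0 /mnt/ceph
```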
So, I ran a Docker image with certain settings a while ago. In the meantime I updated my container settings via "docker update".
Now I want to see which options/configurations (e.g. cpuset, stack, swap) are currently set for my container.
Is there a docker command to check this?
If not, (why the hell isn't there one, and) where exactly can I find this information?
I am running Docker 18.03.1-ce on Debian 9.4.
Greetings,
Johannes
I found it out by myself.
To get detailed information about a container's settings, one can use:
docker inspect <container-id>
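A short sketch of narrowing the output down to the fields changed by `docker update` (they live under HostConfig; the container name web is an assumption):

```shell
# Full settings dump (JSON), including everything changed by `docker update`:
docker inspect web

# Extract just the resource limits via a Go template:
docker inspect --format \
  'cpuset={{.HostConfig.CpusetCpus}} memory={{.HostConfig.Memory}} swap={{.HostConfig.MemorySwap}}' \
  web
```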
My question revolves around the following problem/error.
Service/Service[jenkins]: Provider redhat is not functional on this host. — or, directly, that D-Bus is not available.
Let's say, for instance, I'm running Packer, which invokes a puppet-masterless provisioner on a Docker builder.
The Puppet code base and contrib modules will, for the most part, attempt to manage the 'service' of the installed module. Let's take Jenkins as an example: the Jenkins Puppet module, although good, will fail on Packer builds against a CentOS 7 + Puppet Docker host, as systemctl is not available.
At this moment I'm confused how this will ever work for Puppet/Ansible code bases which attempt to manage services, without considerable extra effort in the codebase.
I have considered running the container with /sbin/init, but that still feels a bit hacky.
Can anyone shed any light on this issue for me?
I am using Ansible code to provision real machines or Docker containers. To get by without systemd / D-Bus, I created the docker-systemctl-replacement.
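For comparison, the /sbin/init approach mentioned in the question looks roughly like this — the common (if somewhat hacky) CentOS 7 recipe, with the mount and flags stated here as assumptions rather than a definitive setup:

```shell
# Start a container that boots systemd as PID 1; it needs the host's
# cgroup filesystem mounted read-only and extra privileges:
docker run -d --name c7-systemd \
  --privileged \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  centos:7 /usr/sbin/init

# Now systemctl works inside, so Puppet/Ansible service resources can succeed:
docker exec c7-systemd systemctl status
```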
This may be a stupid question.
Does Hyperledger Fabric require Docker for its operations?
I'm just wondering whether Docker is needed only if we run the Fabric peer, orderer or CouchDB as virtual machines on the same physical machine. I think Docker might not be necessary if we install those components (peer, orderer, CouchDB, etc.) natively, whether on separate servers or on the same server.
Thank you.
Just so this point does not go unnoticed, while you do not need to run the peer in a Docker container, endorsing peers (the ones which run chaincode) need access to a Docker daemon (ideally on the same host). Chaincode is currently only deployed via Docker containers.
The question as to whether Docker is required to run a peer, orderer, fabric-ca, etc. depends on what effort you are willing to expend.
The Hyperledger Fabric community publishes stable, tested Docker images for X86, PowerPC and s390 (mainframe) architectures for each of its releases. These images are based on Ubuntu.
To use the Hyperledger Fabric published release images, you need Docker and some form of orchestration support. For sample use cases, we provide some simple Docker Compose definitions. Hyperledger Cello and other provisioning platforms such as the IBM sandbox, provide kubernetes helm charts.
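As a sketch of the published-image route (the 1.4 tag is just an example; pick the release you need):

```shell
# Pull the published release images for a given release:
docker pull hyperledger/fabric-peer:1.4
docker pull hyperledger/fabric-orderer:1.4
docker pull hyperledger/fabric-ca:1.4

# The sample Docker Compose definitions live in the fabric-samples repo:
git clone https://github.com/hyperledger/fabric-samples.git
```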
It is possible to build the binaries outside of their Docker images without modifying the source. However, if you wish to build for an alternative OS (e.g. Windows, RHEL, CentOS, etc.) then you will need to modify the build process. It can and has been done, though. I suggest you reach out to the hyperledger-fabric@lists.hyperledger.org mailing list to see if anyone in the community who has built for an alternative deployment will share their work.
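Building the binaries outside Docker is roughly the following (make targets come from the Fabric repo's Makefile; the output path may differ per release branch, and a Go toolchain plus make are assumed):

```shell
# Clone the source and build native peer/orderer binaries:
git clone https://github.com/hyperledger/fabric.git
cd fabric
make peer orderer

# Binaries land under the build output directory (layout varies by release):
./build/bin/peer version
```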
Starting with HLF 2.0, things have changed. According to the documentation, chaincode can also run in 'external containers'.
https://hyperledger-fabric.readthedocs.io/en/release-2.0/cc_launcher.html
Yes, it is the second heading on the prerequisites page at http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html
Docker and Docker Compose
We're thinking about using Mesos and Mesosphere to host our Docker containers. Reading the docs, it says that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave
node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
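A minimal sketch of that approach with Ansible ad-hoc commands (the "slaves" inventory group and the docker.io package name are assumptions — package names differ per distro):

```shell
# Install Docker on every host in the inventory group "slaves"
# (Debian/Ubuntu package name; use "docker-ce" or similar elsewhere):
ansible slaves -b -m apt -a "name=docker.io state=present update_cache=yes"

# Then verify the installed version meets the >= 1.0.0 prerequisite:
ansible slaves -b -a "docker --version"
```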
Is this a good way to solve it or does Mesosphere/DCOS or any of Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on the Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) instance once it has booted? Or is this something we should avoid, instead just using a "pre-baked image"?
In your own datacenter, using your favorite configuration tool such as Ansible, Salt, ... is probably a good choice.
On the cloud it might be easier to use virtual machine images that provide Docker; for example, DC/OS on AWS uses CoreOS, which comes with Docker out of the box. It shouldn't be too difficult with Ubuntu either...