Does Hyperledger Fabric need Docker?

This may be a stupid question.
Does Hyperledger Fabric require Docker for its operation?
I'm wondering whether Docker is needed only if we run the Fabric peer, orderer, or CouchDB as containers on the same physical machine. I think Docker might not be necessary if we install that software (peer, orderer, CouchDB, etc.) natively, whether on separate servers or on the same one.
Thank you.

Just so this point does not go unnoticed: while you do not need to run the peer itself in a Docker container, endorsing peers (the ones which run chaincode) need access to a Docker daemon, ideally on the same host, because chaincode is currently deployed only via Docker containers.
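For illustration, the peer locates the Docker daemon through its vm.endpoint setting, which can be overridden with the CORE_VM_ENDPOINT environment variable. A minimal sketch, assuming the daemon listens on its default Unix socket:

# Point an endorsing peer at the local Docker daemon (default socket assumed)
export CORE_VM_ENDPOINT=unix:///var/run/docker.sock
peer node start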

Whether Docker is required to run a peer, orderer, fabric-ca, etc. depends on how much effort you are willing to expend.
The Hyperledger Fabric community publishes stable, tested Docker images for the x86, PowerPC, and s390 (mainframe) architectures with each of its releases. These images are based on Ubuntu.
To use the published Hyperledger Fabric release images, you need Docker and some form of orchestration support. For the sample use cases, we provide some simple Docker Compose definitions. Hyperledger Cello and other provisioning platforms, such as the IBM sandbox, provide Kubernetes Helm charts.
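To give a rough idea of what such a Compose definition looks like (the image tag and socket mount here are illustrative, not the official sample):

# docker-compose.yaml -- minimal single-peer sketch, not the official sample
version: '2'
services:
  peer0:
    image: hyperledger/fabric-peer:1.4
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    volumes:
      - /var/run/docker.sock:/host/var/run/docker.sock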
It is possible to build the binaries outside of their Docker images without modifying the source. However, if you wish to build for an alternative OS (e.g. Windows, RHEL, or CentOS), you will need to modify the build process. It can be and has been done. I suggest reaching out to the hyperledger-fabric@lists.hyperledger.org mailing list to see whether anyone in the community who has built for an alternative deployment will share their work.
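Roughly, the in-tree build looks like this (exact make targets and output directory vary by release):

# Build native binaries from the Fabric source (Go toolchain required)
git clone https://github.com/hyperledger/fabric.git
cd fabric
make peer orderer    # binaries land in the build output directory, e.g. .build/bin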

Starting with HLF 2.0, things have changed: according to the documentation, chaincode can also run in 'external containers'.
https://hyperledger-fabric.readthedocs.io/en/release-2.0/cc_launcher.html
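The launcher is wired up through an externalBuilders stanza in the peer's core.yaml; a minimal excerpt, with a hypothetical builder name and path (see the cc_launcher page for the real contract):

# core.yaml excerpt -- builder name and path are hypothetical
chaincode:
  externalBuilders:
    - name: my-builder
      path: /opt/fabric/builders/my-builder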

Yes, it is the second heading on the prerequisites page at http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html
Docker and Docker Compose

Related

Is Docker an infrastructure-as-code technology because it virtualizes an OS to handle multiple workloads on a single OS instance?

I have come across the term IaC many times while learning DevOps, and when I googled it, I found that it is the practice of managing and provisioning computer data centers through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. So is Docker also an infrastructure-as-code technology, given that it virtualizes an OS to handle multiple workloads on a single OS instance? Thanks in advance.
I'm not sure exactly what you are asking, but Docker provides infrastructure as code because its behavior is defined via Dockerfiles and shell scripts. You don't install a list of programs manually when defining an image, and you don't configure anything with a GUI in order to create an environment when you pull an image from Docker Hub or deploy your own.
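A trivial Dockerfile illustrates that declarative style (contents are made up for the example):

# Dockerfile -- the whole environment is declared in code, nothing is clicked together
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y nginx
COPY ./site /var/www/html
CMD ["nginx", "-g", "daemon off;"]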
And as said in another answer, Docker is not virtualization: everything actually runs in your host's Linux kernel, just with limited resources in its own namespace. You can see a container's process via htop on the host machine, for instance. There is no hypervisor, and the overhead is minimal.
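You can check this yourself; a quick sketch:

# Start a container, then look for its process in the host's process table
docker run -d --name demo nginx
ps aux | grep nginx    # the container's nginx shows up as an ordinary host process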
I think you have misunderstood the concept: Docker is not a hypervisor, and containers are not VMs.
From this page: https://www.docker.com/resources/what-container
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker containers - images become containers when they run on Docker Engine.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.

Use QEMU in GitLab CI instead of Docker image

GitLab CI is highly integrated with Docker.
But if a project depends on interaction with the Linux kernel, as LUKS does, it cannot work properly in a Docker container.
The cryptsetup project uses Travis CI instead of GitLab CI even though it is hosted on gitlab.com; I don't know whether that is just the project maintainer's personal preference.
Hence, is it possible to run QEMU or Firecracker instead of Docker?
Is there an equivalent alternative to Travis CI within GitLab?
This is not yet supported.
A recent (mid-2019) gitlab-org/gitlab-runner issue 4338 mentions Kata Containers with Firecracker VMs as one possible alternative to Docker Machine for autoscaling, but this is still being studied.
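In the meantime, one workaround people use is a runner with the shell executor that launches QEMU from the job script itself. A rough sketch, assuming QEMU, a kernel image, and a disk image already exist on the runner host (the runner tag is hypothetical):

# .gitlab-ci.yml -- rough sketch; requires a shell-executor runner with QEMU installed
kernel-test:
  tags: [shell-qemu]    # hypothetical tag for such a runner
  script:
    - qemu-system-x86_64 -m 1024 -nographic -drive file=test.img,format=raw -kernel bzImage -append "console=ttyS0"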

Allow one private registry with docker

I need to block all registries and allow only one private registry for Docker to pull images from. How can that be done natively in Docker?
Using the RedHat options will not work on the upstream Docker CE or EE engine: RedHat forked the Docker engine and added their own incompatible features. You'll also find that /etc/sysconfig/docker is a RedHat-only configuration file, designed to work with their version of the startup scripts. And I don't believe RedHat supports this old fork anymore either, preferring their own Podman and CRI-O runtimes.
A hard limit on registry servers is not currently supported in the Linux Docker engine. The standard way to implement this for servers is with firewall rules on outbound connections that permit only a known allowlist of destinations. You still need to ensure that users don't import images from a tar file or rebuild the otherwise-blocked images from scratch (for example, all of the official images on Docker Hub have their source available for rebuilding).
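For instance, a rough outbound allowlist sketch with iptables (the registry address is hypothetical, and a real setup also has to handle DNS and avoid breaking unrelated traffic):

# Allow HTTPS only to the approved registry; drop other outbound HTTPS
iptables -A OUTPUT -p tcp -d registry.example.com --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j DROP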
With Docker Desktop, the ability to restrict what registries a user can pull from has been added in their paid business tier with their image access management.
Previously I might have suggested using Notary and Docker Content Trust to ensure you only run trusted images, but that tooling has a variety of known issues, including the use of TOFU (trust on first use), which allows any image from a previously unseen repo to be signed by anyone and trusted to run. There are a few attempts to replace it, and the current leader is sigstore/cosign, but that isn't integrated directly into the Docker engine. If you run in Kubernetes, this would be configured in your admission controller, such as Gatekeeper or Kyverno.
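For reference, the basic cosign workflow looks roughly like this (key files and image name are placeholders):

# Sign an image and verify the signature with sigstore/cosign
cosign generate-key-pair
cosign sign --key cosign.key registry.example.com/app:1.0
cosign verify --key cosign.pub registry.example.com/app:1.0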
Just found this in the RedHat docs:
It can be done in the Docker daemon config:
/etc/sysconfig/docker
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=registry.access.redhat.com'
and then do:
systemctl restart docker

Hyperledger Fabric v1 network on physical peers

How do I set up a Hyperledger Fabric v1 network on physical peers instead of Docker peers?
You can take a look at https://github.com/yacovm/fabricDeployment
It deploys automatically to Linux virtual machines / physical hosts:
A few peers, according to your configuration
A solo orderer
Everything with TLS
Creates a channel and installs and invokes example02 chaincode for sanity testing
The Docker containers provide a mechanism that takes care of a lot of configuration behind the curtain, and that is the preferred way. If you choose to run Fabric directly on a server without Docker, one way would be to build the binaries yourself via the make command and then deconstruct the steps and configs from 1) the shell script in Getting Started and 2) the docker-compose file (in http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html), but this will be a pretty involved process; a sketch follows.
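A very rough sketch of that manual path (paths are illustrative; a real network still needs crypto material, a genesis block, and channel artifacts):

# Build native binaries, then start an orderer and a peer by hand
make orderer peer
export FABRIC_CFG_PATH=$PWD/sampleconfig    # sample configs shipped with the source tree
./.build/bin/orderer &                      # output directory varies by release
./.build/bin/peer node start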

How to install docker daemon when resizing data center cluster size in Mesosphere?

We're thinking about using mesos and mesosphere to host our docker containers. Reading the docs it says that a prerequisite is that:
Docker version 1.0.0 or later needs to be installed on each slave
node.
We don't want to manually SSH into each new machine and install the correct version of the Docker daemon. Instead we're thinking about using something like Ansible to install Docker (and perhaps other services that may be required on each slave).
Is this a good way to solve it, or do Mesosphere/DC/OS or other Mesos ecosystem components have other ways of dealing with this?
I've seen the quick intro where someone from Mesosphere just uses dcos resize to change the cluster size on the Google Cloud Platform. Is there a way to hook into this process and install additional services on the (Google) instances once they have booted? Or is this something we should avoid, instead using a "pre-baked image"?
In your own datacenter, using your favorite configuration tool such as Ansible or Salt is probably a good choice.
In the cloud it might be easier to use virtual machine images that provide Docker out of the box; for example, DC/OS on AWS uses CoreOS, which ships with Docker preinstalled. It shouldn't be too difficult with Ubuntu either.
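As a sketch of the Ansible route (the host group name is hypothetical; the modules are standard Ansible):

# install-docker.yml -- minimal sketch for Debian/Ubuntu slave nodes
- hosts: mesos_slaves
  become: true
  tasks:
    - name: Install the Docker engine
      apt:
        name: docker.io
        state: present
        update_cache: yes
    - name: Ensure the Docker daemon is running and enabled
      service:
        name: docker
        state: started
        enabled: yes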
