What is the difference between running a quick-start version of Hyperledger Iroha and building Iroha? - hyperledger

The documentation provided at https://iroha.readthedocs.io
highlights two different sections, titled Building Iroha and Quick Start Guide (which runs an example test version of Hyperledger Iroha). If any experts here could explain the difference between these two, I would be thankful.
Thanks!

The Quick Start Guide provides instructions for running Iroha on Docker - it is the fastest and easiest way.
On the other hand, building Iroha from scratch is not really complicated either, because you just need to copy a few commands, and almost all dependencies are downloaded automatically by vcpkg or with CMake.
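For a rough feel of the difference, here is a hedged sketch of both routes. The hyperledger/iroha image does exist on Docker Hub, but the exact flags, tags and build options below are illustrative and may not match the current documentation:

    # Quick Start: run a prebuilt image (illustrative flags)
    docker pull hyperledger/iroha:latest
    docker run -d --name iroha hyperledger/iroha:latest

    # Building from source: clone, let vcpkg/CMake resolve dependencies, compile
    git clone https://github.com/hyperledger/iroha.git
    cd iroha
    cmake -B build -DCMAKE_BUILD_TYPE=Release    # may need a vcpkg toolchain file first
    cmake --build build -j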
About other differences:
When you use the Docker image: it is faster to set up, it is harder to make a mistake, and it is more likely that the Docker version is fully working.
When you build from scratch: you need to read more and find the dependencies yourself (they are listed for Debian-based Linux distributions, but for Manjaro you need to find them on your own). You also need to wait longer. And, most importantly, you cannot be sure that your build will work, or even compile (if something has changed in the dependency libraries).
Personally, despite all those disadvantages, I prefer to build manually, because I would rather compile on my system without an extra layer like Docker.

Related

Emacs workflow with development containers

New to Emacs and have recently been trying to get used to it. Loving it so far!
One thing I cannot seem to figure out by myself, nor find any proper examples of, is the following workflow:
Since I work on multiple projects with different languages and like to keep my work and private projects separated as much as possible in my OS, I've been working with development containers using Docker and VScode for the past years.
This allowed me to keep both my project dependencies and the development tools in one container, where I just attached my VScode instance to a project and used extensions such as language servers / linting / debugging from within that container.
Currently I can open my projects in Emacs as the code is local and mounted into the containers, but I'm looking for a way to either:
Allow my local emacs to use the language/linting/debugging services installed in the container.
Install emacs in the dev containers and mount my configs to keep this synchronized.
Or better alternatives?
Most valuable would be to get language servers working again.
In case it matters, I'm working in DOOM Emacs on Arch. Mostly Python, PHP and NodeJS projects.
... use the language/linting/debugging services installed in the container
This is difficult to do with Docker: by design, the host system can't directly access files or binaries installed in a container. Without a lot of tricks around bind mounts, user IDs, paths and permissions, it is very difficult to run a program in a container in a way that looks like it's on the host system. A couple of tools have those tricks built in, but it's not at all universal. (Jenkins, for example, generates about 5 lines' worth of docker run command options if you ask it to run a step inside a container.)
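To give a sense of what those tricks look like, here is a hedged sketch of the kind of docker run invocation such tools generate so that a containerized binary behaves a bit like a host one (the image name and tool flags are purely illustrative):

    # run a tool from an image as if it were local: mount the project,
    # keep the working directory, and match the host user so file ownership stays sane
    docker run --rm \
      -v "$PWD":"$PWD" -w "$PWD" \
      -u "$(id -u)":"$(id -g)" \
      some-toolchain-image:latest \
      some-linter --fix    # hypothetical tool and flag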
My Emacs experience has generally been much better using a host-based per-language version manager and per-project packaging tool (a per-project node_modules directory, rbenv plus Ruby gem sets, pipenv for Python programs, ...).
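As a rough illustration of that host-based approach (the version numbers are arbitrary, and pyenv here simply stands in for whichever version manager fits your languages):

    # pin an interpreter per project, then keep dependencies project-local
    pyenv install 3.11.9
    pyenv local 3.11.9     # writes .python-version in the project directory
    pipenv install         # Python deps go into a project-specific virtualenv
    npm install            # NodeJS deps land in ./node_modules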
In short: Emacs can't use language servers, language interpreters, or other tools from Docker images instead of the host system, at least not without writing a lot of Lisp (and if you do, consider publishing it to MELPA, and also to GitHub).
Most valuable would be to get language servers working again.
M-x lsp-install-server will download one of the language servers lsp-mode knows about and save it under your $HOME/.emacs.d directory. If you activate lsp-mode and it doesn't already have a language server for the current major mode, it will offer to download it for you. There's usually not much to "get working".

Is a Dockerized dev environment good for maintaining legacy software?

Let's say I have an old, unmaintained application that lives on a VPS (i.e. a Symfony 3 PHP app that relies on PHP 5).
If some changes are needed, I have to clone this app to my desktop, build it, make the change and re-deploy. As time goes on, recreating the desktop dev environment gets harder - in this example I can't simply build the app, as the PHP 7 in my CLI breaks the build process.
I tried to dockerize the app, so I added Ubuntu 18 to my docker-compose file... and it doesn't work, as the latest Ubuntu with PHP 5 support is 14.04. 14.04 is also the oldest (official) version available on Docker Hub. But will it still be available in 3 years? If not, Docker won't be able to build the image.
So, my question is: is Docker the right tool to solve the described problem at all?
If so, should I back up the Docker images that my build relies on?
If not, besides proper maintenance, what tool is better?
You can install PHP 5 in newer Ubuntu versions, but it means adding an external repository.
You could also create your own Docker image, containing only the libraries you want. If so, I'd advise trying Alpine as a base image. There is a bit of a learning curve to adapt, but once you do, you'll have a small image tailored to your needs.
Given that containers allow you to isolate processes and configuration with a minimal footprint compared to a VM, I think it is the best option. Tailoring and maintaining your own image is not that expensive in terms of maintenance if you document it correctly, and it will allow you to always have a system that meets your precise requirements.
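For example, a hedged Dockerfile sketch along those lines, assuming the widely used ondrej/php PPA still ships a PHP 5.6 build for that Ubuntu release (package names are illustrative and worth double-checking):

    FROM ubuntu:18.04
    # add an external repository that still carries PHP 5.6
    RUN apt-get update && apt-get install -y software-properties-common \
     && add-apt-repository -y ppa:ondrej/php \
     && apt-get update \
     && apt-get install -y php5.6-cli php5.6-mysql composer \
     && rm -rf /var/lib/apt/lists/*
    WORKDIR /app
    COPY . /app
    RUN composer install --no-dev    # build the Symfony app inside the image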

Docker query on containerizing

Our requirement is to create containers for our legacy apps on Docker.
We don't have the operating system support/application server support available, nor do we have knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required; our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know if there is any specific procedure or steps associated with it.
If no, please tell me why, and the possible workarounds.
if we get this ISO image can we export this as a docker image?
I don't think there is an easy way (like push-the-export-button) to do this. Explanation follows...
You are describing a procedure taking place in the Virtual Machine world. You take a snapshot of a server, move the .iso file somewhere else and create a new VM that will run on a Hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run but not a whole operating system. They are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means that you will have to find out what the legacy app uses (for example Apache + PHP + MySql + app code) and build it from scratch with Docker.
Some thoughts:
Containers are supposed to be lightweight. For example, one might use one container for the database, another one for Apache, etc. (a minimal compose sketch of this follows below). Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall... For example, if we are talking about something running on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point might be...
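To make the "one container per service" idea concrete, here is a minimal docker-compose sketch for a hypothetical old PHP + MySQL app (the image tags and credentials are purely illustrative):

    version: "3"
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example     # illustrative only
      web:
        image: php:7.2-apache
        volumes:
          - ./app:/var/www/html            # mount the legacy code
        depends_on:
          - db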
You will need to reverse engineer the application dependencies from the artifacts that you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages/dependencies that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, Composer, or Maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like twig and monolog loaded through Composer. If you are using SASS, you may need to install Ruby as well.
Your job is to track all these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack like a J2EE app in Tomcat, or a PHP app fronted by Apache or nginx, there will be base Docker images that will get you most of the way to where you need to go.
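Using the PHP example above, a hedged sketch of what such a Dockerfile might look like; every package and path here is an assumption standing in for whatever you actually find on the legacy server:

    FROM php:7.2-apache
    # OS-level dependencies recovered from the legacy server
    RUN apt-get update && apt-get install -y build-essential ruby-sass curl \
     && docker-php-ext-install mysqli \
     && rm -rf /var/lib/apt/lists/*
    WORKDIR /var/www/html
    # language-level dependencies (twig, monolog, ...) listed in composer.json
    COPY composer.json composer.lock ./
    RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
     && composer install --no-dev
    # finally, the application code itself
    COPY . .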
It does look like there are some tools that can do this for you automatically: Dependency Walker equivalent for Linux?. I can't vouch for any of them. But you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
sudo dnf history userinstalled
When an app is using a dependency manager like Composer or pip, there is usually a file that lists all the language-specific dependencies.
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

Compile TensorFlow from source with Docker to get a CPU speed-up

I am looking for a way to set up or modify an existing Docker image for installing TensorFlow such that the SSE4, AVX, AVX2, and FMA instructions can be utilized for a CPU speed-up. So far I have found how to install from source using bazel (How to Compile Tensorflow... and CPU instructions not compiled...). Neither of these explains how to do this within Docker. So I think what I am looking for is what you need to add to an existing Docker image that installs without these options, so that you can get a version of TensorFlow compiled with the CPU options enabled. The existing Docker images do not do this because they want the image to run on as many machines as possible.
I am using Ubuntu 14.04 on a Linux PC. I am new to Docker but have installed TensorFlow and have it working without getting the CPU warnings I get when I use the Docker images. I may not need this for speed, but I have seen posts that claim the speed-up can be significant. I searched for existing Docker images that do this and could not find anything. I need this to work with GPU, so it needs to be compatible with nvidia-docker.
I just found this Docker support for bazel, and it might provide an answer; however, I do not understand it well enough to know for sure. I believe it is saying that you cannot build TensorFlow with bazel inside a Dockerfile - you have to build a Dockerfile using bazel. Is my understanding correct, and is this the only way to get a Docker image with TensorFlow compiled from source? If so, I could still use help in how to do it and still get the other dependencies that I would get if using an existing Docker image for TensorFlow.
Dockerfiles that build with CPU support can be found here.
Hope that helps! Spent many a late night here on Stack Overflow and Github Issues and stuff. Now it's my turn to give back! :)
The GPU stuff in particular is really hairy - especially when enabling the XLA/JIT/AOT stuff as well as the Graph Transform Tools.
Lots of hacks embedded in my Dockerfiles. Feel free to review and ask me questions!
The contributing guidelines mention building TensorFlow from source with Docker to run the unit tests:
Refer to the CPU-only developer Dockerfile and GPU developer Dockerfile for the required packages. Alternatively, use the said Docker images, e.g., tensorflow/tensorflow:nightly-devel and tensorflow/tensorflow:nightly-devel-gpu, for development to avoid installing the packages directly on your system.
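As a hedged sketch, building on one of those devel images, the compile step inside Docker looked roughly like this at the time (the bazel flags are the commonly cited ones for SSE4/AVX/FMA support; image tags, source paths and the configure prompts may have changed since):

    # start from a devel image, which already contains bazel and the TensorFlow sources
    docker run -it tensorflow/tensorflow:nightly-devel bash

    # inside the container:
    cd /tensorflow
    ./configure                      # answer the prompts (CUDA support, paths, ...)
    bazel build -c opt \
      --copt=-msse4.2 --copt=-mavx --copt=-mavx2 --copt=-mfma \
      //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    pip install /tmp/tensorflow_pkg/tensorflow-*.whl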

Making use of docker for development: a use case

My question is a little vague, but I have tried looking for the answer here and there and could not work out whether I can leverage Docker for my work. My requirements:
I usually try different versions of Java, Python and other software, like different versions of Eclipse, Linux packages and other tools. In the end this makes my Ubuntu installation a complete mess and sometimes completely broken. Then I started using VMs; that solved most of the problems, but makes my PC very slow for frequent switching.
So my question: can I do my work using Docker without affecting my OS? Can I run GUI applications and install different packages without affecting the underlying OS?
Switch actively between different Docker containers and the underlying OS.
Clean/remove unused or broken Docker instances (containers?), etc. Any pointer to a similar use case or how-to would be helpful.
Thanks.
PS: if it doesn't fit on SO, then please move it to where it fits best. Sorry for the non-programming question.
Can it be done?
Yes, there are examples of Docker images that run graphical applications, but running those containers might be a bit tricky. See for instance Can you run GUI apps in a docker container?
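The usual trick from that answer is to share the host's X11 socket with the container, roughly like this (a sketch for a Linux/X11 host; Wayland or remote displays need extra work, and the image name is hypothetical):

    # allow local clients to talk to the X server, then pass the display through
    xhost +local:
    docker run --rm \
      -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      some-gui-image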
Is Docker the right tool for your problem?
Maybe a package manager such as Nix would be better suited, as graphical software installed with it won't have any issues. With Nix you can install many versions of a single piece of software side by side, without interference.
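For instance, with Nix you can drop into ad-hoc shells that each see a different version of the same tools, without touching the underlying OS (the attribute names below follow the usual nixpkgs naming and may differ between channel versions):

    # one shell with JDK 11 and Python 3.10, another with JDK 17 and Python 3.12
    nix-shell -p jdk11 python310
    nix-shell -p jdk17 python312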
