I'm part of a team developing a machine learning application.
Currently, we're using Vagrant with a Docker provider as a uniform dev environment.
We want to utilize the GPUs on our machines when we play around during development, and I found that Nvidia released nvidia-docker to enable GPU access for a plain Docker container.
How can I use nvidia-docker as a provider for Vagrant?
If it's not possible, is there any equivalent solution?
It is important for us to develop on top of the same Docker image that we deploy, since we depend on multiple interacting open-source libraries and we want to manage them in one place (no dependencies breaking when deploying).
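For reference, this is roughly how we use nvidia-docker for a plain container today, outside of Vagrant (the image tag is omitted and purely illustrative; the --gpus variant assumes Docker 19.03+ with the NVIDIA Container Toolkit installed):

    # classic nvidia-docker wrapper, as in NVIDIA's README
    nvidia-docker run --rm nvidia/cuda nvidia-smi

    # newer equivalent using Docker's built-in GPU support (19.03+ with the NVIDIA Container Toolkit)
    docker run --rm --gpus all nvidia/cuda nvidia-smi

What we can't figure out is how to get the same thing through Vagrant's Docker provider.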
Related
Are containers specific to a particular host OS? For instance, if a container is created on Windows with particular dependencies (e.g., DLL files), can it run in a setup in which the host OS is Linux? I initially assumed that a container must be specific to a particular host OS.
But the following two excerpts seem to suggest that I may not have understood the mechanics correctly. So my question is: are containers built on top of the Docker engine, so that once the dependencies are included they are relative to the Docker engine, and the underlying host OS does not matter?
(1) From IBM:
Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors. For example, when a developer transfers code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system. Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run. This single package of software or “container” is abstracted away from the host operating system, and hence, it stands alone and becomes portable—able to run across any platform or cloud, free of issues. [https://www.ibm.com/cloud/learn/containerization]
(2) From Docker:
Does Docker run on Linux, macOS, and Windows?
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64).
Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
What does Docker technology add to just plain LXC?
Docker technology is not a replacement for LXC. “LXC” refers to capabilities of the Linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations. On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:
Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object called a container. This container can be transferred to any Docker-enabled machine. The container can be executed there with the guarantee that the execution environment exposed to the application is the same in development, testing, and production. LXC implements process sandboxing, which is an important pre-requisite for portable deployment, but is not sufficient for portable deployment. If you sent me a copy of your application installed in a custom LXC configuration, it would almost certainly not run on my machine the way it does on yours. The app you sent me is tied to your machine’s specific configuration: networking, storage, logging, etc. Docker defines an abstraction for these machine-specific settings. The exact same Docker container can run - unchanged - on many different machines, with many different configurations.
The host OS, or more precisely the kernel it provides, still matters. That's why you can't run Windows containers on Linux. You can run Linux containers on Windows thanks to Hyper-V and WSL2, and on macOS thanks to its hypervisor framework, but that's it. If the provided kernel is compatible (it doesn't have to be identical), usually a similar version and the same architecture (remember, there are x86-64, ARM64, etc.), or at least supported emulation (x86-64 containers can run on an M1, which is ARM64), then you can just run the container. There is no need to worry about DLLs, because they are supposed to be included either in one of the base images you start from or in the image you build.
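To make the architecture point concrete, a hedged example (the image name is only illustrative): asking for a foreign platform explicitly makes Docker run the image under emulation where that is supported.

    # on an ARM64 host (e.g. Apple M1), run an x86-64 image under emulation
    docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
    # prints "x86_64" even though the host CPU is ARM64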
Short description:
Is it possible to reference an SDK (or any folder) in a Docker container from the host computer?
Long description:
My team and I work in different environments (Windows & Mac) and on different stacks (ASP.NET MVC / Elixir & Phoenix).
I'm trying to help everyone by creating separate Docker stacks for each solution (or group of projects).
What I have been able to do is set up the Docker stacks so that each solution can be run in one or more Docker containers, and the developers can work on the code locally (using direct host path mounts/volumes) using an IDE of their choosing.
The issue is different solutions use different SDKs or even different versions of the same SDKs.
So what I would like to do is set it up so that anyone on the team can reference the SDK installed in the Docker container, instead of installing every SDK, and every version of each SDK, they need for all the projects.
As far as I can tell, if I create a host mount binding, it will overwrite what's in the container with what's on the host, but I'd like to do it the other way round: I'd like to create a binding between the Docker container and the host and have the contents of the Docker container show up on the host.
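To make the behaviour I mean concrete (the image name and SDK path below are just examples I have been experimenting with): a host bind mount hides whatever the image has at that path, while an empty named volume gets pre-populated with the image's content on first use, which seems to be the closest built-in way to get at files from inside the container.

    # bind mount: the (possibly empty) host folder hides the SDK baked into the image
    docker run --rm -v "$PWD/sdk:/usr/share/dotnet" mcr.microsoft.com/dotnet/sdk ls /usr/share/dotnet

    # named volume: on first use Docker copies the image's content at that path into the volume
    docker volume create sdk-data
    docker run --rm -v sdk-data:/usr/share/dotnet mcr.microsoft.com/dotnet/sdk true
    docker run --rm -v sdk-data:/data alpine ls /data    # the SDK files are now in the volume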
Is this possible? Is there a better way to achieve this?
SDK images from vendors (e.g. the ASP.NET Core SDK images from Microsoft) are best suited to compile/build time, while their lightweight runtime counterparts are recommended for the hosting/deployment environment.
The sole purpose of the compile/build SDK images is to create Docker runtime images at the build stage, especially when the target runtime OS (Linux) differs from the development machine OS, e.g. Windows. Used efficiently with the multistage builder pattern inside a Dockerfile, they let you create much lighter runtime images for hosted environments.
For example, ASP.NET Core SDK images are used to build Docker images that are then run locally with host:guest port mapping. If the dev machine runs a Linux distro, using SDK images is even more convenient, since you can test and validate against multiple SDK images. These images only need their exact name; the Docker daemon downloads them automatically and uses them whenever required. This does benefit from good IDE orchestration support, such as what Visual Studio provides for Docker-based development on Windows 10; otherwise, simply use the Docker CLI to build and run.
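A minimal sketch of that multistage pattern (the image versions, port, and MyApp.dll name are placeholders, not taken from your projects):

    # Dockerfile: SDK image for the build stage, lightweight runtime image for the final stage
    FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app

    FROM mcr.microsoft.com/dotnet/aspnet:6.0
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "MyApp.dll"]

Then build once and run locally with a host:guest port mapping:

    docker build -t myapp .
    docker run --rm -p 8080:80 myapp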
Hope this helps clarify your options, if not provide a full solution.
There was a project thrown my way recently that involves the orchestration of several (Linux capable) embedded devices, deploying software to them, and allowing for the applications to be updated when the code base updates in a git repo.
The initial thought was to make a standard image for each device, and I set out attempting to install Docker on a UDOO Quad and an Intel Edison to start, but without any success up to this point.
My thinking is that it seems like a good idea to install Docker on embedded devices, but if that were the case, surely it would have been ported by now. The only group out there that seems to be making these efforts is Resin.io.
Is there something I'm missing, or is there a clear reason why Docker doesn't make sense on embedded devices? If there isn't a reason, and it does make sense to run Docker on embedded systems, is there something I've overlooked out there: are there any sources of discussion on porting, or how-to's that cover this?
I have considered running Docker on embedded devices (a MIPS system), but didn't go that way. There are some problems with it, in my humble view:
Docker is implemented in Go. There is currently no available toolchain for MIPS to compile Go; you would need to create the toolchain yourself using gcc-go.
Docker is larger than LXC. On a desktop computer this is not a problem, but embedded devices have limited flash storage.
Docker uses some quite up-to-date features of the Linux kernel. The kernel version on embedded devices is often not so new, and back-porting is needed to make things work.
A Docker image has to be built for the same architecture as the runtime environment. This means that if you want to run a Docker container on a Raspberry Pi, the image has to be built on an ARM-architecture system. QEMU can be used to build Docker images in the cloud, but it doesn't support all CPU architectures used in embedded systems (for example, it currently doesn't support MIPS). (A sketch of today's cross-building workflow follows at the end of this answer.)
In the end, LXC was chosen for the specific task of running a container on the embedded device. It has limited features compared to Docker, but it currently suits the requirements of the project.
As of 2019, I would like to update this answer, since I did port Docker to an embedded system with an ARM CPU. At the price of extra flash and memory usage, Docker gives you container management, image management, and many ready-to-run images from Docker Hub. So the decision is a balance between cost and features.
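On the cross-building point above: these days the usual route is QEMU emulation through docker buildx rather than building on the target device itself. A rough sketch (the image name is a placeholder; multi-platform builds need a buildx builder and a registry to push to):

    # register QEMU binfmt handlers so foreign architectures can be emulated
    docker run --privileged --rm tonistiigi/binfmt --install arm,arm64

    # create a builder and cross-build for ARM targets from an x86-64 machine
    docker buildx create --use
    docker buildx build --platform linux/arm/v7,linux/arm64 -t myorg/myapp:latest --push .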
Here is an update for 2018:
You can work with Docker on embedded devices such as the Raspberry Pi and Orange Pi quite easily now because of advancements in the Raspbian and Armbian operating system images. Specifically, both types of devices and their respective OS images now ship kernels recent enough to install Docker without any problems (at least version 3.10, though both now offer 4.x+ versions).
Your desire for faster rates of change can definitely be realized by using embedded Docker. I can say from experience that I have tested and regularly run the approach you describe. Basically, you start with a base operating system image such as Raspbian or Armbian, tweak that operating system enough that it's secure and has Docker installed, and then you use Docker for handling development iteration and application updates.
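For example, on a reasonably recent Raspbian or Armbian image, installing Docker is typically just the official convenience script; treat this as the happy path, since kernel and OS support still vary by board:

    # install Docker via the official convenience script and verify it works
    curl -fsSL https://get.docker.com | sh
    sudo usermod -aG docker $USER    # optional: run docker without sudo (re-login required)
    docker run --rm hello-world      # hello-world is multi-arch, so this also works on ARM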
As an aside, if you are interested in running Docker on embedded Linux devices, then I recommend you check out a free, open-source, MIT-licensed command line tool I wrote to help developers work with embedded Docker on multiple devices at once: https://github.com/ForwardLoopLLC/floopcli .
Even if you are not interested in the tool itself, the documentation for the tool describes several patterns for working with Dockerized applications across multiple devices in multiple languages: https://docs.forward-loop.com/floopcli/master/index.html . The materials there should serve as a starting point for porting applications to Docker and then deploying them on embedded devices. The documentation also addresses some embedded device subtleties, such as differences between ARMv6 and ARMv7. Hopefully this helps you get started!
There is a great article on LinkedIn describing one developer's experience with that:
https://www.linkedin.com/pulse/whale-jar-when-running-docker-embedded-linux-good-thing-fletcher#pulse-comments-urn:li:article:7736487387895237975
Embedded systems often have a very slow rate of change. Docker works well with a minimal base build and then layering on top of it. If you are willing to accept the overhead of running Docker on a minimal embedded system in exchange for Docker's build system and a steady rate of change, then you could explore it.
I would like to use the Spark-GraphX packages available to Neo4j through Mazerunner; however, I am an analyst and not a software person. I am running Windows 7 on my laptop and Neo4j 2.3.0, and would like a step-by-step guide explaining how I can set up Mazerunner for both Community & Enterprise. There's a lot of mention of Docker and containers, and I have no idea what these are or how to set them up. Simple instructions would be of sooo much help! :)
Docker is primarily an operating-system-level virtualization technology designed to run on Unix-based systems (Linux, Mac, FreeBSD). Luckily, Docker also provides a Windows version that does roughly the same thing on Windows.
What happens is, after you have installed Docker, it allows you to run what they call containers, which behave somewhat like lightweight virtual machines on top of your host (Windows 7 running Docker). This allows you to run services like Neo4j in an isolated environment. Docker also allows you to download and install pre-configured, pre-compiled images that usually provide some sort of service or have some software pre-installed.
In your case, I believe all you have to do is:
First install Docker
Use "Docker Compose" to download and install the images.
Continue reading the tutorial, as you have now installed the required Docker images.
Note: some of the operations, like the one in step 2, will require command-line access and also the creation of a "docker-compose.yml" file, so be sure to visit all the links I have provided. Spend a little time going through them and you should be alright.
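Just to show what such a file looks like in general (the actual service and image names for Mazerunner come from the tutorial's own docker-compose.yml; this is only a generic illustration using the official Neo4j image):

    # docker-compose.yml: one service, exposing the Neo4j browser on port 7474
    version: "2"
    services:
      neo4j:
        image: neo4j
        ports:
          - "7474:7474"

Then, from the folder containing the file, running "docker-compose up -d" downloads the image and starts the container.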
PS: great blog. definitely bookmarking it!
Due to a hardware issue, I had to change my work station to another Mac for a few weeks.
It took me a couple of hours to set up everything: Android Studio, git, Apache, MySQL, etc...
Could I use a docker image to bundle all my development tools ?
(My goal is to have a "backup" of my development environment that I could start running right away on another machine)
Could I use a docker image to bundle all my development tools ?
That means all your dev tools would be Linux tools working in a Linux container, on a Linux host.
You would need to provide that Linux host (on your Mac) through a boot2docker Virtual Machine.
But that also means you could not directly type "git" from a Mac shell; you would need to connect to your VM first in order to launch your 'git' container and run some docker run --name=git commands.
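To make that concrete, a rough sketch of what each 'git' call would turn into (the image name and mount path are only illustrative; alpine/git is a small image whose entrypoint is git):

    # instead of plain "git", every call becomes a container run against a mounted repo
    docker run --rm -v "$PWD":/repo -w /repo alpine/git status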
So no, this doesn't seem to be a good fit for your backup plan on Mac.
Not necessarily. It kinda depends what you are looking for in a development environment.
I do use it for part of my dev env though.
Vagrant + Docker
My personal approach is to rely on Vagrant to fire up a bunch of environments, some of which are full-fledged VMs and others lightweight containers.
This is a rather controversial approach though, many people would not agree with it, as the tools overlap, both in terms of platform capability and provisioning.
Docker Containers for 3rd Party Services
My personal approach for this is to use Vagrant to fire up a bunch of different VMs, where one is my main dev VM with tools I use for development (IDEs, editors, SCM tools, etc...), and the rest are Docker containers for 3rd party apps that relate to my daily activities (IRC client, database servers like MySQL or MongoDB, etc...).
This fits my cycle decently as these types of tools (like databases) are not something you generally interact with directly through a tty, but something I'd rather connect to with another tool via an API. So I don't need direct access to them, and I do want them to be isolated and easy to jumpstart and dispose of when I jump between projects.
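For example, the throwaway service containers I mean look roughly like this (image tags and the password are placeholders):

    # disposable per-project services: easy to jumpstart, easy to throw away
    docker run -d --name myproj-mysql -e MYSQL_ROOT_PASSWORD=devpass -p 3306:3306 mysql
    docker run -d --name myproj-mongo -p 27017:27017 mongo

    # when switching projects, just remove them
    docker rm -f myproj-mysql myproj-mongo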
So, docker containers fit part of my idea of a dev environment, but not necessarily all of it.
Just my use case though. Hope it helps.
Shameless plug: Docker Shell
This tool lets you set up a uniform, cross-platform development environment inside a Docker container.
http://dockershell.io/