Are there any tools to install/deploy a Docker image as a standalone/portable installation?
The idea is that you don't have to install Docker manually beforehand: just one installation, and it will deploy and run your Docker image, and perhaps also autostart it on boot.
I am mainly interested in Windows and OSX, but Linux would be nice too.
You can get a standalone Docker image with preconfigured scaling options automatically, using an already-packaged Docker engine. The details of this solution and its installation are described in the instructions.
I don't think this is even possible; Docker has too many dependencies.
(Linux and/or OSX)
A much easier way would be a bash script which runs the Docker installation and afterwards starts the container. That shouldn't be too time consuming; a minimal sketch follows.
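A sketch of such a script, assuming Linux and assuming the get.docker.com convenience installer is acceptable in your environment; the image name and tarball are placeholders:

#!/bin/bash
# Sketch: install Docker if it is missing, then load and run a bundled image.
# "myapp" and myapp-image.tar are placeholders for your own image.
set -e
if ! command -v docker >/dev/null 2>&1; then
  curl -fsSL https://get.docker.com | sh    # official convenience installer
fi
docker load -i myapp-image.tar              # image file shipped next to this script
docker run -d --name myapp --restart unless-stopped myapp

The --restart unless-stopped policy makes the container come back up after a reboot, provided the Docker daemon itself starts at boot.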
I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first one is to mount the ISO and find filesystem.squashfs, but only modern distros are laid out that way; my distro doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dist directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
log in to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar czvf image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print the hash (ID) of the imported image
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a snapshot of the Linux distribution. Some binaries can be executed through Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running it with strace was possible but gave no clear indication of the problem.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option (see the sketch below)
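A sketch of both routes, assuming a GRUB-based distro with Debian/Ubuntu-style paths on the Linux side; file locations may differ on your system:

# Native Linux with GRUB: append vsyscall=emulate to the kernel command line,
# then regenerate the GRUB config and reboot
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&vsyscall=emulate /' /etc/default/grub
sudo update-grub

# Windows WSL2: put the same flag into %UserProfile%\.wslconfig, then restart WSL
# [wsl2]
# kernelCommandLine = vsyscall=emulate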
I seriously doubt you will succeed on that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. The lightweight virtualization of Docker containers comes from running on the host's Linux kernel, which means the kernel is the same inside and outside of the container, as the quick check below shows.
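Both of the following commands print the same kernel release; the alpine image is just a convenient small example:

uname -r                          # kernel release on the host
docker run --rm alpine uname -r   # same release, reported from inside a container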
If you now try to install some old distro inside the container, you may end up with an incompatible combination. Getting the old user space to run on the new kernel may involve upgrading glibc, and upgrading that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.
I am trying to package an Electron app that relies on several Docker containers into a single executable. I would like to be able to convert Dockerfiles into executables runnable on Windows. Is this possible? Can I pull this off without having Docker installed on the client machine? How can I do this?
It is not possible.
Take into account that Docker images are packaged for a certain architecture, and the kernel must be compatible. And of course, you need the Docker engine to work on the client machine. I don't think Windows is ready for this.
I have learned to use Docker as a development server (LAMP and MEAN), and now I feel I should take the next step by removing the PHP and Node binaries from the system and using the binaries from containers instead. So on a fresh Solus install, I set up containers for PHP, Node, Ruby, etc. Solus already recommends using containers for such tasks. But I got stuck on the first day.
I installed VS Code (Code-OSS) and installed extensions on it (Prettier, PHPCS, etc.), and they need the path of the installed binaries (path/to/phpcs, path/to/node, etc.).
I initially set the configured path to
docker run -it --rm herloct/phpcs phpcs
based on https://gist.github.com/barraq/e7f85262bc7a0af2d8d8884d27b62d2c but using a more up-to-date container. It didn't work, so I set it up as an alias, thinking it would fool VS Code into treating it as a native command, but that didn't work either. I have confirmed that running those commands directly from the terminal does work, but the VS Code PHP IntelliSense extension does not want to work.
Any suggestions?
P.S. Any tips for keeping the container running in the background, to avoid the container start-up delay every time I use PHPCS or javac from a container? I can keep the LAMP server running, but every time I use the terminal tools, a new container is started to execute the command and then killed, which causes start-up and shutdown delays.
In case it is still relevant to someone: you might want to create a VS Code development container to use dockerized binaries.
For this to work, a .devcontainer.json is required, which could be as simple as:
{
"image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12"
}
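If you also want VS Code extensions preinstalled inside the container, the file can list them as well. A sketch, keeping the image from above; the extension ID shown is Prettier's, so adjust it to the tools you actually use:

{
  "image": "mcr.microsoft.com/vscode/devcontainers/typescript-node:0-12",
  "extensions": [
    "esbenp.prettier-vscode"
  ]
}

With the Remote - Containers extension installed, VS Code then offers to reopen the folder inside the container, where commands such as node resolve to the binaries in the image.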
Is there any way to build an image from a Dockerfile while using Kitematic?
From the top of the docs for Kitematic:
Legacy desktop solution. Kitematic is a legacy solution, bundled with Docker Toolbox. We recommend updating to Docker for Mac or Docker for Windows if your system meets the requirements for one of those applications.
If possible, you should avoid using the tool.
If you have to use Kitematic, the feature you are asking about is tracked by this GitHub issue: Import Dockerfile - (Docker build). At the time of writing the feature is not implemented.
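For reference, once you are on Docker for Mac or Docker for Windows, building from a Dockerfile is a single command run in the directory containing it (the tag myimage is just an example):

docker build -t myimage .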
I have searched the history a little bit but failed to find a good answer, so I am asking my question here. If there is a good answer already, please redirect me to it. Thanks.
The question is: my company's new-hire doc lists a bunch of software to install to set up the development environment. It usually takes one or two days for a new hire to get everything ready on a new Mac. We want to shorten that process, and the first thing I thought of was Docker.
I read through the Docker user guide and followed some blogs on how to set up a dev environment using Docker, but I am still a little confused about whether Docker applies to our setting. Here are the requirements in detail:
We need to install a bunch of software (much of it customized binaries). Right now we distribute the source code, and a new hire needs to build from source, install it, and set the environment to include the binaries in the path. I am wondering if Docker allows us to install customized binaries into its container?
The source code should not stay in the container; it is still checked out on one's local machine using git. How, then, can I rely on the Docker container's environment to build my software? From what I have found, you need to mount your folder into the container and then shell into the container to build. Is that how it works?
We usually develop on Mac. Does Docker also support Mac containers, or does it just allow you to run Linux containers using Boot2Docker?
Thank you so much in advance for your help.
Some answers :)
First, I think it's a really good idea to use Docker to standardise the development configuration (software, custom packages, environment variables, ...).
With Docker, you can get your customised binaries from the host; it's not a problem. With a RUN instruction in your Dockerfile, you can use bash to install them and add them to your PATH. You can also write a shell script that installs all your stuff, and launch this script when you build your container (see the sketch after this list).
Your code will stay on the host, and you can "mount" a host folder into your Docker container with the -v option, e.g. docker run -v /home/user/code:/tmp/code your_image. I'll detail below how the developer will use your Docker image.
Yep, you have to use Boot2Docker; it works well.
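A sketch of the install-script idea mentioned above; every path and file name here is a placeholder for your own binaries:

#!/bin/bash
# install-tools.sh - run at image build time, e.g. via RUN ./install-tools.sh
set -e
# copy the company's customized binaries into the image
install -m 0755 /tmp/custom-binaries/* /usr/local/bin/
# expose any extra directories on PATH for login shells
echo 'export PATH="$PATH:/opt/company/bin"' > /etc/profile.d/company-tools.sh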
Once your development image is ready, publish it on the official Docker registry (or host a private registry on your network).
Next, the developer will launch the following Docker command:
docker run --rm -ti -v /home/user/code:/tmp/code your_build_image /bin/bash
This launches a bash terminal in your Docker container (with the code folder mounted at /tmp/code), and the developer will be able to compile the code, e.g. cd /tmp/code && mvn clean install.
Please have a look at this article to learn about volumes: http://jam.sg/blog/mongodb-docker-part-2/
And this one about Dockerfiles: https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
You can also find a lot of Dockerfiles on GitHub (search for "Dockerfile").
If the goal is to speed up the time it takes to get a Mac set up and usable in your environment, you might want to look at Boxen.
From the "About" section:
"Boxen is your team's IT robot. It's a dangerously opinionated framework that automates every piece of your development environment. GitHub, Inc. wrote the first version of Boxen (imaginatively called “The Setup”) to help employees start shipping on day one."