Can I use docker for installing ubuntu on a Mac?

I'm using a Mac, but I want to learn and use Ubuntu for development, and I don't care about the GUI. I used to use Vagrant and ssh into the machine, but it consumes a lot of my machine's resources. Can I use docker for the same purpose while also having the isolation of a VM (for when I mess things up)?

First install Docker Desktop for Mac.
Then in a terminal window run: docker run -it --name ubuntu ubuntu:xenial bash
You are now in an Ubuntu terminal and can do whatever you like.
Note: If you are using bionic (18.04) or newer (ubuntu:bionic or ubuntu:latest), you
must run the unminimize command inside the container so that the tools
meant for human interaction get installed.
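For instance, a rough sketch (the extra packages are just examples of tools the minimal image drops):
docker run -it --name ubuntu ubuntu:bionic bash
# then, inside the container:
unminimize
apt-get update && apt-get install -y man-db vim iputils-ping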
To start again after a reboot:
docker start ubuntu
docker exec -it ubuntu bash
If you want to save your changes:
docker commit ubuntu
docker images
Find the untagged image in the list and tag it:
docker tag <imageid> myubuntu
Then you can run another container using your new image.
docker run -it --name myubuntu myubuntu bash
Or replace the original container:
docker stop ubuntu
docker rm ubuntu
docker run -it --name ubuntu myubuntu bash
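As an aside, docker commit can also take the new image name directly, which skips the separate docker tag step (myubuntu is just the example name from above):
docker commit ubuntu myubuntu
docker run -it --name myubuntu myubuntu bash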
Hope it helps

This is one of the few scenarios I wouldn't use Docker for :)
Base images like Ubuntu are heavily stripped down versions of the full OS. The latest Ubuntu image doesn't have basic tools like ping and curl - that's a deliberate strategy from Canonical to minimise the size of the image, and therefore the attack surface. Typically you'd build an image to run a single app process in a container; you wouldn't SSH in and use ordinary dev tools, so they're not needed. That will make it hard for you to learn Ubuntu, because a lot of the core stuff isn't there.
On the Mac, the best VM tool I've used is Parallels - it manages to share CPU without hammering the battery. VirtualBox is good too, and for either of them you can install full Ubuntu Server from the ISO - 5GB disk and 1GB RAM allocation will be plenty if you're just looking around.
With any hypervisor you can pause VMs so they stop using resources, and checkpoint them to save the image so you can restore back to it later.
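With VirtualBox, for example, pausing and checkpointing can be scripted from the Mac terminal; a minimal sketch, where "ubuntu-dev" is a hypothetical VM name:
VBoxManage controlvm "ubuntu-dev" pause               # stop consuming CPU without shutting down
VBoxManage controlvm "ubuntu-dev" resume
VBoxManage snapshot "ubuntu-dev" take "clean-state"   # checkpoint the current state
VBoxManage snapshot "ubuntu-dev" restore "clean-state"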

Yes, you can.
Try searching Docker Hub for the Ubuntu image of your choice (by version and by who maintains the image).
Most of them are well documented, covering what was used to build them, how to run them, and how to access/expose resources if needed.
Check the official one here: https://hub.docker.com/_/ubuntu/
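For example, to pull and run a specific release rather than latest (20.04 is just one of the published tags):
docker pull ubuntu:20.04
docker run -it ubuntu:20.04 bash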

Related

How may I connect to a docker desktop virtual machine on mac? (docker desktop version 2.4)

On a Mac, docker utilizes HyperKit in order to create a LinuxKit VM. This means, among other things, that I cannot see any of the image layers that are pulled down for a given container in places like /var/lib/docker, since the VM controls all of that.
Is there a way to actually get a shell on that VM to be able to do that sort of introspection?
In Docker Desktop 2.4 for Mac, it is possible to get a nearly full terminal into the LinuxKit VM, with sane tab auto-completion, and be able to inspect its contents.
For example, to see the layers of pulled down docker images, you may perform the following commands:
$ stty -echo -icanon && nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock && stty sane
/ # ls -al /var/lib/docker/overlay2/
The nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock command may be run on its own, per the Docker release docs, but if it is not combined with stty as in the example above, you will not see very good output, nor will you have tab completion in the VM.
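If you do this often, the stty/nc combination can be wrapped in a small shell function (a convenience sketch; docker-vm-shell is just a made-up name):
docker-vm-shell() {
  stty -echo -icanon
  nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
  stty sane
}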

How to access the VM created by docker's HyperKit?

Docker for Mac uses a Linux VM created by HyperKit for storing and running containers on Mac.
With Docker Toolbox, I can just open VirtualBox and access the docker-machine VM. But with Docker for Mac, how do I access the VM created by HyperKit?
Update 2019-01-31: thanks to ru10's update, there is now a better way:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Original Answer:
After a while, I found the following way to get a shell on the VM that was created by HyperKit:
Run from terminal:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
You will see an empty screen; press Enter and you will get a login prompt. Log in as root (no password) and you will get a shell.
To exit the session, type Ctrl-A k (then y to confirm).
It is a little bit hacky, but it seems to work for now (as of Sep 2017).
On macOS High Sierra with Docker version 18.06.0-ce-mac70 (26399), use:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
instead of
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
According to this GitHub issue comment by a Docker maintainer, the recommended way to access the VM is through a privileged docker container.
Try logging into the VM (I recommend this instead of using screen on the TTY):
$ docker run -it --privileged --pid=host justincormack/nsenter1
In fact, as smammy says, the answer from augurar is the only one still working as of 2021; the other options are deprecated.
So:
$ docker run -it --privileged --pid=host justincormack/nsenter1
was the right answer and worked for me on macOS Big Sur as of July 2021.
I'm using Docker Desktop 4.7.1 on Mac. As mentioned, some of the good solutions proposed above do not work on newer Docker Desktop versions (the tty link is gone).
I preferred Smammy's solution, which does not involve using an image from an unverified publisher (justincormack/nsenter1, though that image comes from a Docker maintainer and the repository has a lot of stars), especially since the command has to run with the '--privileged' flag, which grants the container full access to the host machine.
This worked for me (using the busybox image, which contains the nsenter utility):
docker run -it --rm --privileged --pid=host busybox nsenter -t1 -m -u -i -n
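For reference, here is the same command annotated with what the standard nsenter flags do:
# --privileged    allows the namespace switch
# --pid=host      shares the VM's PID namespace, so PID 1 is the VM's init process
# nsenter -t1     target PID 1 (that init process)
# -m -u -i -n     enter its mount, UTS, IPC, and network namespaces
docker run -it --rm --privileged --pid=host busybox nsenter -t1 -m -u -i -n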
You can find an explanation of the command at
https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/ (along with a similar suggestion that uses the debian image instead of busybox).
Another solution proposed there (but less convenient, as it does not have auto-completion) is to use netcat:
nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock

docker: When creating a machine, VT-X/AMD-V is already enabled

I'm going through this tutorial
Dockerizing Flask With Compose and Machine - From Localhost to the Cloud
When trying to create a VirtualBox machine with the command below
docker-machine create -d virtualbox dev;
I get the following error:
Error creating machine : Error in driver during machine creation. This computer doesn't have VT-X/AMD enabled. Enabling it in the BIOS is mandatory
(Addendum: I'm running an Ubuntu image in a VirtualBox VM. The physical host is a Windows machine. VT-X/AMD-V is enabled both in the BIOS and in VirtualBox.)
Reading here and there, it seems to be normal behavior because I'm trying to create a VirtualBox VM within a VirtualBox VM.
What command should I use instead of docker-machine ?
Any insights are more than welcome ...
Update: I've asked 3 additional questions to @VonC after his initial answer. Please find the questions below.
1) How should I make the virtualbox and the docker config see that new "virtualbox"?
2) Will the ubuntu box be able to run docker-compose and build the container on that host?
3) If I'm pulling an image like debian, how can I use it as a machine and build a container on top of it?
If you do not want to change the BIOS settings, please run the below command.
I had the same problem because I have Hyper-V Manager installed on my Windows 8 server. To avoid this issue I ran the command with the option below:
--virtualbox-no-vtx-check
Example: docker-machine create default --virtualbox-no-vtx-check
I'm in a VM already, running Ubuntu. The physical host is a Windows machine.
Then you don't need docker-machine.
You would create a small Linux image from Windows with (again, typed in a regular Windows CMD shell):
docker-machine create -d virtualbox dev
But on a full-fledged Ubuntu VM, you just need to install docker and run it directly.
If you need to use docker-machine, just copy (on Windows) v0.6.0-rc1/docker-machine_windows-amd64.exe as docker-machine.exe anywhere you want.
Also: set VBOX_MSI_INSTALL_PATH=C:\Program Files\Oracle\VirtualBox\ (if your VirtualBox is installed there)
You can now use docker-machine create -d virtualbox dev.
2) Will the ubuntu box be able to run docker-compose and build the container on that host?
Yes, no issue there. The installation is straightforward.
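A minimal sketch, assuming a recent Ubuntu where both packages are available in the standard repositories:
sudo apt-get update
sudo apt-get install -y docker.io docker-compose
sudo usermod -aG docker $USER   # optional: use docker without sudo (log out and back in first)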
3) If I'm pulling an image like debian, how can I use it as a machine and build a container on top of it?
You simply write a Dockerfile starting with FROM debian:jessie (see an example here), add some commands (RUN, COPY, ...): for instance:
FROM debian:stable
RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Build it (docker build .) and run it (docker run).
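For instance (my-apache is just an example tag; the port mapping relies on the EXPOSE 80 above):
docker build -t my-apache .
docker run -d -p 8080:80 my-apache   # then browse to http://localhost:8080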

Not enough entropy to support /dev/random in docker containers running in boot2docker

Running out of entropy in virtualized Linux systems seems to be a common problem (e.g. /dev/random Extremely Slow?, Getting linux to buffer /dev/random). Despite using a hardware random number generator (HRNG), the use of an entropy-gathering daemon (EGD) like HAVEGED is often suggested. However, an entropy-gathering daemon cannot be run inside a Docker container; it must be provided by the host.
Using an EGD works fine for docker hosts based on Linux distributions like Ubuntu, RHEL, etc. Getting such a daemon to work inside boot2docker - which is based on Tiny Core Linux (TCL) - seems to be another story. Although TCL has an extension mechanism, an extension for an entropy-gathering daemon doesn't seem to be available.
So an EGD seems like a proper solution for running docker containers in a (production) hosting environment, but how to solve it for development/testing in boot2docker?
Since running an EGD in boot2docker seemed too difficult, I thought about simply using /dev/urandom instead of /dev/random. Using /dev/urandom is a little less secure, but still fine for most applications that are not generating long-term cryptographic keys. At least it should be fine for development/testing inside boot2docker.
I just realized that it is as simple as mounting /dev/urandom from the host as /dev/random into the container:
$ docker run -v /dev/urandom:/dev/random ...
The result is as expected:
$ docker run --rm -it -v /dev/urandom:/dev/random ubuntu dd if=/dev/random of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00223239 s, 459 kB/s
At least I know how to build my own boot2docker images now ;-)
The most elegant solution I've found is running haveged in a separate container:
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
Check whether enough entropy is available:
$ cat /proc/sys/kernel/random/entropy_avail
2066
Another option is to install the rng-tools package and point it at /dev/urandom:
yum install rng-tools
rngd -r /dev/urandom
With this I didn't need to map any volume into the docker container.
Since I didn't want to modify my Docker containers for development/testing, I tried to modify the boot2docker image instead. Luckily, the boot2docker image is built with Docker and can be easily extended. So I've set up my own Docker build, boot2docker-urandom. It extends the standard boot2docker image with a udev rule found here.
Building your own boot2docker.iso image is as simple as:
$ docker run --rm mbonato/boot2docker-urandom > boot2docker.iso
To replace the standard boot2docker.iso that comes with boot2docker you need to:
$ boot2docker stop
$ boot2docker delete
$ mv boot2docker.iso ~/.boot2docker/
$ boot2docker init
$ boot2docker up
Limitations: from inside a Docker container, /dev/random still blocks, most likely because Docker containers do not use the host's /dev/random directly, but the corresponding kernel device - which still blocks.
Alpine Linux may be a better choice for a lightweight docker host. Alpine LXC & docker images are only 5 MB (versus 27 MB for boot2docker).
I use haveged on Alpine for LXC guests & on Debian for docker guests. It gives enough entropy to generate gpg / ssh keys & openssl certificates in containers. Alpine now has an official docker repo.
Alternatively build a haveged package for Tiny Core - there is a package build system available.
If you have this problem in a Docker container created from a self-built image that runs a Java app (e.g. created FROM tomcat:alpine) and you don't have access to the host (e.g. on a managed k8s cluster), you can add the following command to your Dockerfile to use non-blocking seeding of SecureRandom:
RUN sed -i.bak \
-e "s/securerandom.source=file:\/dev\/random/securerandom.source=file:\/dev\/urandom/g" \
-e "s/securerandom.strongAlgorithms=NativePRNGBlocking/securerandom.strongAlgorithms=NativePRNG/g" \
$JAVA_HOME/lib/security/java.security
The two regular expressions replace file:/dev/random with file:/dev/urandom and NativePRNGBlocking with NativePRNG in the file $JAVA_HOME/lib/security/java.security, which lets Tomcat start up reasonably fast on a VM. I haven't checked whether this also works on non-Alpine-based OpenJDK images, but if the sed command fails, just check the location of java.security inside the container and adapt the path accordingly.
Note: in JDK 11 the path has changed to $JAVA_HOME/conf/security/java.security.
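A related workaround (not part of the answer above, just a commonly used JVM option) is to point the JVM at /dev/urandom at startup instead of editing java.security; assuming a Tomcat-based image whose startup script honors JAVA_OPTS, this can be sketched in the Dockerfile as:
ENV JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
(the extra /./ is the traditional trick to keep the JDK from special-casing the plain /dev/urandom path)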

Start full container in Docker?

According to this github issue it should be possible to start a full container with Upstart, cron etc. with Docker 0.6 or later but how do I do that?
I was expecting that
docker run -t -i ubuntu /sbin/init
would work just like
lxc-start -n ubuntu /sbin/init
and I would get a login screen, but instead it displays nothing. I also tried to access it using ssh, but no luck. I'm using the default ubuntu image from the Docker index.
docker run ubuntu /sbin/init appears to work flawlessly for me with 0.6.6. You won't get a login screen because Docker only manages the process. Instead, you can use docker ps -notrunc to get the full lxc container ID and then use lxc-attach -n <container_id> to run bash in that container as root. sshd isn't installed in the container, so you can't ssh to it.
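If you really do want to ssh in, a rough sketch is to install and start sshd yourself inside the container (Ubuntu package names; you would still need to set a root password or key and publish port 22):
apt-get update && apt-get install -y openssh-server
mkdir -p /var/run/sshd
/usr/sbin/sshd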
You can use the ubuntu-upstart image:
docker run -t -i ubuntu-upstart:14.04 /sbin/init
Although this solution is unfortunately deprecated, it is good enough if you need a full OS container that 'drives' like a normal Ubuntu 12.04, 14.04 or 14.10 (change the :14.04 bit) system today. If no version is specified it defaults to 14.04. I have not used it heavily, and had some issues installing more complicated packages (e.g. dbus!), but it might work for you.
Alas Ubuntu has switched to systemd in more recent releases. Googling reveals that there seems to be ongoing work to make systemd work in a docker container without requiring elevated privileges, but it does not seem to be quite ready for prime-time. Hopefully it will be ready when 16.04 becomes LTS.
Another option is of course to use phusion/baseimage, but it has its own approach to starting services. It seems better suited to minimal multi-process containers.
