I'm trying to create a Docker image with a /dev/net/tun device so that the image can be used across Linux, Mac and Windows host machines. The device does not need access to the host's network interface.
Note that passing --device /dev/net/tun:/dev/net/tun to docker run is undesirable because this only works on Linux.
Once the container is started, I can manually add the device by running:
$ sudo mkdir /dev/net
$ sudo mknod /dev/net/tun c 10 200
$ sudo ip tuntap add mode tap tap
but when I add these lines to the Dockerfile, it results in an error:
Step 35/46 : RUN mkdir /dev/net
---> Running in 5475f2e4b778
Removing intermediate container 5475f2e4b778
---> c6f8e2998e1a
Step 36/46 : RUN mknod /dev/net/tun c 10 200
---> Running in fdb0ed813cdb
mknod: /dev/net/tun: No such file or directory
The command '/bin/sh -c mknod /dev/net/tun c 10 200' returned a non-zero code: 1
I believe the crux here is creating a filesystem node from within a Docker build step. Is this possible?
The /dev directory is special, and Docker build steps cannot really put anything there. That is also mentioned in an answer to question 56346114.
A device node in /dev isn't a file with data in it; it is a placeholder, an address, a pointer to driver code in kernel memory that does something when accessed. Such driver code in memory is not something a Docker image can hold.
I got device creation working in a container by putting your command-line code in an .sh entrypoint script that wraps the app we really want to run.
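A minimal sketch of such a wrapper, assuming the container is started with enough privileges to create and use the device (for example --cap-add=NET_ADMIN or --privileged); the script name and the way the real app is passed in are illustrative:

#!/bin/sh
# entrypoint.sh -- create the TUN device node at container start, then run the real app.
mkdir -p /dev/net
[ -e /dev/net/tun ] || mknod /dev/net/tun c 10 200
chmod 600 /dev/net/tun
# Hand control to the application given as the container command.
exec "$@"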
I managed to work around this by programmatically creating the TUN device in the software that needs it (which is mostly unit tests). In the setup of the program we can create a temporary file node with major/minor code 10/200:
// Requires <sys/stat.h> for mknod(), <sys/sysmacros.h> for makedev(),
// <net/if.h> for IFNAMSIZ, and <stdio.h>/<stdlib.h> for snprintf()/rand().
// Create a random temporary filename. We are not using tmpfile() or the
// usual suspects because we need to create the temp file using mknod(),
// below.
snprintf(tmp_filename_, IFNAMSIZ, "/tmp/ect_%d_%d", rand(), rand());
// Create a temporary file node for use as a TUN interface.
// Device 10, 200 is the device code for a TAP/TUN device.
// See https://www.kernel.org/doc/Documentation/admin-guide/devices.txt
int result = mknod(tmp_filename_, S_IFCHR | 0644, makedev(10, 200));
if (result < 0) {
  perror("Failed to make temporary file");
}
ASSERT_GE(result, 0);
and then in the tear-down of the program we close and delete the temporary file.
One remaining issue is that the program only works when run as the root user, because otherwise it lacks the cap_net_admin and cap_net_raw capabilities. That is another annoyance that can be worked around.
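Two hedged ways around the capability problem, depending on whether you control the docker run line or only the test binary (the image and binary names are illustrative):

# Grant the capabilities to the container at start-up...
docker run --cap-add=NET_ADMIN --cap-add=NET_RAW my-image
# ...or grant them to the binary itself so it can run as a non-root user.
sudo setcap cap_net_admin,cap_net_raw+ep ./my_test_binary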
Related
Hi, I am trying to load data into Cassandra in Docker. Unfortunately, I can't make it work. I am pretty sure the path is correct, as I copied and pasted it directly from the properties section. May I know if there is an alternative way to solve this?
P.S. I am using Windows 11 and the latest Cassandra 4.1.
cqlsh:cds502_project> COPY data (id)
... FROM 'D:\USM\Data Science\CDS 502 Big data storage and management\Assignment\Project\forest area by state.csv'
... WITH HEADER = TRUE;
Using 7 child processes
Starting copy of cds502_project.data with columns [id].
Failed to import 0 rows: OSError - Can't open 'D:\\USM\\Data Science\\CDS 502 Big data storage and management\\Assignment\\Project\\forest area by state.csv' for reading: no matching file found, given up after 1 attempts
Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s
0 rows imported from 0 files in 0.246 seconds (0 skipped).
Above is my code and the result. I followed https://www.geeksforgeeks.org/export-and-import-data-in-cassandra/ exactly, and it works when I create the data inside Docker, export it, and re-import it, but it does not work when I use external data.
I also noticed that the CSV file I exported using Cassandra in Docker is missing on my laptop but can be accessed from Docker.
The behaviour you are observing is what is expected from Docker: cqlsh inside the container only sees the container's filesystem, not your Windows drive. Kubernetes has cp commands that copy data from outside to inside a container and vice versa, and Docker has the equivalent docker cp. You can either use that to move the CSV into the container, or push your CSV into the container by baking it into a Docker image.
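A minimal sketch of the docker cp route, assuming the Cassandra container is named cassandra and using an arbitrary target path inside the container:

# Copy the CSV from the Windows host into the running container.
docker cp "D:\USM\Data Science\CDS 502 Big data storage and management\Assignment\Project\forest area by state.csv" cassandra:/tmp/forest_area.csv
# Run the COPY against the path that now exists inside the container.
docker exec -it cassandra cqlsh -e "COPY cds502_project.data (id) FROM '/tmp/forest_area.csv' WITH HEADER = TRUE;"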
You need to leverage a Docker bind mount in order to access local files within your container: docker run -it -v <host-path>:<container-path> ... (see the sketch after the references below).
See references below:
https://www.digitalocean.com/community/tutorials/how-to-share-data-between-the-docker-container-and-the-host
https://www.docker.com/blog/file-sharing-with-docker-desktop/
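A minimal sketch with a bind mount, assuming the official cassandra image; the host folder D:\data and the container path /import are illustrative:

# Start Cassandra with the host folder containing the CSV mounted into the container.
docker run -d --name cassandra -v "D:\data:/import" cassandra:4.1
# Then reference the mounted path inside cqlsh instead of the Windows path:
docker exec -it cassandra cqlsh -e "COPY cds502_project.data (id) FROM '/import/forest area by state.csv' WITH HEADER = TRUE;"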
I want to build on top of a Windows Docker container by installing a couple of programs. The files total 0.5 GB, and I want to keep the layers as small as possible. I am hoping I can run the setup files from the build context and then have the build context swept away at the end, so I don't have a needless copy of the source files for the setup.exe embedded in my container layers. However, I have not found an example where this is the case. Instead I mostly see people run a COPY command to a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, because the COPY command creates a new layer when it's done?
I don't know if the container can see the build-context directly. I was hoping for some magical folder filled with the build-context files so I could run a script using it, but haven't found anything.
It seems like the alternative is to create a private file server and perform a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make the files more available to others who need to rerun the build, but I'm not convinced we'll need to rerun it. It's not likely to change, as the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will get called once every couple of years, if ever.
So are these my two options?
Make a container with needless copies of source files embedded within
Host the files on a private file server and download/install/remove them
Or am I missing another option or point about how the containers work?
It's a long shot, as Windows is tricky with its file system, but you could do it this way:
In your Dockerfile, use a COPY command, run the installer, then RUN del ... to remove the installation files
Build your image docker build -t my-large-image:latest .
Run your image docker run --name my-large-container my-large-image:latest
Stop the container
Export your container filesystem docker export my-large-container > my-large-container.tar
Import the filesystem to a new image cat my-large-container.tar | docker import - my-small-image
The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with a Windows container, sorry.
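Put together, the whole sequence looks roughly like this (image and container names taken from the steps above; the image shrinks because docker export/import flattens the filesystem into a single layer):

# Build, run once, then flatten the container filesystem into a single-layer image.
docker build -t my-large-image:latest .
docker run --name my-large-container my-large-image:latest
docker stop my-large-container
docker export my-large-container > my-large-container.tar
cat my-large-container.tar | docker import - my-small-image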
I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.
# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
Remove-Item -Path C:/VS_RemoteTools_x64.exe -Force;
But otherwise, I don't think you can mount a custom volume while it's being built.
I didn't find a satisfactory answer to this. Docker seems designed only for the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).
I really wish there were a way to have files hosted by the Docker daemon in some way, e.g. if it acted like a temporary server that you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.
Taking this route made my container about a GB smaller.
I'm using VS Code with DevContainer extension to run inside a Docker container.
It works great, but every time either VS Code or the Dockerfile is updated and I have to rebuild the container, it takes a few minutes to install the extensions I need inside the container.
[218513 ms] Start: Run in container: cd /root/.vscode-server/bin/e5a624b788d92b8d34d1392e4c4d9789406efe8f; export VSCODE_AGENT_FOLDER=/root/.vscode-server; /root/.vscode-server/bin/e5a624b788d92b8d34d1392e4c4d9789406efe8f/server.sh --disable-telemetry --extensions-download-dir /root/.vscode-server/extensionsCache --install-extension ms-python.python --install-extension ms-python.vscode-pylance --force
[537378 ms] Installing extensions...
Installing extension 'ms-python.python' v2020.12.424452561...
Installing extension 'ms-python.vscode-pylance' v2020.12.2...
Extension 'ms-python.vscode-pylance' v2020.12.2 was successfully installed.
Extension 'ms-python.python' v2020.12.424452561 was successfully installed.
[537379 ms]
[537379 ms] Start: Run in container: ls /root/.vscode-server/extensionsCache || true
[537387 ms] ms-python.python-2020.12.424452561
ms-python.vscode-pylance-2020.12.2
ms-toolsai.jupyter-2020.12.414227025
I have 2 questions about this:
Is it possible to measure what is taking the time? Is it the download or the install (or both) that takes that long?
If it is the download that takes most of the time, is there a way to cache the extensions?
There are multiple solutions to speed up container initialization:
One way is to use a Docker volume and mount it at $HOME/.vscode-server. In that case VS Code will reuse the server and extensions already installed in the volume.
The other way is to mount a local folder into the dev container as the $HOME folder. It may slow down the overall performance of the container, but we also keep a permanent session, e.g. bash history, Azure session, etc.
The second solution currently has some issues around extension installation (see: the issue related to the .installExtensionsMarker file), so for the moment I would recommend using Docker volumes.
For more detail on how to configure the volumes, see the following section of the Advanced Container Configuration document:
Avoiding extension reinstalls on container rebuild
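A minimal sketch of the volume approach, assuming the container user is root (so the server lives under /root/.vscode-server) and a hypothetical volume name; in a dev container this mount is normally declared in devcontainer.json, whose "mounts" entries take the same source/target/type string:

# Create a named volume once; its contents survive container rebuilds.
docker volume create vscode-server
# Mount it over the VS Code server folder so extensions are downloaded and installed only once.
docker run -it --mount source=vscode-server,target=/root/.vscode-server,type=volume my-dev-image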
I would also recommend using a final image that has already been built and pushed to a container registry, to avoid installing Python or other packages during a container rebuild.
According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
    name = "helloworld_image",
    importpath = "github.com/nictuku/helloworld",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but running it fails:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a "docker load" on the .tar content but it doesn't seem to like that format.
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container platform if you want to run the image in Docker.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line. Ideally we would be able to just say that the Docker image needs a Linux binary (so there would be no need for the --cpu command-line flag), but this is still a work in progress in Bazel.
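A minimal sketch of the cross-compiled build, using the target name from the question; --cpu=k8 selects a Linux x86-64 build, and bazel run then loads the image into the local Docker daemon (as the log above shows):

# Build the image for linux/amd64 instead of the OSX host.
bazel build --cpu=k8 :helloworld_image
# Load and tag it in Docker (Docker for Mac runs containers in a Linux VM).
bazel run --cpu=k8 :helloworld_image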
Contents of the Dockerfile:
FROM XYZ
MAINTAINER ABC
RUN echo "hello world"
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd","-D","FOREGROUND"]
When I try to build an image from this file, I see the following:
permission denied Removing intermediate container
when docker tries to execute the RUN command
Observations:
This error occurs irrespective of the content of the RUN command.
Removing the RUN command makes the build complete without issues.
I am able to build from the same Dockerfile and base image on another host.
"docker info" produced similar information on both machines.
How can I debug this further to see what the issue is?
Update (in response to the comments below):
I have been able to build the same image (and others) on this instance before
The issue occurred irrespective of the base image used
The issue was specific to this one instance which is running CentOS
The user I was logged in as was different from the user the daemon was running as (root)
Assuming the issue may have been caused by the user mismatch, I changed to root and tried the command. It went through without issues. Then I changed back to the original user, removed the image, and tried again: it went through again! The original issue is no longer reproducible.
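For anyone hitting something similar, a hedged set of checks matching what turned out to matter here (which user runs the build versus which user runs the daemon):

# Which user am I building as, and am I in the docker group?
whoami
id -nG
# Which user is the Docker daemon running as?
ps aux | grep '[d]ocker'
# Retry the build as root to confirm it is a user/permission mismatch.
sudo docker build .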