I'm configuring a Bamboo build plan to build Docker images, using AWS ECR as the registry. The build plan is something like this:
Pull the latest tag:
docker pull xxx.dkr.ecr.eu-west-1.amazonaws.com/myimage:latest
Build the image with the latest tag:
docker build -t myimage:latest .
Tag the image (necessary for ECR):
docker tag -f myimage:latest xxx.dkr.ecr.eu-west-1.amazonaws.com/myimage:latest
Push the image to the registry:
docker push xxx.dkr.ecr.eu-west-1.amazonaws.com/myimage:latest
Because the build tasks run on a different, fresh build agent/server every time, there is no local cache.
When I don't change anything in the Dockerfile and execute the plan again (on another server), I would expect Docker to use the local cache (coming from the docker pull) and not execute each line again, but it rebuilds the image every time. I was also expecting that when I change something at the bottom of the file, it would use the cache and execute only the last lines, but I'm not sure about this.
Am I getting something wrong, or are there any opinions on this approach?
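For reference, here is the whole plan written as a single script task, as a minimal sketch; the ECR login step and the xxx placeholder values are assumptions (the original plan does not show how authentication is handled):

#!/bin/bash
set -euo pipefail

REGISTRY=xxx.dkr.ecr.eu-west-1.amazonaws.com   # same placeholder as in the question
IMAGE=myimage

# Log in to ECR (assumed step, not shown in the original plan; AWS CLI v2 syntax)
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Pull the previous image so its layers are available locally;
# "|| true" keeps the very first build from failing when :latest does not exist yet
docker pull "$REGISTRY/$IMAGE:latest" || true

# Build, tag for ECR, and push
docker build -t "$IMAGE:latest" .
docker tag "$IMAGE:latest" "$REGISTRY/$IMAGE:latest"
docker push "$REGISTRY/$IMAGE:latest"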
Are you considering using a Squid proxy?
Edit: in case you don't want to go to the official website above, here is a quick setup of a Squid proxy (Debian based):
apt-get install squid-deb-proxy
Then change the Squid configuration to create a larger cache space: open
/etc/squid/squid.conf
and replace #cache_dir ufs /var/spool/squid with cache_dir ufs /var/spool/squid 10000 16 256
and there you go: 10,000 MB worth of cache space.
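If you prefer to script that edit, a one-liner along these lines should do it (just a sketch of the replacement described above, not taken from the original answer):

# uncomment cache_dir and give it a 10,000 MB cache, values as described above
sed -i 's|^#cache_dir ufs /var/spool/squid.*|cache_dir ufs /var/spool/squid 10000 16 256|' /etc/squid/squid.conf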
Then point the proxy address in the Dockerfile. Here are example Dockerfiles with a Squid proxy, for apt-get and yum based distros.
apt-get based distro:
FROM debian
RUN apt-get update -y && apt-get install -y net-tools
RUN echo "Acquire::http::Proxy \"http://$( \
    route -n | awk '/^0.0.0.0/ {print $2}' \
    ):8000\";" > /etc/apt/apt.conf.d/30proxy
RUN echo "Acquire::http::Proxy::ppa.launchpad.net DIRECT;" >> \
    /etc/apt/apt.conf.d/30proxy
CMD ["/bin/bash"]
yum based distro:
FROM centos:centos7
RUN yum update -y && yum install -y net-tools
RUN echo "proxy=http://$(route -n | \
    awk '/^0.0.0.0/ {print $2}'):3128" >> /etc/yum.conf
RUN sed -i 's/^mirrorlist/#mirrorlist/' \
    /etc/yum.repos.d/CentOS-Base.repo
RUN sed -i 's/^#baseurl/baseurl/' \
    /etc/yum.repos.d/CentOS-Base.repo
RUN rm -f /etc/yum/pluginconf.d/fastestmirror.conf
RUN yum update -y
CMD ["/bin/bash"]
Let's say you install the Squid proxy alongside your AWS registry: only the first build would fetch the data from the internet; the rest of the builds (on other servers) should be served from the Squid proxy cache.
This technique is based on the book Docker in Practice, technique 57, titled "Set up a package cache for faster builds".
I don't think there is a cache feature in Docker without any third-party software. Maybe there is and I just don't know it; I'm not sure. Just correct me if I'm wrong.
I have set up a Docker image based on Ubuntu. Can you please tell me how I can install OpenModelica inside that Docker image?
For example, if I wanted to install Node.js in this Docker image I could use:
apt install nodejs
so I need commands like that to install OpenModelica in my Docker image.
P.S.: my Docker image is an Ubuntu image.
I happened to create a Docker image for OpenModelica to debug something, so I might as well add it here. We got this question in the OpenModelica forum as well.
While the answer from @sjoelund.se will stay up to date, this one explains things in a bit more detail.
Dockerfile
FROM ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
# Export DISPLAY, so an X server can display OMEdit
ENV DISPLAY=host.docker.internal:0.0
# Install wget, gnupg, lsb-release
RUN apt-get update \
&& apt install -y wget gnupg lsb-release
# Get the OpenModelica stable version
RUN for deb in deb deb-src; do echo "$deb http://build.openmodelica.org/apt `lsb_release -cs` stable"; done | tee /etc/apt/sources.list.d/openmodelica.list
RUN wget -q http://build.openmodelica.org/apt/openmodelica.asc -O- | apt-key add -
# Install OpenModelica
RUN apt-get update \
&& apt install -y openmodelica
# Install OpenModelica libraries (like all of them)
RUN for PKG in `apt-cache search "omlib-.*" | cut -d" " -f1`; do apt-get install -y "$PKG"; done
# Add non-root user for security
RUN useradd -m -s /bin/bash openmodelicausers
USER openmodelicausers
ENV HOME /home/openmodelicausers
ENV USER openmodelicausers
WORKDIR $HOME
# Return omc version
CMD ["omc", "--version"]
Let's build and tag it:
docker build --tag openmodelica:ubuntubionic .
How to use omc from the docker image
Let's create a small helloWorld.mo Modelica model:
model helloWorld
  Real x(start=1.0, fixed=true);
equation
  der(x) = 2.5*x;
end helloWorld;
and a MOS script which will simulate it, called runHelloWorld.mos
loadFile("helloWorld.mo"); getErrorString();
simulate(helloWorld); getErrorString();
Now we can make our files accessible to the docker container with the -v flag and run our small example with:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
omc runHelloWorld.mos
Note that -v needs an absolute path. I added --rm to clean up.
Using OMEdit with a GUI
I'm using Windows + Docker with WSL2, so in order to get OMEdit running I need an X server installed on my Windows host system. X servers on Windows are not trivial to set up, but I'm using VcXsrv and so far it is working for me. On Linux this is of course much simpler.
I'm using this config to start XLaunch:
<?xml version="1.0" encoding="UTF-8"?>
<XLaunch WindowMode="MultiWindow" ClientMode="NoClient" LocalClient="False" Display="-1" LocalProgram="xcalc" RemoteProgram="xterm" RemotePassword="" PrivateKey="" RemoteHost="" RemoteUser="" XDMCPHost="" XDMCPBroadcast="False" XDMCPIndirect="False" Clipboard="True" ClipboardPrimary="True" ExtraParams="" Wgl="True" DisableAC="True" XDMCPTerminate="False"/>
Once the X server is running you can use OMEdit in nearly the same fashion as you would on a Linux OS; just mount a directory with your files and that's it:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit
You could get some inspiration from the Dockerfiles that are used to generate the OpenModelica docker images. For example: https://github.com/OpenModelica/OpenModelicaDockerImages/tree/v1.16.2
I'm trying to build a Docker image that uses NVIDIA hardware decoding in GStreamer and have encountered a strange problem with building the image.
The build process does not find the NVIDIA CUDA related stuff while running docker build (or nvidia-docker build), but when I spin up the failed image as a container and do those very same steps from within the container, everything works. I even saved the container as an image, which gave me a persistent image that works as intended.
Has anyone experienced a similar problem and can shed some light on it?
Dockerfile:
FROM nvcr.io/nvidia/deepstream:3.0-18.11 AS base
ENV DEBIAN_FRONTEND noninteractive
#install some dependencies. NOTE - not removing apt cache for the MWE
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libdc1394-22 \
tmux \
vim \
libjpeg-dev \
libpng-dev \
libpng12-dev \
cuda-toolkit-10-0 \
python3-setuptools \
python3-pip ninja-build pkg-config gobject-introspection gnome-devel bison flex libgirepository1.0-dev liborc-0.4-dev
RUN pip3 install meson && ldconfig
FROM base
#pull and make gstreamer:
RUN cd /tmp && mkdir gstreamer
RUN git clone https://github.com/GStreamer/gst-build.git /tmp/gstreamer \
&& cd /tmp/gstreamer \
&& git checkout tags/1.16.0 \
&& ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure \
&& ninja -C build \
&& ninja install -C build
Testing:
build and run the container. Inside the container:
$ gst-inspect-1.0 nvdec
No such element or plugin 'nvdec'
$ cd /tmp/gstreamer
$ ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
$ ninja -C build
$ ninja install -C build
$ gst-inspect-1.0 nvdec
Factory Details:
Rank primary (256)
[... all plugin parameters show up]
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstVideoDecoder
+----GstNvDec
EDIT1
The image builds with no errors; only when I try to call GStreamer do I find that it was built without acceleration. I noticed that the major difference in the build process is
meson.build:109:2: Exception: Problem encountered: The nvdec plugin was enabled explicitly, but required CUDA dependencies were not found.
which does not happen when building from within the container.
The lack of a hard error is most likely related to the ninja + meson build system, which looks for compatible packages, reports the exception, but doesn't propagate it and continues as if nothing had gone wrong.
EDIT2
Answering the comment:
To build it and get the error, just build the attached Docker image:
sudo docker build -t gst16:latest . > build.log
This will dump all the output into the build.log file.
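If you want to confirm the failure in that log, one convenient check (my addition, not part of the original instructions) is to grep for the meson message quoted in EDIT1:

# the exception is only reported, not fatal, so search the captured log for it
grep -n "Problem encountered" build.log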
I don't have a Docker registry that I could use for this and the image gets quite big by Docker standards (~8 GB), but producing the working image is fairly simple:
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash
or
sudo nvidia-docker run -ti gst16:latest /bin/bash
which seems to work the same for me. Notice no --rm flag! From within the container:
#check if nvidia decoder plugin is there:
gst-inspect-1.0 nvdec
#fail!
#now build it from within:
cd /opt/gstreamer
./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
ninja -C build
ninja install -C build
gst-inspect-1.0 nvdec
#success reported
Now, to get the image, exit the container (Ctrl+D) and in the host shell:
sudo docker container ls -a to view all containers, including stopped ones
from the gst16:latest entry get the CONTAINER ID and copy it
sudo docker commit <CONTAINER_ID> gst16:manual and after a few seconds you should have the container saved as an image; verify with sudo docker images
run the new image with sudo docker run --runtime=nvidia --rm -ti gst16:manual /bin/bash
from within the container, try gst-inspect-1.0 nvdec again to verify it's working
EDIT3
$ nvidia-docker --version
Docker version 18.09.0, build 4d60db4
I think I found the solution/reason.
Writing it here in case someone finds themselves in a similar situation, plus I hate finding old threads with a similar problem and no answer, or "never mind, I solved it" as the only follow-up.
docker build does not have any ties to the NVIDIA runtime, and GStreamer requires access to the full NVIDIA toolchain in order to build the plugins that need it. This is to be resolved with GStreamer 1.18, but until then there is no way to build GStreamer with the NVIDIA codecs during docker build.
The workaround:
Build an image with all the dependencies.
Run a container of said image using --runtime=nvidia, but don't use the --rm flag.
In the container, build and install GStreamer as you normally would.
Verify with gst-inspect-1.0.
Commit the container as a new image: docker commit <container_name> <temporary_image_name>
Tag the temporary image properly (see the consolidated sketch below).
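Put together, the workaround looks roughly like this on the host; the container name gst16-build and the final tag are just example names, not from the original answer:

# 1. build the image with all dependencies (the gstreamer build will silently lack nvdec)
sudo docker build -t gst16:latest .

# 2. run it with the nvidia runtime and WITHOUT --rm, then build gstreamer inside
sudo docker run --runtime=nvidia --name gst16-build -ti gst16:latest /bin/bash
#    (inside the container: ./setup.py ... && ninja -C build && ninja install -C build,
#     then verify with gst-inspect-1.0 nvdec and exit)

# 3. commit the container as a new image and tag it properly
sudo docker commit gst16-build gst16:manual
sudo docker tag gst16:manual gst16:1.16.0-nvdec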
I'm trying to create a Docker image from a pretty large installer binary (300+ MB). I want to add the installer to the image, install it, and delete the installer. This doesn't seem to be possible:
COPY huge-installer.bin /tmp
RUN /tmp/huge-installer.bin
RUN rm /tmp/huge-installer.bin # <- has no effect on the image size
Using multiple build stages doesn't seem to solve this, since I need to run the installer in the final image. If I could execute the installer directly from a previous build stage, without copying it, that would solve my problem, but as far as I know that's not possible.
Is there any way to avoid including the full weight of the installer in the final image?
I ended up solving this by using the built-in HTTP server in Python to make the project directory available to the image over HTTP.
Inside the Dockerfile, I can run commands like this, piping scripts directly to bash using curl:
RUN curl "http://127.0.0.1:${SERVER_PORT}/installer-${INSTALLER_VERSION}.bin" | bash
Or save binaries, run them and delete them in one step:
RUN curl -O "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
./binary-${INSTALLER_VERSION}.bin && \
rm binary-${INSTALLER_VERSION}.bin
I use a Makefile to start the server and stop it after the build, but you can use a build script instead.
Here's a Makefile example:
SHELL := bash
IMAGE_NAME := app-test
VERSION := 1.0.0
SERVER_PORT := 8580
.ONESHELL:
.PHONY: build
build:
# Kills the HTTP server when the build is done
function cleanup {
pkill -f "python3 -m http.server.*${SERVER_PORT}"
}
trap cleanup EXIT
# Starts a HTTP server that makes the contents of the project directory
# available to the image
python3 -m http.server -b 127.0.0.1 ${SERVER_PORT} &>/dev/null &
sleep 1
EXTRA_ARGS=""
# Allows skipping the build cache by setting NO_CACHE=1
if [[ -n $$NO_CACHE ]]; then
EXTRA_ARGS="--no-cache"
fi
docker build $$EXTRA_ARGS \
--network host \
--build-arg SERVER_PORT=${SERVER_PORT} \
-t ${IMAGE_NAME}:latest \
.
docker tag ${IMAGE_NAME}:latest ${IMAGE_NAME}:${VERSION}
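For context, the Dockerfile side of this setup might look roughly like the sketch below; the base image is arbitrary, the ARG names mirror the build args above, and INSTALLER_VERSION is assumed to be passed as an extra --build-arg (it is not shown in the Makefile). Reaching 127.0.0.1 during the build relies on --network host from the Makefile.

FROM debian:bullseye-slim

# passed in via --build-arg; SERVER_PORT comes from the Makefile above,
# INSTALLER_VERSION is an assumed additional build arg
ARG SERVER_PORT
ARG INSTALLER_VERSION

RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# fetch, run and delete the installer in a single layer,
# so the binary never ends up in the image
RUN curl -fO "http://127.0.0.1:${SERVER_PORT}/binary-${INSTALLER_VERSION}.bin" && \
    chmod +x "binary-${INSTALLER_VERSION}.bin" && \
    "./binary-${INSTALLER_VERSION}.bin" && \
    rm "binary-${INSTALLER_VERSION}.bin"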
I think the best way is to download the binary from a website and then run it:
RUN wget -O /tmp/huge-installer.bin http://myweb/huge-installer.bin && chmod +x /tmp/huge-installer.bin && /tmp/huge-installer.bin && rm /tmp/huge-installer.bin
This way the image layer will not contain the binary you downloaded.
I didn't test it thoroughly, but wouldn't such an approach be viable? (Besides LinPy's answer, which is way easier if you have the possibility to just do it that way.)
Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /tmp/entrypoint.sh
RUN \
echo "I am an image that can run your huge installer binary!" \
&& echo "I will only function when you give it to me as a volume mount."
ENTRYPOINT [ "/tmp/entrypoint.sh" ]
entrypoint.sh:
#!/bin/sh
/tmp/your-installer # install your stuff here
while true; do
echo "installer finished, commit me now!"
sleep 5
done
Then run:
$ docker build -t foo-1 .
$ docker run --rm --name foo-1 -d -v $(pwd)/your-installer:/tmp/your-installer foo-1
$ docker logs -f foo-1
# once it echoes "commit me now!", run the next command
$ docker commit foo-1 foo-2
$ docker stop foo-1
Since the installer was only mounted as a volume, the image foo-2 should not contain it anymore. You could also go and build another Dockerfile based on foo-2 to change the entrypoint, for example.
Cf. docker commit
The question is quite clear:
How do I start a complete desktop environment (KDE, Xfce, GNOME, it doesn't matter) in a remote Docker container?
I have been digging around the internet and there are lots of questions on related topics, but not this one; they are all about how to run a single GUI application, not a full desktop.
What I found out so far (roughly sketched in the snippet below):
Run Xvfb
Somehow run e.g. Xfce in that framebuffer
Allow x11vnc to share that running X environment
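As a sketch, assuming the xvfb, xfce4 and x11vnc packages are installed, this is roughly what I mean by those three steps (not a tested recipe):

# 1. start a virtual framebuffer X server on display :1
Xvfb :1 -screen 0 1024x768x16 &

# 2. start the Xfce session on that display
DISPLAY=:1 startxfce4 &

# 3. share the running X display over VNC (the password is just an example)
x11vnc -display :1 -forever -passwd 123456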
But I'm stuck here, always getting errors like:
... (EE) Invalid screen configuration 1024x768 for -screen 0
... Cannot open /dev/tty0 (No such file or directory)
Could you give some Dockerfile lines in order to reach the goal?
That is what I was looking for: the simplest form of a desktop in Docker:
FROM ubuntu
RUN apt-get update
RUN apt-get install xfce4 -y
RUN apt-get install xfce4-goodies -y
RUN apt-get purge -y pm-utils xscreensaver*
RUN apt-get install wget -y
EXPOSE 5901
RUN wget -qO- https://dl.bintray.com/tigervnc/stable/tigervnc-1.8.0.x86_64.tar.gz | tar xz --strip 1 -C /
RUN mkdir ~/.vnc
RUN echo "123456" | vncpasswd -f >> ~/.vnc/passwd
RUN chmod 600 ~/.vnc/passwd
CMD ["/usr/bin/vncserver", "-fg"]
Unfortunately I could not get it sorted out with x11vnc and Xvfb, but TigerVNC turned out much better.
This sample generates a container with an Xfce GUI and runs vncserver with the password 123456. There is no need to overwrite ~/.vnc/xstartup manually, because TigerVNC starts up an X server by default!
To run the server:
sudo docker run --rm -dti -p 5901:5901 3ab3e0e7cb
To connect there with vncviewer:
vncviewer -AutoSelect 0 -QualityLevel 9 -CompressLevel 0 192.168.1.100:5901
Also, you don't need to care about the screen resolution, because by default it will resize to fit your screen.
You may also encounter an issue with ipc_channel_posix (Chrome and other browsers will not work properly); to eliminate this, run the container with shared memory:
docker run -d --shm-size=2g --privileged -p 5901:5901 image-name
x11docker allows you to run desktop environments as well as single GUI applications in Docker.
Regarding "Could you give some Dockerfile lines in order to reach the goal?": there are example desktop images on Docker Hub.
x11docker does a lot of setup to keep container isolation and provides some additional options like hardware acceleration or pulseaudio sound. Example:
x11docker --desktop x11docker/lxde
x11docker also supports network setups with SSH, VNC and HTML5
Example for SSH setup with xpra:
read Xenv < <(x11docker --xdummy --display=30 x11docker/lxde pcmanfm)
echo $Xenv && export $Xenv
# replace "start" with "start-desktop" to forward a desktop environment
xpra start :30 --use-display --start-via-proxy=no
From the client system, connect with:
xpra attach ssh:HOSTNAME:30 # replace HOSTNAME with IP or host name of ssh server
Without x11docker:
A quite short setup using Xephyr as a nested X server on the host is:
Xephyr :1
docker run -v /tmp/.X11-unix/X1:/tmp/.X11-unix/X1:rw \
-e DISPLAY=:1 \
x11docker/xfce
A short Dockerfile with Xfce desktop:
FROM debian:stretch
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends xfce4 dbus-x11
CMD startxfce4
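Assuming you save that Dockerfile as-is, building it and running it against the Xephyr display from the snippet above could look like this (the my-xfce tag is just an example name):

# nested X server on the host, providing display :1
Xephyr :1 &

# build the minimal Xfce image from the Dockerfile above
docker build -t my-xfce .

# run it against the Xephyr display via the X socket
docker run -v /tmp/.X11-unix/X1:/tmp/.X11-unix/X1:rw \
    -e DISPLAY=:1 \
    my-xfce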
I want to run my unit tests against the latest versions of PHP and Node, which means I need both installed into one image for it to work with Bitbucket Pipelines.
What I've been doing until now is to pick one or the other as my base, and then manually install the other. e.g., I've started with php:5.6-fpm as my base, and then installed Node:
# Dockerfile
FROM php:5.6-fpm
RUN docker-php-ext-install bcmath
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y git mercurial unzip nodejs
RUN npm set progress=false
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php -r "if (hash_file('SHA384', 'composer-setup.php') === 'e115a8dc7871f15d853148a7fbac7da27d6c0030b848d9b3dc09e2a0388afed865e6a3d6b3c0fad45c48e2b5fc1196ae') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
RUN php composer-setup.php --install-dir=/usr/local/bin --filename=composer
RUN php -r "unlink('composer-setup.php');"
Is there any way to utilize both PHP and Node for my image, and then install some stuff on top of that (e.g. Composer and Yarn)?
You could create an image and commit it locally on your machine with:
docker commit <container-id> image-name:tagname
From: https://docs.docker.com/engine/reference/commandline/commit/
Then later use this image in a new Dockerfile with FROM image-name:tagname.
Everything you add to the new Dockerfile will be stacked on top of the image you created with PHP and Node.js.
Sometimes you can create several layers of images that build on each other with different processes and functions; a very good reference is: https://hub.docker.com/u/million12/
So you can create a base image with PHP, then another one based on the PHP image with Node.js installed, and then another with Composer, as in the sketch below.
If you wish to export your images from your local machine, you should register on Docker Hub and push to it, or to an alternative service like quay.io.
Hope that answered your question.
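A Dockerfile-only sketch of that stacking idea, based on the question's own php:5.6-fpm example plus Node, Composer and Yarn; the simplified Composer install and the Yarn-via-npm step are assumptions, not from the original answer:

# stage everything on top of the PHP base image, as in the question
FROM php:5.6-fpm

RUN docker-php-ext-install bcmath

# Node.js (same NodeSource setup script as in the question)
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y git mercurial unzip nodejs \
    && npm set progress=false

# Composer (simplified install, without the checksum verification from the question)
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Yarn via npm (assumed; the answer does not specify how to install it)
RUN npm install -g yarn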