Our embedded system product is built on Ubuntu 12.04 with some ancient tools that are no longer publicly available. We keep the tools in our local git repo.
Setting up the build environment for a newcomer is extremely challenging. I would like to set up the build environment in a Docker container, pull the source code onto the host machine, mount the source into the container, and run the build there, so that someone starting fresh doesn't have to endure the challenging setup. Is this a reasonable thing to do?
Here is what I have done so far:
Created a dockerfile to set up the env
# Ubuntu 12.04.5 LTS is the standard platform for development
FROM ubuntu:12.04.5
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
build-essential \
dialog \
autoconf \
automake \
libtool \
libgtk-3-dev \
default-jdk \
bison \
flex \
php5 \
php5-pgsql \
libglib2.0-dev \
gperf \
sqlite3 \
txt2man \
libssl-dev \
libudev-dev \
ia32-libs \
git
ENV PATH="$PATH:toolchain/bin"
The last line (ENV ...) adds the toolchain location to the path. There are also a few more environment variables to set.
On my host machine I have the source pulled into my working directory.
I built the Docker image using:
docker build --tag=myimage:latest .
Then I mounted the source code into the container as a volume using:
docker run -it --volume /path/to/host/code:/path/in/container myimage
All this works: the code is mounted in the container, I land in the container's terminal, and I can see the code. However, I don't see the path to the toolchain that I set in my Dockerfile. I was hoping the path would be set so that I could just call make.
Is this not how it is supposed to work? Is there a better way to do this?
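For what it's worth, a quick sanity check I could run (reusing the image name from above) would be something like:
# check that the ENV values from the Dockerfile survive into a container
docker run --rm myimage /bin/sh -c 'echo $PATH'
# or dump the whole environment, including the extra toolchain variables
docker run --rm myimage env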
I am trying to install GUI-based software called Dragonfly in a container, since the software conflicts with my host OS, RHEL 7. I thought installing it as a Docker container could be a solution, even though I am completely new to Docker. My Dockerfile looks like this:
FROM ubuntu
COPY DragonflyInstaller /Dragonfly/
WORKDIR /Dragonfly/
# Dependent packages for Dragonfly
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Berlin
RUN apt-get update && apt-get install -y apt-utils \
fontconfig \
libxcb1 \
libxcb-glx0 \
x11-common \
x11-apps \
libx11-xcb-dev \
libxrender1 \
libxext6 \
libxkbcommon-x11-0 \
libglu1 \
libxcb-xinerama0 \
qt5-default \
libxcb-icccm4 \
libxcb-image0 \
libxcb-render-util0 \
libxcb-util1 \
freeglut3-dev \
python3-pip \
xauth
CMD ./DragonflyInstaller
After building the corresponding Docker image, it cannot launch Dragonfly's GUI installer window. I am using the following two commands:
xhost +local:docker
sudo docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix dragonfly
I tried various suggestions posted on different forums and, accordingly, various arguments to docker run; however, I get the following two errors every time:
No protocol specified
qt.qpa.xcb: could not connect to display :340.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: minimal, xcb.
Could you please suggest how to resolve this issue?
In fact, I was logging in to my machine with the X2Go remote desktop client, which provides its own desktop after login. I also tried another remote login tool called NoMachine, which does not create its own display or desktop but instead keeps the original desktop for the remote user. When I tried with NoMachine, there were no errors.
So I guess the above two errors are caused by the remote desktop software X2Go.
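A workaround that is sometimes suggested for the "No protocol specified" part is to pass the X authority cookie into the container along with the display; a rough sketch (the cookie file path is arbitrary, and with a nested display such as X2Go's the cookie has to belong to that display):
# copy the current X cookie into a file the container can read
XAUTH=/tmp/.docker.xauth
touch $XAUTH
xauth nlist $DISPLAY | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge -
# pass both the display and the cookie to the container
docker run --rm \
  -e DISPLAY=$DISPLAY \
  -e XAUTHORITY=$XAUTH \
  -v $XAUTH:$XAUTH \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  dragonfly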
I have tried to build a Docker image and found that the PATH variable I set has some issues. A minimal non-working example is:
FROM ubuntu:latest
SHELL ["/bin/bash", "-cu"]
ARG CTAGS_DIR=/root/tools/ctags
# Install common dev tools
RUN apt-get update --allow-unauthenticated \
&& apt-get install --allow-unauthenticated -y git curl autoconf pkg-config zsh
# Compile ctags
RUN cd /tmp \
&& git clone https://github.com/universal-ctags/ctags.git \
&& cd ctags \
&& ./autogen.sh \
&& ./configure --prefix=${CTAGS_DIR} \
&& make -j$(nproc) \
&& make install \
&& rm -rf /tmp/ctags
ENV PATH=$HOME/tools/ctags/bin:$PATH
RUN echo "PATH is $PATH"
RUN which ctags
In the above Dockerfile, the line ENV PATH=$HOME/tools/ctags/bin:$PATH does not work as expected. It seems that $HOME is not correctly expanded. The following two instructions also do not work:
ENV PATH=~/tools/ctags/bin:$PATH
ENV PATH="~/tools/ctags/bin:$PATH"
Only setting the absolute path works:
# the following setting works.
ENV PATH="/root/tools/ctags/bin:$PATH"
I have looked through the Docker reference documentation but cannot find anything about this.
In general, when you're building a Docker image, it's okay to install things into the normal "system" directories. Whatever you're building will be isolated inside the image, and it can't conflict with other tools.
The easiest answer to your immediate question is to arrange things so you don't need to set $PATH.
In the example you give, you can safely use Autoconf's default installation directory of /usr/local. That will almost certainly be empty when you start your image build and only things you install will be there.
RUN ... \
&& ./configure \
&& make \
&& make install
(The Python corollary is to not create a virtual environment for your application; just use the system pip to install things into the default Python library directories.)
Don't expect there to be a home directory. If you have to install in some non-default place, /app is common, and /opt/whatever is consistent with non-Docker Linux practice. Avoid $HOME or ~; they aren't generally well-defined in Docker (unless you go out of your way to make them be).
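For illustration, the ctags example from the question could be rebuilt against the default /usr/local prefix along these lines (the build-dependency list here is an assumption):
FROM ubuntu:latest
# assumed minimal build dependencies for universal-ctags
RUN apt-get update \
    && apt-get install -y git build-essential autoconf automake pkg-config
# install into Autoconf's default /usr/local prefix
RUN cd /tmp \
    && git clone https://github.com/universal-ctags/ctags.git \
    && cd ctags \
    && ./autogen.sh \
    && ./configure \
    && make -j$(nproc) \
    && make install \
    && rm -rf /tmp/ctags
# /usr/local/bin is already on the default PATH, so no ENV PATH line is needed
RUN which ctags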
I have a suite of Jest tests for DynamoDB that use the dynamodb-local instance as explained here, using this dependency. I use a custom-built Docker image, and the tests are executed inside a container based on it.
Here's the Dockerfile
FROM openjdk:8-jre-alpine
RUN apk -v --no-cache add \
curl \
build-base \
groff \
jq \
less \
py-pip \
python openssl \
python3 \
python3-dev \
yarn \
&& \
pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000
I run yarn install for all of my dependencies and then yarn test; at that point, after a long time, it outputs this:
Error
This is the command I am using:
docker run -it --rm -p 8000:8000 -v $(pwd):/data -w /data aws-cli-java8-v15:latest
The tests work completely fine on my own machine, but no matter what project I use or what I include in my Dockerfile, the connection always gets dropped.
I solved the issue; it turns out it has to do with Alpine Linux. Because Alpine uses musl instead of glibc, DynamoDB Local cannot start, and it crashes a few seconds after launch without outputting any error messages. The solution is either to use Oracle JDK on Alpine, which is hard enough given their new license, or to use any other base OS that uses glibc with OpenJDK. You could also try to install glibc on Alpine and link it to your OpenJDK, but that's not a terribly good idea.
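For illustration, a glibc-based variant of the image above might look roughly like this (the base tag and package mapping are assumptions; yarn/node would still have to come from their own apt repositories):
# Debian-based OpenJDK image uses glibc, so DynamoDB Local's native library can load
FROM openjdk:8-jre-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    groff \
    jq \
    less \
    python3 \
    python3-pip \
    && pip3 install --upgrade pip awscli boto3 aws-sam-cli \
    && rm -rf /var/lib/apt/lists/*
EXPOSE 8000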
We are trying to host a TensorFlow object-detection model on GCP.
We maintain the directory structure below before running "gcloud app deploy".
For your convenience I am attaching the configuration files to the question.
We are getting a deployment error, which is mentioned below. Please suggest a solution.
+root
  +object_detection/
  +slim/
  +env
  +app.yaml
  +Dockerfile
  +requirement.txt
  +index.html
  +test.py
Dockerfile
FROM gcr.io/google-appengine/python
LABEL python_version=python2.7
RUN virtualenv --no-download /env -p python2.7
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Various Python and C/build deps
RUN apt-get update && apt-get install -y \
wget \
build-essential \
cmake \
git \
unzip \
pkg-config \
python-dev \
python-opencv \
libopencv-dev \
libav-tools \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libjasper-dev \
libgtk2.0-dev \
python-numpy \
python-pycurl \
libatlas-base-dev \
gfortran \
webp \
python-opencv \
qt5-default \
libvtk6-dev \
zlib1g-dev \
protobuf-compiler \
python-pil python-lxml \
python-tk
# Install Open CV - Warning, this takes absolutely forever
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
RUN export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/app/slim
CMD exec gunicorn -b :$PORT UploadTest:app
requirement.txt
Flask==0.12.2
gunicorn==19.7.1
numpy==1.13.1
requests==0.11.1
bs4==0.0.1
nltk==3.2.1
pymysql==0.7.2
xlsxwriter==0.8.5
Pillow==4.2.1
pytesseract==0.1
opencv-python>=3.0
matplotlib==2.0.2
tensorflow==1.3.0
lxml==4.0.0
app.yaml
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT UploadTest:app
threadsafe: true
runtime_config:
python_version: 2
After all this, I set up the Google Cloud environment with gcloud init.
Then I run gcloud app deploy.
I get the error below while deploying the solution.
Error:
Step 10/12 : RUN protoc /app/object_detection/protos/*.proto --python_out=/app/.
---> Running in 9b3ec9c43c2d
/app/object_detection/protos/anchor_generator.proto: File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file. Note that the proto_path must be an exact prefix of the .proto file names -- protoc is too dumb to figure out when two paths (e.g. absolute and relative) are equivalent (it's harder than you think).
The command '/bin/sh -c protoc /app/object_detection/protos/*.proto --python_out=/app/.' returned a non-zero code: 1
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:a4a83be9b2fb61452e864ecf1bcfca99d1845499ef9500ae2905cea0ea593769" failed: exit status 1
----------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.app.deploy) Cloud build failed. Check logs at https://console.cloud.google.com/gcr/builds/4dba3217-b7d6-4341-b28e-09a9dad45c18?
The directory object_detection/protos is present and all the necessary files are in it, yet I am still getting the deployment error. Please suggest what to change in the Dockerfile to deploy successfully.
My assumption: GCP is not able to figure out the path for protoc. Maybe I have to alter something in the Dockerfile, but I am not able to figure out the solution. Please answer.
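If my reading of the error is right, the protoc step may need to run from /app with relative paths so that they fall under a matching proto_path; this is only a guess based on the message above:
# guess: run protoc from /app so the .proto paths are relative to the proto_path
RUN cd /app \
    && protoc object_detection/protos/*.proto --python_out=.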
NB: This setup runs well on my local machine, but not on GCP.
I have the following Dockerfile for a PHP runtime based on the official php image.
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache \
&& php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/local/bin/composer
I am having trouble running composer install.
I am guessing that the Dockerfile runs before a volume is mounted, because I receive a "composer.json file not found" error if I add:
...
&& mv composer.phar /usr/local/bin/composer \
&& composer install
to the above.
But, adding the following property to docker-compose.yml:
command: sh -c "composer install && composer require drush/drush"
seems to terminate the container after the command finishes executing.
Is there a way to:
wait for a volume to become mounted
run composer install using the mounted composer.json file
have the container keep running afterwards?
I generally agree with Chris's answer for local development. I am going to offer something that, combined with a recent Docker feature, may set a path for doing both local development and eventual production deployment with the same image.
Let's first start with the image that we can build in a manner that can be used for either local development or deployment somewhere that contains the code and dependencies. In the latest Docker version (17.05) there is a new multi-stage build feature that we can take advantage of. In this case we can first install all your Composer dependencies to a folder in the build context and then later copy them to the final image without needing to add Composer to the final image. This might look like:
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor
This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.
Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239; a rough sketch follows below. Here, you just do docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependencies built inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac).
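A rough sketch of such a composer service (the service name and volume path are assumptions; the linked gist is the fuller example):
# docker-compose.yml (sketch)
version: "3"
services:
  composer:
    image: composer:1.4
    volumes:
      - .:/app
    working_dir: /app
With that in place, docker-compose run --rm composer install runs Composer inside a Linux container rather than on the host.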
The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:
docker run --rm --interactive --tty \
--volume $PWD:/app:rw,cached \
--volume $SSH_AUTH_SOCK:/ssh-auth.sock \
--env SSH_AUTH_SOCK=/ssh-auth.sock \
--volume $COMPOSER_HOME:/composer \
composer:1.4 install --ignore-platform-reqs --no-scripts
To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:
Platform-specific dependencies will be built correctly for the container (Linux architecture vs. Windows or Mac).
No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.
I've been down this rabbit hole for 5 hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume.
#docker-compose.yml
volumes:
- .:/srv/app/
- /srv/app/vendor/
This maps the current project directory but excludes its vendor subdirectory. Don't forget the trailing slash!
Now you can easily run composer install in the Dockerfile, and when Docker mounts your volume it will ignore the vendor directory.
If this is for a general development environment, then the intention is not really ideal, because it couples the application to the Docker configuration.
Just run composer install separately by some other means (there is an image available for this on Docker Hub, which allows you to just run docker run -it --rm -v $(pwd):/app composer/composer install).
But yes, it is possible; you would need the last line in the Dockerfile to be bash -c "composer install && php-fpm".
wait for a volume to become mounted
No, volumes cannot be mounted during a docker build. You can copy the source code in, though.
run composer install using the mounted composer.json file
No, see above response.
have the container keep running after
Yes, you would need to execute php-fpm --nodaemonize (which is a long-running process, hence it won't terminate).
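As a sketch of what that last line could look like (assuming composer.json only becomes available through the mounted volume at run time):
# install dependencies against the mounted code, then keep php-fpm in the foreground
CMD ["bash", "-c", "composer install && exec php-fpm --nodaemonize"]
Using exec lets php-fpm replace the shell as the container's main process, so it receives stop signals directly.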
To execute a command after you have mounted a volume on a Docker container:
Assuming that you are fetching dependencies from a public repo:
docker run --interactive -t --privileged --volume $(pwd):/xyz --workdir /xyz composer /bin/sh -c 'composer install'
For fetching dependencies from a private git repo you would need to copy/create SSH keys; I guess that should be out of scope of this question.