Jest DynamoDB connection gets refused inside of a Docker container

I have a suite of Jest tests for DynamoDB that use the dynamodb-local instance as explained here, using this dependency. I run the tests inside a container built from a custom Docker image.
Here's the Dockerfile
FROM openjdk:8-jre-alpine
RUN apk -v --no-cache add \
curl \
build-base \
groff \
jq \
less \
py-pip \
python \
openssl \
python3 \
python3-dev \
yarn \
&& \
pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000
I yarn install all of my dependencies and then run yarn test; after a long time the tests fail with a connection-refused error.
This is the command I am using:
docker run -it --rm -p 8000:8000 -v $(pwd):/data -w /data aws-cli-java8-v15:latest
The tests work completely fine on my own machine, but no matter what project I use or what I include in my Dockerfile, the connection always gets dropped.

I solved the issue; it turns out it has to do with Alpine Linux. Because Alpine uses musl instead of glibc, DynamoDB Local can't start: it crashes a few seconds after being executed without outputting any error messages. The solution is either to use Oracle JDK on Alpine, which is hard enough given their new license, or to use any other OS that ships glibc together with OpenJDK. You could also try to install glibc on Alpine and link it to your OpenJDK, but that's not a terribly good idea.
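For reference, a minimal sketch of that base-image swap, assuming the Debian-based openjdk:8-jre-slim tag is acceptable (package names differ between apk and apt, and yarn still needs its own apt repository on Debian, so the tooling install here is only indicative):
# Debian ships glibc, so DynamoDB Local's native libraries can load
FROM openjdk:8-jre-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
jq \
less \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/* \
&& pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000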

Related

Issues running a Docker container with a GUI application in host network mode

I am trying to create a docker container with a ROS install and a simulation setup to streamline the process for people joining the project later.
When I run rviz this way, I get the rviz window showing up on my host just fine, as expected, following this ROS tutorial:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" xrf-robot-repo rviz
ROS Master isn't running, so the error output from this command is expected.
Now my issue is that when I run my container in host network mode (--net=host), the rviz window does not show up anymore. Here's what I run:
sudo docker run -it --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" --net=host xrf-robot-repo rviz
I don't think the errors it prints have anything to do with the GUI window not showing up.
I have no idea why the GUI window does not show up and was hoping for some guidance here. I would guess it has something to do with the different network mode affecting how the X11 forwarding works, but I am not sure how to look into this further.
Here's what my Dockerfile looks like that I used to build the image in case it may be helpful:
FROM osrf/ros:melodic-desktop-full
SHELL ["/bin/bash", "-c"]
RUN apt-get update && apt-get install -y --no-install-recommends \
git apt-utils python3-catkin-tools \
&& rm -rf /var/lib/apt/lists/*
RUN source ./ros_entrypoint.sh && git clone https://github.com/RumailM/xrf-robot-stack
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin_init_workspace
RUN apt-get update
ARG DEBIAN_FRONTEND=noninteractive
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && rosdep install \
--from-paths src --ignore-src -r -y
RUN source ./ros_entrypoint.sh && cd xrf-robot-stack && catkin build
The reason I need to use host network mode is that I would like the host to be able to communicate with the rosmaster node and any other nodes within the container. I also do not know beforehand which nodes may exist outside the container or which ports they will communicate on, so the obvious answer of forwarding only the ports that I will use will not work (the ports may change at runtime). Forwarding large ranges of ports does not seem viable either.
Any guidance is appreciated!
You should add the --privileged option to the docker run command. This is related to this issue: Qt applications and network host.
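In other words, the command from the question would become something like:
sudo docker run -it --privileged --env="DISPLAY" --env="QT_X11_NO_MITSHM=1" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" --net=host xrf-robot-repo rviz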
I ran into a similar issue and was able to resolve it by following this question on ROS Answers.
Also, to be able to run rviz properly, I needed the Docker container to access my graphics card drivers (NVIDIA in my case). So I needed to add -e NVIDIA_VISIBLE_DEVICES=all, -e NVIDIA_DRIVER_CAPABILITIES=all, and --runtime nvidia to the docker run command.
This was the final run command:
docker run -it --rm \
--privileged \
--network host \
-e NVIDIA_VISIBLE_DEVICES=all \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--env="DISPLAY" \
--env="QT_X11_NO_MITSHM=1" \
--volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
--name "$CONTAINER_NAME" \
--runtime nvidia \
xrf-robot-repo \
/bin/bash
Hope this helps

Use Docker to containerize and build an old application

Our embedded system product is built on Ubuntu 12.04 with some ancient tools that are no longer available. We have the tools in our local Git repo.
Setting up the build environment for a newcomer is extremely challenging. I would like to set up the build environment in a Docker container, download the source code onto a host machine, mount the source code into the container, and execute the build so that someone starting fresh doesn't have to endure the challenging setup. Is this a reasonable thing to do?
Here is what I have done so far:
Created a Dockerfile to set up the environment
# Ubuntu 12.04.5 LTS is the standard platform for development
FROM ubuntu:12.04.5
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
build-essential \
dialog \
autoconf \
automake \
libtool \
libgtk-3-dev \
default-jdk \
bison \
flex \
php5 \
php5-pgsql \
libglib2.0-dev \
gperf \
sqlite3 \
txt2man \
libssl-dev \
libudev-dev \
ia32-libs \
git
ENV PATH="$PATH:toolchain/bin"
The last line (ENV ...) sets the path to the toolchain location. There are also a few more environment variables to set.
On my host machine I have my source pulled into my working directory.
I built the Docker image using:
docker build --tag=myimage:latest .
And then I mounted the source code as a volume to the container using:
docker run -it --volume /path/to/host/code:/path/in/container myimage
All this works: it mounts the code in the container, I am in the container's terminal, and I can see the code. However, I don't see the path I set to the toolchain in my Dockerfile. I was hoping the path would get set so that I could call make.
Is this not how it is supposed to work? Is there a better way to do this?
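Not from the original post, but one way to narrow this down: ENV values are baked into the image and persist into containers, so you can print the PATH the container actually sees; also note that a relative entry like toolchain/bin only resolves against the current working directory. A sketch, reusing the image name and mount path from the commands above:
# print the PATH carried by the image
docker run --rm myimage sh -c 'echo $PATH'
# if the toolchain lives inside the mounted source, an absolute entry is safer, e.g. in the Dockerfile:
# ENV PATH="$PATH:/path/in/container/toolchain/bin"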

How to build a Dockerfile with a few needed ports

I want to learn Docker, so I decided to create all the files (Dockerfile, docker-compose) step by step on my own.
I need CentOS 8 with httpd and Webmin. I prepared a Dockerfile with httpd and it works very well, but when I try to add a RUN instruction that installs Webmin, I can't figure out how to open the Webmin panel. Port 10000 doesn't work, or it works but I don't know how to reach it.
Also, if I need CentOS 8 with phpMyAdmin, Webmin, Apache, etc., should I create a docker-compose setup with CentOS 8 and phpMyAdmin as separate services, or is there another way?
My Dockerfile
FROM centos:8
RUN yum update -y && yum install -y \
httpd \
httpd-tools \
wget \
perl \
perl-Net-SSLeay \
openssl \
perl-Encode-Detect
RUN wget https://prdownloads.sourceforge.net/webadmin/webmin-1.930-1.noarch.rpm \
&& rpm -U webmin-1.930-1.noarch.rpm
EXPOSE 80
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

Tensorflow serving GPU using REST API and SSL self certificate

I am trying to install TensorFlow GPU with a REST API in a CentOS 7 Docker container, but I am unable to find an exact procedure for this. Do I need to install the following dependencies?
I have installed CUDA 9.0
cuDNN 7.4
NCCL 2.x
I haven't started building TensorFlow Serving with GPU support yet. I'm still in the research stage; every article I find covers installation on Ubuntu, while I am trying to install on CentOS 7, so I don't have a Dockerfile yet.
Hopefully this helps both of us reach a solution.
Here is what I use to build a tensorflow-serving-runtime docker image.
FROM nvidia/cuda:9.0-cudnn7-runtime-centos7
ARG TF_VERSION=1.9.0
RUN yum install -y \
yum-plugin-ovl \
libgomp \
ca-certificates \
zip \
unzip \
curl \
&& \
yum clean all
WORKDIR /usr/
# Change your way to get the nccl library here
RUN curl -sSL -o /usr/nccl_2.2.13-1-cuda9.0_x86_64.tgz http://some-of-my-net-disk/tensorflow-serving/lib/nccl_2.2.13-1-cuda9.0_x86_64.tgz && \
tar -xvf nccl_2.2.13-1-cuda9.0_x86_64.tgz && \
rm -f nccl_2.2.13-1-cuda9.0_x86_64.tgz
ENV LD_LIBRARY_PATH /usr/nccl_2.2.13-1+cuda9.0_x86_64/lib/:${LD_LIBRARY_PATH}
# Change your way to get tensorflow_model_server here
WORKDIR /serving
RUN curl -sSL -o /usr/local/bin/tensorflow_model_server http://some-of-my-net-disk/tensorflow-serving/bin/tf-serving-${TF_VERSION}/tensorflow_model_server_gpu-centos &&\
chmod u+x /usr/local/bin/tensorflow_model_server
For me, this worked fine. Hope it helps.
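To actually reach the REST API, the resulting image can then be run along these lines (a sketch; the image tag, model name, and paths are placeholders, and 8501 is just the conventional REST port):
docker run --runtime=nvidia -d -p 8501:8501 \
-v /path/to/saved_model:/models/mymodel \
my-tf-serving-image \
tensorflow_model_server --rest_api_port=8501 --model_name=mymodel --model_base_path=/models/mymodel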

Docker - Execute command after mounting a volume

I have the following Dockerfile for a PHP runtime based on the official php image.
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache \
&& php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
&& php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
&& php composer-setup.php \
&& php -r "unlink('composer-setup.php');" \
&& mv composer.phar /usr/local/bin/composer
I am having trouble running composer install.
I am guessing that the Dockerfile runs before a volume is mounted because I receive a composer.json file not found error if adding:
...
&& mv composer.phar /usr/local/bin/composer \
&& composer install
to the above.
But, adding the following property to docker-compose.yml:
command: sh -c "composer install && composer require drush/drush"
seems to terminate the container after the command finishes executing.
Is there a way to:
wait for a volume to become mounted,
run composer install using the mounted composer.json file, and
have the container keep running afterwards?
I generally agree with Chris's answer for local development. I am going to offer something that combines with a recent Docker feature that may set a path for doing both local development and eventual production deployment with the same image.
Let's first start with an image that can be built in a manner usable for either local development or deployment, and that contains the code and dependencies. With the multi-stage build feature introduced in Docker 17.05, we can first install all your Composer dependencies to a folder in the build context and then copy them to the final image without needing to add Composer to the final image. This might look like:
FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts
FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
zip \
unzip \
&& docker-php-ext-install -j$(nproc) iconv mcrypt \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
&& docker-php-ext-install mysqli \
&& docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor
This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.
Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239. Here, you just do docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependency build inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac).
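A minimal sketch of such a composer service (loosely modeled on the gist above; the service name and paths are examples only):
# docker-compose.yml (fragment)
services:
  composer:
    image: composer:1.4
    working_dir: /app
    volumes:
      - .:/app
Then docker-compose run --rm composer install refreshes the vendor directory from inside a Linux container whenever dependencies change.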
The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:
docker run --rm --interactive --tty \
--volume $PWD:/app:rw,cached \
--volume $SSH_AUTH_SOCK:/ssh-auth.sock \
--env SSH_AUTH_SOCK=/ssh-auth.sock \
--volume $COMPOSER_HOME:/composer \
composer:1.4 install --ignore-platform-reqs --no-scripts
To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:
Platform specific dependencies will be built correctly for the container (Linux architecture vs Windows or Mac).
No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.
I've been down this rabbit hole for 5 hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume.
#docker-compose.yml
volumes:
- .:/srv/app/
- /srv/app/vendor/
This will map the current project directory but exclude its vendor subdirectory. Don't forget the trailing slash!
So now you can easily run composer install in the Dockerfile, and when Docker mounts your volume it will ignore the vendor directory.
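A sketch of what the Dockerfile side of that might look like, assuming Composer is made available in the image (for example copied in from the official composer image, as in the multi-stage answer above):
COPY --from=composer /usr/bin/composer /usr/bin/composer
WORKDIR /srv/app
COPY composer.json composer.lock ./
RUN composer install --no-scripts
# at run time the bind mount hides /srv/app, but the anonymous volume
# keeps the /srv/app/vendor contents baked into the image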
If this is for a general development environment, then the intention is not really ideal because it couples the application to the Docker configuration.
Just run composer install separately by some other means (there is an image available for this on Docker Hub, which allows you to just do docker run -it --rm -v $(pwd):/app composer/composer install).
But yes, it is possible; you would need the last line in the Dockerfile to be bash -c "composer install && php-fpm".
wait for a volume to become mounted
No, volumes are not able to be mounted during a docker build process. Though you can copy the source code in.
run composer install using the mounted composer.json file
No, see above response.
have the container keep running after
Yes, you would need to execute php-fpm --nodaemonize (which is a long-running process, hence it won't terminate).
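Concretely, the last line mentioned above could be written as follows (a sketch; exec keeps php-fpm as PID 1 and --nodaemonize keeps it in the foreground):
CMD ["bash", "-c", "composer install && exec php-fpm --nodaemonize"]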
To execute a command after you have mounted a volume on a Docker container, assuming that you are fetching dependencies from a public repo:
docker run --interactive -t --privileged --volume $(pwd):/xyz composer /bin/sh -c 'composer install'
For fetching dependencies from a private Git repo, you would need to copy/create SSH keys; I guess that should be out of scope of this question.
