How to connect to a docker container from the host machine?

My host machine application is trying to establish an SCTP connection on port 38412 to a Docker container which is running an SCTP server on port 38412. I am mapping and publishing the ports but getting an error. My host application times out at the following check:
if (connect(fd, (struct sockaddr *)&myaddr, sizeof(myaddr)) == -1) {
    higLog("SCTP CLIENT connect failed, port %d, Error %s",
           port, strerror(errno));
    return -1;
}
Error Connection timed out
My Dockerfile looks something like this:
FROM ubuntu:focal
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y \
dialog \
apt-utils \
iputils-ping \
inetutils-traceroute \
iproute2 \
curl \
dnsutils \
nano \
build-essential \
libcpprest-dev \
libboost-all-dev \
libssl-dev \
cmake \
libyaml-cpp-dev \
libsctp1 \
libsctp-dev \
tre-agrep \
libtre5 \
libtre-dev \
libhiredis-dev \
gcovr \
lcov \
curl \
make \
binutils \
libcurlpp-dev \
libcurlpp0 \
libjsoncpp-dev
COPY . $HOME/src/D1/Func
RUN cd src
RUN mkdir utility_library
RUN cd ..
RUN mv src/D1/Func/utility_library src/utility_library
WORKDIR "$HOME/src/D1/Func"
EXPOSE 80
CMD ["/bin/bash", "-c", "cmake .;make -j16;./build/bin/Func eth0"]
And I run this container using the following docker run command:
docker run --name con1 --net Network1 --ip 10.0.0.21 -p 127.0.0.1:38412:38412/sctp sourav/image1:1.0.0
Can I get some ideas on how to solve this?
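For anyone debugging a similar setup, a few quick checks from the host can narrow down whether the SCTP mapping is in place at all. A rough sketch (it assumes the con1 container from the command above is running; ncat's --sctp option requires the ncat that ships with nmap):
# What did Docker actually publish for con1?
docker port con1
# Probe the published address from the host
ncat --sctp -v 127.0.0.1 38412
# For comparison, probe the container IP on the user-defined bridge directly
ncat --sctp -v 10.0.0.21 38412
If the published 127.0.0.1:38412 times out while 10.0.0.21:38412 answers, the problem is in the port publishing rather than in the server itself.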

Related

Elastic user password is not working for my docker image

When I use the official Elasticsearch Docker image, the ELASTIC_PASSWORD environment variable works fine:
docker run -dti -e ELASTIC_PASSWORD=my_own_password -e discovery.type=single-node elasticsearch:7.8.0
But when I build my own customized Docker image, ELASTIC_PASSWORD does not work. Can you please help me with this?
Here is my Dockerfile:
FROM ubuntu:18.04
ENV \
REFRESHED_AT=2020-06-20
###############################################################################
# INSTALLATION
###############################################################################
### install prerequisites (cURL, gosu, tzdata, JDK for Logstash)
RUN set -x \
&& apt update -qq \
&& apt install -qqy --no-install-recommends ca-certificates curl gosu tzdata openjdk-11-jdk-headless \
&& apt clean \
&& rm -rf /var/lib/apt/lists/* \
&& gosu nobody true \
&& set +x
### set current package version
ARG ELK_VERSION=7.8.0
### install Elasticsearch
# predefine env vars, as you can't define an env var that references another one in the same block
ENV \
ES_VERSION=${ELK_VERSION} \
ES_HOME=/opt/elasticsearch
ENV \
ES_PACKAGE=elasticsearch-${ES_VERSION}-linux-x86_64.tar.gz \
ES_GID=991 \
ES_UID=991 \
ES_PATH_CONF=/etc/elasticsearch \
ES_PATH_BACKUP=/var/backups \
KIBANA_VERSION=${ELK_VERSION}
RUN DEBIAN_FRONTEND=noninteractive \
&& mkdir ${ES_HOME} \
&& curl -O https://artifacts.elastic.co/downloads/elasticsearch/${ES_PACKAGE} \
&& tar xzf ${ES_PACKAGE} -C ${ES_HOME} --strip-components=1 \
&& rm -f ${ES_PACKAGE} \
&& groupadd -r elasticsearch -g ${ES_GID} \
&& useradd -r -s /usr/sbin/nologin -M -c "Elasticsearch service user" -u ${ES_UID} -g elasticsearch elasticsearch \
&& mkdir -p /var/log/elasticsearch ${ES_PATH_CONF} ${ES_PATH_CONF}/scripts /var/lib/elasticsearch ${ES_PATH_BACKUP}
I think that in order to achieve this functionality (setting ELASTIC_PASSWORD on the command line and having it take effect) in your own container, you need to reproduce what the Elasticsearch startup script does. It's not a trivial task.
For example, here is docker-entrypoint.sh from the official Docker image:
https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/bin/docker-entrypoint.sh
You can see that the script does all the 'hidden' work that allows us to run the image with a single command.
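For illustration, the part of that entrypoint which makes ELASTIC_PASSWORD take effect essentially seeds bootstrap.password into the keystore before Elasticsearch starts. A simplified sketch of that logic, written against the ES_HOME/ES_PATH_CONF variables from the Dockerfile above (it is not a drop-in replacement for the official script):
# simplified excerpt of what docker-entrypoint.sh does with ELASTIC_PASSWORD (bash)
if [[ -n "$ELASTIC_PASSWORD" ]]; then
    # create the keystore if it does not exist yet
    [[ -f "${ES_PATH_CONF}/elasticsearch.keystore" ]] || "${ES_HOME}/bin/elasticsearch-keystore" create
    # store the password as bootstrap.password, which Elasticsearch reads on first start
    if ! "${ES_HOME}/bin/elasticsearch-keystore" list | grep -q '^bootstrap.password$'; then
        echo "$ELASTIC_PASSWORD" | "${ES_HOME}/bin/elasticsearch-keystore" add -x 'bootstrap.password'
    fi
fi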

How to get an X11 socket connection between a docker container and the desktop

I wish to run PyCharm Community within Docker on my desktop. I have created a Dockerfile (below) and have seen it work fine on a Mac.
FROM debian:buster-slim
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
apt-utils \
dirmngr \
gnupg \
libasound2 \
libdbus-glib-1-2 \
libgtk-3-0 \
libxrender1 \
libx11-xcb-dev \
libx11-xcb1 \
libxt6 \
xz-utils \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
&& chown -R user:user $HOME
ENV LANG C.UTF-8
RUN apt-get update && \
apt-get install -y python-pip \
vim \
wget \
x11-utils \
xfonts-base \
xpra
# install PyCharm
RUN cd / && \
wget -q http://download.jetbrains.com/python/pycharm-community-2019.1.1.tar.gz && \
tar xvfz pycharm-community-2019.1.1.tar.gz && \
rm pycharm-community-2019.1.1.tar.gz
USER user
CMD [ "/pycharm-community-2019.1.1/bin/pycharm.sh"]
However, when I try to run this on Ubuntu I get X11 errors from the PyCharm code:
Start Failed: Failed to initialize graphics environment
java.awt.AWTError: Can't connect to X11 window server using ':1' as the value of the DISPLAY variable.
    at java.desktop/sun.awt.X11GraphicsEnvironment.initDisplay(Native Method)
The command to invoke the container is:
docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=${DISPLAY} <<image_id>>
I have tried many variations on the DISPLAY variable (e.g. unix$DISPLAY), but none have worked.
Update:
I ran:
docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=${DISPLAY} --entrypoint /bin/sh <<image_id>>
to get access into the container, and then ran:
$ ls -l /tmp
total 0
I am confused - I thought the X11 socket residing on my host machine would have been bound to the same location in the container. Is this a red herring?
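One way to sanity-check both sides of that bind mount is to list the socket directory on the host and then again inside the container (a sketch; <<image_id>> is the same placeholder as in the commands above):
# On the host: DISPLAY=:1 should correspond to a socket named X1 in this directory
echo "$DISPLAY"
ls -l /tmp/.X11-unix
# Inside the container: the bind mount should show the same sockets
docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
    --entrypoint /bin/sh <<image_id>> -c 'ls -l /tmp/.X11-unix'
If the second listing is empty while the first is not, the mount is not pointing at the directory the X server is actually using.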

Riofs - fuse device not found

I am trying to run riofs inside a Docker container; however, when I try to run riofs I get the following error:
fuse: device not found, try 'modprobe fuse' first
ERROR! Failed to mount FUSE partition !
ERROR! Failed to create FUSE fs ! Mount point: /path/to/dir
Here is what my Dockerfile looks like:
FROM ubuntu:16.04
RUN apt-get update -qq
RUN apt-get install -y \
build-essential \
gcc \
make \
automake \
autoconf \
libtool \
pkg-config \
intltool \
libglib2.0-dev \
libfuse-dev \
libxml2-dev \
libevent-dev \
libssl-dev \
&& rm -rf /var/lib/apt/lists/*
RUN curl -L https://github.com/skoobe/riofs/archive/v${VERSION}.tar.gz | tar zxv -C /usr/src
RUN cd /usr/src/riofs-${VERSION} && ./autogen.sh && ./configure --prefix=/usr && make && make install
WORKDIR /opt/riofs/bin
CMD ["bash"]
I needed to add the runtime privilege SYS_ADMIN because FUSE needs permission to mount/unmount.
docker run -it --cap-add SYS_ADMIN --device /dev/fuse [IMAGE] bash
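If the mount still fails with those flags, it can help to confirm from inside the container that the device and the kernel module are actually visible (a sketch; [IMAGE] is the placeholder from the command above):
docker run -it --rm --cap-add SYS_ADMIN --device /dev/fuse [IMAGE] \
    bash -c 'ls -l /dev/fuse && grep -w fuse /proc/filesystems'
# Containers share the host kernel, so if the second check finds nothing,
# load the module on the host: sudo modprobe fuse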

How to expose port from host to container in Docker?

The Docker documentation describes EXPOSE as follows:
The EXPOSE instruction informs Docker that the container will listen on the specified network ports at runtime.
Can I do the opposite? Can I expose a port from my Ubuntu host to the Docker container?
Background: I'm trying to set up a simple php7-fpm Docker image, and I would like to expose port 3306 (the MySQL service) to the Docker container.
My Dockerfile:
FROM debian:jessie
# persistent / runtime deps
RUN apt-get update && apt-get install -y ca-certificates curl libpcre3 librecode0 libsqlite3-0 libxml2 --no-install-recommends && rm -r /var/lib/apt/lists/*
# phpize deps
RUN apt-get update && apt-get install -y autoconf file g++ gcc libc-dev make pkg-config re2c --no-install-recommends && rm -r /var/lib/apt/lists/*
ENV PHP_INI_DIR /usr/local/etc/php
RUN mkdir -p $PHP_INI_DIR/conf.d
##<autogenerated>##
ENV PHP_EXTRA_CONFIGURE_ARGS --enable-fpm --with-fpm-user=www-data --with-fpm-group=www-data
##</autogenerated>##
ENV PHP_VERSION 7.0.0RC2
# --enable-mysqlnd is included below because it's harder to compile after the fact than extensions are (since it's a plugin for several extensions, not an extension in itself)
RUN buildDeps=" \
$PHP_EXTRA_BUILD_DEPS \
libcurl4-openssl-dev \
libpcre3-dev \
libreadline6-dev \
librecode-dev \
libsqlite3-dev \
libssl-dev \
libxml2-dev \
xz-utils \
" \
&& set -x \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends && rm -rf /var/lib/apt/lists/* \
&& curl -SL "https://downloads.php.net/~ab/php-$PHP_VERSION.tar.xz" -o php.tar.xz \
&& mkdir -p /usr/src/php \
&& tar -xof php.tar.xz -C /usr/src/php --strip-components=1 \
&& rm php.tar.xz* \
&& cd /usr/src/php \
&& ./configure \
--with-config-file-path="$PHP_INI_DIR" \
--with-config-file-scan-dir="$PHP_INI_DIR/conf.d" \
$PHP_EXTRA_CONFIGURE_ARGS \
--disable-cgi \
--enable-mysqlnd \
--with-pdo-mysql \
--enable-mbstring \
--with-curl \
--with-openssl \
--with-pcre \
--with-readline \
--with-recode \
--with-zlib \
&& make -j"$(nproc)" \
&& make install \
&& { find /usr/local/bin /usr/local/sbin -type f -executable -exec strip --strip-all '{}' + || true; } \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false -o APT::AutoRemove::SuggestsImportant=false $buildDeps \
&& make clean
COPY docker-php-ext-* /usr/local/bin/
##<autogenerated>##
WORKDIR /var/www/html
COPY php-fpm.conf /usr/local/etc/
EXPOSE 9000
CMD ["php-fpm"]
##</autogenerated>##
This is the command I use to run my container:
docker run --name=php7-fpm -v /var/www/html/:/var/www/html/ -p 9002:9000 marty/php7
My PHP app database configuration:
database:
    main:
        host: 127.0.0.1
        dbname: edu
        user: root
        password: myPassword
        port: 3306
You can run the container with --net=host; then it will have access to the host's ports directly. See https://docs.docker.com/engine/reference/run/#network-settings
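Applied to the run command from the question, that would look roughly like this (a sketch; note that -p mappings are ignored when host networking is used, so php-fpm listens on the host's port 9000 directly, and 127.0.0.1:3306 in the app config reaches the MySQL server on the host):
# Share the host's network namespace so the container sees host services on localhost
docker run --name=php7-fpm --net=host -v /var/www/html/:/var/www/html/ marty/php7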

Creating a docker image with nginx compile options for optional HTTP modules

I am trying to build an nginx image that installs nginx with the module ngx_http_auth_request_module.
This is my current Dockerfile:
#ubuntu OS
FROM ubuntu:14.04
#update apt-get non interactive and install nginx
RUN \
sudo apt-get -q -y update; \
sudo apt-get -q -y install nginx
#copy all mapping configurations for all environments
COPY ./resources/routing-configs/* /routing-configs/
#expose port for nginx
EXPOSE 80
#run task to copy only relevant mapping configuration to nginx and reload nginx service
COPY ./resources/start.sh /opt/mysite/router/start.sh
RUN sudo chmod 766 /opt/mysite/router/start.sh
CMD sudo -E sh /opt/mysite/router/start.sh
Typically I would have compiled the nginx sources locally like this:
sudo ./configure --with-http_auth_request_module
and then installed nginx:
sudo make install
But how can I do this with a Dockerfile? Please help.
I'm somewhat of a noob with Docker, but I had to solve this same problem. I used this Dockerfile as a starting point.
FROM centos:centos7
WORKDIR /tmp
# Install prerequisites for Nginx compile
RUN yum install -y \
wget \
tar \
openssl-devel \
gcc \
gcc-c++ \
make \
zlib-devel \
pcre-devel \
gd-devel \
krb5-devel \
openldap-devel \
git
# Download Nginx and Nginx modules source
RUN wget http://nginx.org/download/nginx-1.9.3.tar.gz -O nginx.tar.gz && \
mkdir /tmp/nginx && \
tar -xzvf nginx.tar.gz -C /tmp/nginx --strip-components=1 &&\
git clone https://github.com/kvspb/nginx-auth-ldap.git /tmp/nginx/nginx-auth-ldap
# Build Nginx
WORKDIR /tmp/nginx
RUN ./configure \
--user=nginx \
--with-debug \
--group=nginx \
--prefix=/usr/share/nginx \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/run/nginx.pid \
--lock-path=/run/lock/subsys/nginx \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--with-http_ssl_module \
--with-http_spdy_module \
--with-pcre \
--with-http_image_filter_module \
--with-file-aio \
--with-ipv6 \
--with-http_dav_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--add-module=nginx-auth-ldap && \
make && \
make install
WORKDIR /tmp
# Add nginx user
RUN adduser -c "Nginx user" nginx && \
setcap cap_net_bind_service=ep /usr/sbin/nginx
RUN touch /run/nginx.pid
RUN chown nginx:nginx /etc/nginx /etc/nginx/nginx.conf /var/log/nginx /usr/share/nginx /run/nginx.pid
# Cleanup after Nginx build
RUN yum remove -y \
wget \
tar \
gcc \
gcc-c++ \
make \
git && \
yum autoremove -y && \
rm -rf /tmp/*
# PORTS
EXPOSE 80
EXPOSE 443
USER nginx
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
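Note that the Dockerfile above compiles in the third-party nginx-auth-ldap module, whereas the module asked about in the question, ngx_http_auth_request_module, ships with the nginx sources and only needs to be switched on at configure time. A minimal sketch of that step (run inside the unpacked source tree, e.g. /tmp/nginx above; in practice you would append the flag to the existing ./configure line rather than configuring twice):
./configure --with-http_auth_request_module
make && make install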
