Elastic Beanstalk multi-container backend not working - docker

I have a Play application exposing a backend on port 9000, and a React.js front end wrapped within the same application, which exposes its UI on port 3000. The whole application is deployed using this Dockerfile:
FROM hseeberger/scala-sbt:11.0.10_1.4.7_2.13.4
RUN apt-get --allow-releaseinfo-change update
RUN apt-get install -y unzip xvfb libxi6 libgconf-2-4 gnupg2
RUN apt-get update
#RUN apt-get clean
# Installing tools to build node packages
RUN apt-get update && apt-get install -y build-essential
#Installing docker-ce dependencies
RUN apt-get install -y ca-certificates gnupg-agent software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
tee -a /etc/apt/sources.list.d/docker.list
# Installing nodejs
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
# Install g8, which unfortunately requires Java 8 to install
# https://github.com/foundweekends/giter8/issues/449
RUN apt-get install -y apt-transport-https ca-certificates wget dirmngr gnupg software-properties-common
RUN wget -qO - https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public | apt-key add -
RUN add-apt-repository -y https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
RUN apt-get update && apt install -y adoptopenjdk-8-hotspot
RUN PATH=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin:$PATH && curl https://raw.githubusercontent.com/foundweekends/conscript/master/setup.sh | sh && ~/.conscript/bin/cs foundweekends/giter8
RUN export PATH=/root/.conscript/bin:$PATH && g8 --version
RUN apt remove -y adoptopenjdk-8-hotspot
RUN add-apt-repository -r https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/
# Install docker
RUN apt-get update && apt-get install -y docker-ce docker-ce-cli
RUN docker --version || true
# Install Node.js and npm
RUN apt-get update && apt-get install -y nodejs
RUN node --version || true
RUN npm --version || true
# Install protobuf & protoc-gen-grpc-web plugin
RUN apt-get install -y protobuf-compiler
RUN protoc --version || true
# jq for parsing config from secrets/cloudflow
RUN apt-get -y install jq
# Install kpt
RUN curl https://storage.googleapis.com/kpt-dev/latest/linux_amd64/kpt --output ./kpt
RUN chmod +x ./kpt && mv ./kpt /usr/bin
RUN kpt version
# Creating Home Directory in container
RUN mkdir -p /usr/src/app
# Setting Home Directory
WORKDIR /usr/src/app
# Copying src code to Container
COPY . /usr/src/app
# Compiling Scala Code
# RUN sbt compile
# Exposing Port
EXPOSE 3000
EXPOSE 9000
# Running Scala Application
CMD ["sbt", "clean", "compile", "run"]
I also have nginx with the following configuration:
upstream frontend {
    server play:3000;
}
upstream backend {
    server play:9000;
}
server {
    listen 80;
    location / {
        proxy_pass http://frontend;
    }
    location /api {
        client_max_body_size 200M;
        client_body_buffer_size 200M;
        proxy_pass http://backend;
    }
}
It works fine locally when I run it with the following docker-compose:
version: '3'
services:
  play:
    build:
      dockerfile: Dockerfile
      context: .
    volumes:
      - .:/app
  nginx:
    depends_on:
      - play
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '80:80'
Both containers run properly and the whole application works as expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eb5c29b5f361 danamex-app_nginx "/docker-entrypoint.…" 55 minutes ago Up 8 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp danamex-app_nginx_1
368ddb6f8fec danamex-app_play "sbt clean compile r…" 55 minutes ago Up 8 seconds 3000/tcp, 9000/tcp danamex-app_play_1
However, when I deploy the same application to Elastic Beanstalk using the same docker-compose file, the Play container doesn't expose any ports:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2186d4f1745 current_nginx "/docker-entrypoint.…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, :::80->80/tcp current_nginx_1
96dccefba9a7 jeremycod/danamex "bin/danamex -Dpidfi…" About a minute ago Up About a minute current_play_1
I don't see anything useful in the EB logs; there is no indication that anything is wrong with the application itself:
play_1 | Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
play_1 | [info] play.api.Play - Application started (Prod) (no global state)
play_1 | [info] p.c.s.AkkaHttpServer - Listening for HTTP on /0.0.0.0:9000
nginx_1 | 2021/09/11 22:36:01 [error] 30#30: *43 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.2.40, server: , request: "GET / HTTP/1.1", upstream: "http://172.19.0.2:3000/", host: "xxxxxxxxxx.us-west-2.elasticbeanstalk.com"
nginx_1 | xxx.xx.2.40 - - [11/Sep/2021:22:36:01 +0000] "GET / HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" "24.85.210.60"
nginx_1 | 2021/09/11 22:36:01 [error] 30#30: *43 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.xx.2.40, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.19.0.2:3000/favicon.ico", host: "xxxxxxxxx.us-west-2.elasticbeanstalk.com", referrer: "http://xxxxxxxxxx.us-west-2.elasticbeanstalk.com/"
I've tried changing the instance type to t2.large to rule out running out of memory.
Any idea what could be the problem?
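Not stated in the original post, but worth noting when comparing the two docker ps listings: on Elastic Beanstalk the play container is running a different image (jeremycod/danamex) and a different command (bin/danamex ...) than the locally built one, and its PORTS column is empty. One thing to try (a sketch, assuming the compose file above; the `expose` entries are my addition) is to declare the container-internal ports explicitly so they appear on the EB-run container:

```yaml
version: '3'
services:
  play:
    build:
      dockerfile: Dockerfile
      context: .
    # Declare the container-internal ports explicitly; the service name
    # "play" used in the nginx upstreams resolves via compose networking.
    expose:
      - "3000"
      - "9000"
```

This only documents/exposes the ports between containers; it does not publish them to the host, which matches the local setup where only nginx publishes port 80.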

Related

Docker health checks returning HTTP 301

I have a Docker container running my Ruby on Rails application, and the health check keeps failing because it returns an HTTP 301 instead of an HTTP 200. The app has been successfully deployed to AWS ECS as a service using Docker, but when I hit the health check endpoint it returns an HTTP 301:
> curl -I http://${IPADDR}:3000/healthcheck
HTTP/1.1 301 Moved Permanently
Content-Type: text/html
Location: https://172.17.0.2:3000/healthcheck
I have a load balancer set up with two listeners, one of them being HTTPS.
The HTTP 301 issue seems to be caused by HTTPS. What can I do to fix it so it returns an HTTP 200? When I manually visit the HTTPS healthcheck endpoint, it returns an HTTP 200.
Here is my Docker config in case it's helpful:
docker-compose.yml
version: '3'
services:
  web:
    build:
      args:
        DEPLOY_ENV_ARG: ${DEPLOY_ENV:-development}
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
Dockerfile
FROM ruby:2.7.2
SHELL ["/bin/bash", "-c"]
# development | test | production
ARG DEPLOY_ENV_ARG
ENV RAILS_ENV=${DEPLOY_ENV_ARG}
ENV NODE_ENV=${DEPLOY_ENV_ARG}
ENV APP_HOME=/myapp
LABEL app=myapp
LABEL environment=${DEPLOY_ENV_ARG}
RUN apt-get update
WORKDIR /
# Install dependencies
RUN apt-get install -y git nodejs
# For Redis
RUN apt-get install -y build-essential tcl
# Install NVM & Yarn
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get install -y nodejs
# Install yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
&& apt-get update -qq \
&& apt-get install -y yarn
RUN mkdir -p ${APP_HOME} ${APP_HOME}/log
WORKDIR ${APP_HOME}
COPY . ${APP_HOME}
RUN gem install bundler && bundle install -j4 --with ${DEPLOY_ENV_ARG}
RUN yarn install
RUN bundle exec rails assets:precompile
EXPOSE 3000
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "-p", "3000"]
Your health check is configured to hit the HTTP endpoint, but since you have forced SSL in your Rails app, the app redirects the request to the HTTPS endpoint. That redirect is what makes the check fail.
Since you are performing SSL offloading at the load balancer, your best option is to let the load balancer handle the HTTPS redirection and point your health check at the HTTP endpoint. To do that, disable force SSL in your Rails app.
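If you would rather keep force-SSL for real traffic, Rails' ActionDispatch::SSL middleware also lets you exempt just the health-check path instead of disabling the redirect entirely. A sketch (path name taken from the question; assumes Rails 5+ where `config.ssl_options` supports a redirect `exclude` proc):

```ruby
# config/environments/production.rb
Rails.application.configure do
  config.force_ssl = true
  # Skip the HTTPS redirect only for the load balancer's health check
  config.ssl_options = {
    redirect: { exclude: ->(request) { request.path == "/healthcheck" } }
  }
end
```

With this in place, GET /healthcheck over plain HTTP returns 200 while all other HTTP requests still redirect to HTTPS.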

How to install elasticsearch in a docker container?

I am trying to install elasticsearch in an ubuntu docker container. This is my Dockerfile:
FROM ubuntu:21.04 as elastic_install
RUN apt-get update
RUN apt-get install -y wget gnupg apt-transport-https openjdk-8-jdk
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-7.x.list
RUN apt-get update && apt-get install -y elasticsearch
When I now try to run elasticsearch, it is killed after a few seconds with the following message:
root@18c3d6649c1b:/# /usr/share/elasticsearch/bin/elasticsearch
Killed
root@18c3d6649c1b:/# /usr/share/elasticsearch/bin/elasticsearch -d
/usr/share/elasticsearch/bin/elasticsearch: line 95: 369 Killed exec "$JAVA" "$XSHARE" $ES_JAVA_OPTS -Des.path.home="$ES_HOME" -Des.path.conf="$ES_PATH_CONF" -Des.distribution.flavor="$ES_DISTRIBUTION_FLAVOR" -Des.distribution.type="$ES_DISTRIBUTION_TYPE" -Des.bundled_jdk="$ES_BUNDLED_JDK" -cp "$ES_CLASSPATH" org.elasticsearch.bootstrap.Elasticsearch "$@" <<< "$KEYSTORE_PASSWORD"
root@18c3d6649c1b:/#
How do I install/run elasticsearch correctly? Am I missing something crucial?
When running elasticsearch directly, the environment variables ES_PATH_CONF and ES_JAVA_OPTS must be defined:
elasticuser@c5f357e42e51:/# ES_PATH_CONF=/etc/elasticsearch ES_JAVA_OPTS="-Xms8g -Xmx8g" /usr/share/elasticsearch/bin/elasticsearch
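A side note, not from the original answer: a bare `Killed` with no stack trace usually means the kernel OOM-killed the process, so if the container has limited memory it can help to shrink the heap rather than grow it. A sketch, assuming Elasticsearch 7.7+ which reads drop-in files from jvm.options.d (the file name and 512m values are arbitrary examples):

```
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms512m
-Xmx512m
```

Pick a heap that fits inside the container's memory limit with room left for off-heap usage.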

Speed up Docker image builds with docker-compose

I am a newbie with Docker, and I need an Ubuntu-based installation of Logstash.
I tried the official Logstash image without success; I couldn't get it to run, so I decided to build my own installation based on my needs.
It works well but takes a long time to build.
I wonder how I can speed up the build.
This is my Dockerfile
FROM ubuntu:18.04
# RUN adduser --disabled-password --gecos "" ubuntu
# RUN usermod -aG sudo ubuntu
# RUN echo "ubuntu ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN apt-get update && \
apt-get autoclean && \
apt-get autoremove
RUN echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
RUN apt-get install curl ca-certificates apt-utils wget apt-transport-https default-jre gnupg apt-transport-https software-properties-common -y
# RUN update-alternatives --config java
ENV LANG C.UTF-8
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
ENV PATH $JAVA_HOME/bin:$PATH
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN apt update
RUN apt install ruby-full -y
RUN ruby --version
RUN apt install jruby -y
#install jruby (last version)
RUN jruby --version
RUN apt install -y logstash
#install plugins and bundles.
RUN cd /usr/share/logstash && gem install bundler
COPY logstash-pcap-test.conf /usr/share/logstash/
RUN mkdir /home/logstash/ && mkdir /home/logstash/traffic
COPY /traffic-example/* /home/logstash/traffic/
WORKDIR /usr/share/logstash
CMD ["/bin/bash","-c","bin/logstash -f logstash-pcap-test.conf --config.reload.automatic"]
And this is my docker-compose
version: "3"
services:
  logstash_test:
    build:
      context: .
      dockerfile: container/Dockerfile
    image: logstash_test:latest
    container_name: logstash_test
    hostname: logstash_test
    ports:
      - 9600:9600
      - 8089:8089
    networks:
      - elknetwork
networks:
  elknetwork:
    driver: bridge
Any thoughts?
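On the build-speed question: each RUN line is a separate image layer, and changing anything above a layer invalidates every layer after it. The usual fix is to merge the package installs into one cacheable layer and copy frequently changing files last. A sketch using the same packages as the Dockerfile above (not a drop-in replacement; the Logstash repo/key setup from the original still applies):

```dockerfile
FROM ubuntu:18.04
# One cacheable layer for all apt packages (package list taken from the question)
RUN apt-get update && \
    apt-get install -y curl ca-certificates apt-utils wget apt-transport-https \
        default-jre gnupg software-properties-common ruby-full jruby && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
# Config files change most often, so they go last to keep the layers above cached
COPY logstash-pcap-test.conf /usr/share/logstash/
```

With this ordering, editing logstash-pcap-test.conf only rebuilds the final COPY layer instead of re-running every apt install.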

Port missing in Docker container

I wanted to install Apache, PHP 7, Postgres 12, Node, and Java in an ubuntu:18.04 base image Dockerfile, and to display Postgres data in a PHP file. After building and running the container, when I checked the process status, the port was missing. I started using Docker recently, so I am new to this. Here is my Dockerfile:
FROM ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
# # Install openjdk-8-jdk
RUN apt-get update && \
apt-get install -y openjdk-8-jdk
RUN apt-get -y install nodejs
RUN apt-get update && apt-get -qq -y install curl
RUN apt-get -y install apache2
RUN apt-get -y install php7.2
RUN apt-get -y install libapache2-mod-php7.2
RUN rm -f /var/www/html/index.html
COPY . /var/www/html
RUN apt-get update && apt-get install -y gnupg2 && apt-get install -y wget
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y postgresql-12 postgresql-client-12
USER postgres
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
createdb -O docker docker
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/12/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/12/main/postgresql.conf
EXPOSE 80 5432
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
error
PS G:\Docker\test-docker-ubuntu-php\website> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fba1b8327fd test-custom-all "/usr/sbin/apache2ct…" 2 minutes ago Exited (1) 2 minutes ago test-custom
PS G:\Docker\test-docker-ubuntu-php\website> docker logs 1fba1b8327fd
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Action '-D FOREGROUND' failed.
The Apache error log may have more information.
Can you help me with this?
Thanks
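Worth noting about the error above (this is my reading, not from the original thread): the `(13)Permission denied: ... could not bind to address 0.0.0.0:80` message is consistent with the `USER postgres` instruction still being in effect when the CMD runs. Non-root users cannot bind ports below 1024, so apache2 fails to open port 80 and the container exits. A sketch of the likely fix, switching back to root after the Postgres setup steps:

```dockerfile
# Postgres bootstrap runs as the postgres user (as in the Dockerfile above)
USER postgres
RUN /etc/init.d/postgresql start && \
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && \
    createdb -O docker docker
# Switch back to root: apache2 must start as root to bind port 80
USER root
EXPOSE 80 5432
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
```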

Connecting to couchdb inside docker

I'm trying to set up a Docker image running CouchDB that loads some data during the build phase. That all seems to work, but I can't connect to it once it's running...
curl localhost:5984
curl: (52) Empty reply from server
My Dockerfile looks like:
FROM ubuntu:16.04
COPY . .
# Load deps
RUN apt-get update && apt-get install -y apt-utils apt-transport-https curl
# Install couchDB
RUN echo "deb https://apache.bintray.com/couchdb-deb xenial main" \
| tee -a /etc/apt/sources.list
RUN curl -L https://couchdb.apache.org/repo/bintray-pubkey.asc \
| apt-key add -
RUN apt-get update && apt-get install -y couchdb
# Load data
RUN ./myLoadScript.sh
# Expose couchDB port
EXPOSE 5984
# Start couchDB
CMD ["/opt/couchdb/bin/couchdb"]
and I build and run it with:
docker build --tag=database .
docker run -p 5984:5984 database
Any thoughts?
Thanks in advance,
Dan
By default, CouchDB only listens on localhost, which inside the container means the container's own loopback interface, not the host's.
You can verify this by exec-ing into the CouchDB container and running curl localhost:5984 there; it should work.
If you want to allow other IPs to connect to your CouchDB server, use the bind_address option (see the CouchDB configuration docs).
To allow all IPs, set bind_address = 0.0.0.0 in local.ini.
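The bind_address change mentioned above goes in CouchDB's local.ini; a sketch, assuming the package installs under /opt/couchdb (the [chttpd] section applies to CouchDB 2.x):

```ini
; /opt/couchdb/etc/local.ini
[chttpd]
port = 5984
bind_address = 0.0.0.0
```

After editing, restart CouchDB so the setting takes effect; docker run -p 5984:5984 will then reach it from the host.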
