I have a Docker container running my Ruby on Rails application, and the health check keeps failing because it returns an HTTP 301 instead of an HTTP 200. The app has been successfully deployed to AWS ECS as a service using Docker, but when I check the health check endpoint, it returns an HTTP 301:
> curl -I http://${IPADDR}:3000/healthcheck
HTTP/1.1 301 Moved Permanently
Content-Type: text/html
Location: https://172.17.0.2:3000/healthcheck
I have a load balancer set up with two listeners, one of them being HTTPS.
The HTTP 301 issue seems to be caused by HTTPS. What can I do to fix it so it returns an HTTP 200? When I manually visit the HTTPS healthcheck endpoint, it returns an HTTP 200.
Here is my Docker config in case it's helpful:
docker-compose.yml
version: '3'
services:
  web:
    build:
      args:
        DEPLOY_ENV_ARG: ${DEPLOY_ENV:-development}
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
Dockerfile
FROM ruby:2.7.2
SHELL ["/bin/bash", "-c"]
# development | test | production
ARG DEPLOY_ENV_ARG
ENV RAILS_ENV=${DEPLOY_ENV_ARG}
ENV NODE_ENV=${DEPLOY_ENV_ARG}
ENV APP_HOME=/myapp
LABEL app=myapp
LABEL environment=${DEPLOY_ENV_ARG}
RUN apt-get update
WORKDIR /
# Install dependencies
RUN apt-get install -y git nodejs
# For Redis
RUN apt-get install -y build-essential tcl
# Install Node.js 14
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get install -y nodejs
# Install yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
&& apt-get update -qq \
&& apt-get install -y yarn
RUN mkdir -p ${APP_HOME} ${APP_HOME}/log
WORKDIR ${APP_HOME}
COPY . ${APP_HOME}
RUN gem install bundler && bundle install -j4 --with ${DEPLOY_ENV_ARG}
RUN yarn install
RUN bundle exec rails assets:precompile
EXPOSE 3000
# Start the Rails server
CMD bundle exec rails s -b 0.0.0.0 -p 3000
Your health check is configured to hit the HTTP endpoint, but since you have forced SSL in your Rails app, the request is redirected to the HTTPS endpoint. That 301 redirect is what makes the health check fail.
Since you are performing SSL offloading at the load balancer, your best option is to let the load balancer perform the HTTPS redirection and point your health check at the HTTP endpoint. So you'll need to disable force SSL in your Rails app.
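If you would rather keep force_ssl on for normal traffic, Rails (5+) also lets you exempt specific paths from the SSL redirect. A minimal sketch, assuming the check lives at /healthcheck, in config/environments/production.rb:

# Keep HTTPS enforcement, but exempt the load balancer's health check path
config.force_ssl = true
config.ssl_options = {
  redirect: { exclude: ->(request) { request.path == "/healthcheck" } }
}

Otherwise, set config.force_ssl = false and add an HTTP-to-HTTPS redirect action on the ALB's HTTP listener, so TLS enforcement lives entirely at the load balancer.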
So, my development team was trying to migrate from GCE to Cloud Run, and we succeeded in deploying the Cloud Run service and the CI/CD pipeline using GitHub Actions. But we ran into an issue: the Cloud Run service cannot handle more than 100 concurrent requests.
Our base framework for the app is PHP/CodeIgniter, the web server is Apache, and the database is SQL Server; all of it is set up in our Dockerfiles. This is the base image Dockerfile:
FROM php:7.4.22-apache
USER root
RUN apt-get update && apt-get upgrade -y
RUN apt-get update && apt-get install -y gnupg2
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install libpng-dev -y
RUN docker-php-ext-install curl
RUN docker-php-ext-install gd
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN apt-get install -y wget
RUN wget http://ftp.de.debian.org/debian/pool/main/g/glibc/multiarch-support_2.28-10+deb10u1_amd64.deb
RUN dpkg -i multiarch-support_2.28-10+deb10u1_amd64.deb
RUN apt-get install -y libodbc1
RUN apt-get install -y unixodbc-dev
RUN pecl install sqlsrv
RUN pecl install pdo_sqlsrv
RUN echo "extension=pdo_sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.:\s||"`/30-pdo_sqlsrv.ini
RUN echo "extension=sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.:\s||"`/30-sqlsrv.ini
RUN rm multiarch-support_2.28-10+deb10u1_amd64.deb
# RUN ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY apache2.conf /etc/apache2/apache2.conf
COPY openssl.cnf /etc/ssl/openssl.cnf
COPY php.ini /etc/php/7.4/apache2/php.ini
RUN a2enmod rewrite
RUN /etc/init.d/apache2 restart
and this is the application Dockerfile that builds on that base image:
FROM jamesjones/test-base:latest
USER root
COPY . /var/www/html
RUN cd /var/www/html
RUN chown -R www-data:www-data /var/www/html
COPY v1/application/config/config.prod.php /var/www/html/v1/application/config/config.php
COPY v1/application/config/database.prod.php /var/www/html/v1/application/config/database.php
COPY v1/application/config/routes.prod.php /var/www/html/v1/application/config/routes.php
COPY v2/application/config/config.prod.php /var/www/html/v2/application/config/config.php
COPY v2/application/config/database.prod.php /var/www/html/v2/application/config/database.php
COPY v2/application/config/routes.prod.php /var/www/html/v2/application/config/routes.php
COPY .htaccess.prod /var/www/html/.htaccess
VOLUME /var/www/html
I have tried the steps in this article, but the problem still persists:
https://cloud.google.com/blog/topics/developers-practitioners/3-ways-optimize-cloud-run-response-times
(Our Cloud Run service specification was attached as a screenshot, not reproduced here.)
I have also tried increasing the minimum number of instances in autoscaling to 10, but there seems to be no difference.
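(For reference, this is roughly how such a change is applied; the exact command is my reconstruction, with the service name and region as placeholders, not something from the original post:)

# Raise per-instance concurrency and keep 10 instances warm
gcloud run services update my-service \
  --region=asia-southeast1 \
  --concurrency=250 \
  --min-instances=10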
Are there any alternatives for fixing this issue?
It turns out that the problem was the VPC connector. Our database is reached over TCP/IP, and we have to whitelist a public IP to access it securely, which is why we were using the VPC connector. We switched to binding our IP with MikroTik instead of using the VPC connector, and the Cloud Run service can finally handle 10,000 requests.
I have a program that builds servers automatically whenever we want stakeholders to test a new feature.
Currently I have the following setup:
Container 1 - all (contains nodejs, php and other dependencies)
Container 2 - db (contains the mysql database)
I'm aware that container 1 should be split but this will involve more unnecessary complexity to this stage of development.
Whenever a new feature is completed and ready to be deployed to a stage server we run: yarn run create:server --branchName=new-feature. This will create all of the configuration necessary to bring up our newly created server.
My problem is that whenever I run the command above I need to create a database in db container from all container:
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS `xxxx`"
The script main.ts is running in the context of all container, so it is necessary for all to communicate with db.
import { execSync } from "child_process";

// isLocalEnviroment() is defined elsewhere in main.ts
export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;

  console.log(`[INFO] Creating database with name \`${subdomain}\``);

  // triple backslash is necessary to avoid `command substitution` in some shells
  if (isLocalEnviroment()) {
    execSync(`docker run -it stage-manager-db mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  } else {
    execSync(`mysql -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`)
  }

  console.log(`[INFO] Database \`${subdomain}\` created successfully`);
}
On local environment we would like to use docker, while in production everything will sit in the same machine (db, frontendapp and api).
When trying to run the following command docker run -it stage-manager-db mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS master" from all I get
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
I have tried restarting the service with:
service docker restart
which gives
[ ok ] Starting Docker: docker.
but trying to communicate with db from all keeps getting the same error. Upon trying to service docker stop I get:
[....] Stopping Docker: dockerstart-stop-daemon: warning: failed to kill 825: No such process
No process in pidfile '/var/run/docker-ssd.pid' found running; none killed.
failed!
Since then I have tried the following links to fix this issue:
https://github.com/docker/for-linux/issues/52#issuecomment-333563492
https://askubuntu.com/questions/1146634/how-to-remove-docker-from-windows-subsystem
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
Cant uninstall Docker from Ubuntu on WSL
How can I communicate from the all container to the db container?
Dockerfile
FROM php:7.4-fpm
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpng-dev \
libjpeg62-turbo-dev \
libfreetype6-dev \
locales \
zip \
jpegoptim optipng pngquant gifsicle \
vim \
unzip \
git \
curl \
libzip-dev \
libfontconfig1 \
libxrender1 \
libpng-dev \
make \
nginx \
apt-transport-https \
gnupg2 \
wget \
procps \
docker.io
# Install nodejs
RUN apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt -y install nodejs
# Install extensions
RUN docker-php-ext-install pdo_mysql exif zip pcntl gd
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
RUN docker-php-ext-install -j$(nproc) gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install -y yarn
# Install dependencies for this project
RUN yarn global add ts-node typescript
RUN useradd -m forge
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=forge:forge . /var/forge
# Copy ssh keys
COPY ./config/ssh /home/forge/.ssh/
# Give right permissions to `ssh` keys
RUN chmod 600 /home/forge/.ssh/config
RUN chmod 600 /home/forge/.ssh/back_end_deploy_key
RUN chmod 600 /home/forge/.ssh/frontend_deploy_key
RUN chmod 644 /home/forge/.ssh/back_end_deploy_key.pub
RUN chmod 644 /home/forge/.ssh/frontend_deploy_key.pub
RUN chown forge:forge /home/forge/.ssh/*
# Up Docker
RUN service docker start
RUN usermod -aG docker forge
# Create folder for stage servers
RUN mkdir -p /var/www/stage-servers
# Give correct permissions to `stage-servers` folder
RUN chown forge:www-data /var/www/stage-servers
RUN chmod g+s /var/www/stage-servers
RUN chmod o-rwx /var/www/stage-servers
# Change current user to forge
USER forge
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
docker-compose.yml
version: '3.7'
services:
  all:
    working_dir: /var/www/stage-manager
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./:/var/www/stage-manager"
      - "./config/ssh:/root/.ssh"
    networks:
      - main

  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: stage-manager-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: whatever
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql/
    networks:
      - main

volumes:
  project:
    driver: local
    driver_opts:
      type: none
      device: $PWD/
      o: bind
  dbdata:
    driver: local

networks:
  main:
I'm fairly new to Docker, so if there's anything in my approach that I'm doing wrong, please let me know. I have a feeling this could be done much better, so feel free to suggest improvements.
Update
** DO NOT DO THIS **
Instead of deleting this answer I will leave it here so others can see that this is not a secure/valid solution to this problem
From David Maze's comment:
Remember that anyone who can access the Docker socket has unrestricted root-level access over the whole host system. I would not add the Docker socket in casually here.
I was able to make it work by sharing the socket between my host OS and the all container.
docker-compose.yml
all:
  working_dir: /var/www/stage-manager
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8080:80"
  volumes:
    - "./:/var/www/stage-manager"
    - "./config/ssh:/root/.ssh"
    - "/var/run/docker.sock:/var/run/docker.sock" # <- the important part
  networks:
    - main
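For completeness, a safer pattern than mounting the Docker socket (my suggestion, not part of the original answers) is to skip docker entirely and talk to the db container over the Compose network. Compose gives every service a DNS name on the shared network, so the all container can reach MySQL at host db. A sketch of createDatabase rewritten that way, assuming the mysql client is installed in the all image:

import { execSync } from "child_process";

export const createDatabase = (subdomain: string) => {
  const username = process.env.DB_USERNAME;
  const password = process.env.DB_PASSWORD;
  // `-h db` resolves to the db service on the shared `main` network;
  // in the non-Docker production setup the host would be 127.0.0.1 instead
  execSync(
    `mysql -h db -u ${username} -p${password} -e "CREATE DATABASE IF NOT EXISTS \\\`${subdomain}\\\`"`
  );
};

This removes the need for the isLocalEnviroment() branch and for any access to the Docker daemon.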
I am trying to use a Docker image on Google App Engine Flexible Environment.
FROM ubuntu:bionic
MAINTAINER Makina Corpus "contact@makina-corpus.com"
ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
RUN apt-get update -qq && apt-get install -y -qq \
# std libs
git less nano curl \
ca-certificates \
wget build-essential \
# python basic libs
python3.8 python3.8-dev python3.8-venv gettext \
# geodjango
gdal-bin binutils libproj-dev libgdal-dev \
# postgresql
libpq-dev postgresql-client && \
apt-get clean all && rm -rf /var/apt/lists/* && rm -rf /var/cache/apt/*
# install pip
RUN wget https://bootstrap.pypa.io/get-pip.py && python3.8 get-pip.py && rm get-pip.py
RUN pip3 install --no-cache-dir setuptools wheel -U
CMD ["/bin/bash"]
The Docker image appears to build correctly, but when the service deploys, the application crashes and I get this error message:
File "/Users/NAME/Documents/gcp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 183, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
This is failing because the Dockerfile installs a significantly outdated version of the GDAL package, which conflicts with the more recent Python installation.
How do I ensure that the Dockerfile has the correct package repository and installs the right, up-to-date versions? Is there some line I can insert to update the repository, or at least print the repository state, before it starts installing?
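One way to at least print the repository state before installing (a sketch, not from the original thread) is to dump the candidate versions apt would pick right after the update:

# Show which GDAL/PROJ versions this image's repositories would install
RUN apt-get update -qq && apt-cache policy gdal-bin libgdal-dev libproj-dev

On Ubuntu 18.04 the stock repositories pin fairly old GeoDjango packages; newer ones typically come from the UbuntuGIS PPA mentioned in the answer below.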
EDIT:
My app.yaml:
# [START django_app]
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT MyApplication.wsgi

runtime_config:
  python_version: 3
# [END runtime]

handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
#- url: /static
#  static_dir: static/
#- url: /MyApplication/static
#  static_dir: MyApplication/static/

# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]

resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
Your App Engine deployment is failing because App Engine needs a service listening on port 8080, and a bare bash CMD cannot provide one in the cloud. If you need to debug your App Engine Flex instance, you first need to get a service listening on port 8080, and then you can enable SSH.
Similar issues are being tackled here and here
Your Dockerfile should run a command that spins up your application listening on port 8080:
CMD gunicorn -b :$PORT MyApplication.wsgi
GAE actually spins up containers with docker run, and I am not sure why they would also have the entrypoint specified in the app.yaml file. Better not to ask too many questions with GAE.
Other issues for you to think about as mentioned in some of the comments above:
Wouldn't it be better to use Google's GAE base image (as mentioned in some of the comments above) -> FROM gcr.io/google-appengine/python?
If so, you need to consider it is based off Ubuntu 16.04 and you need to update dependencies (by adding the UbuntuGIS PPA: add-apt-repository -y ppa:ubuntugis/ppa)
How do you install your other dependencies? Running pip using a requirements file?
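If not, the usual pattern is worth adopting (a minimal sketch, assuming a requirements.txt at the project root):

# Install Python dependencies before copying the rest of the source,
# so Docker's layer cache skips reinstalling them when only code changes
COPY requirements.txt /app/requirements.txt
RUN pip3 install --no-cache-dir -r /app/requirements.txt
COPY . /app
WORKDIR /app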
I am building a docker image on macOS. This is the part of my Dockerfile:
FROM ubuntu:18.04
# Dir you need to override to keep data on reboot/new container:
VOLUME /var/lib/mysql
#VOLUME /var/www/MISP/Config
# Dir you might want to override in order to have custom ssl certs
# Need: "misp.key" and "misp.crt"
#VOLUME /etc/ssl/private
# 80/443 - MISP web server, 3306 - mysql, 6379 - redis, 50000 - MISP ZeroMQ
EXPOSE 80 443 3306 6379 50000
ENV DEBIAN_FRONTEND noninteractive
ENV DEBIAN_PRIORITY critical
RUN apt-get update && apt-get install -y supervisor cron logrotate syslog-ng-core postfix curl gcc git gnupg-agent make python3 openssl redis-server sudo vim zip wget mariadb-client mariadb-server sqlite3 moreutils apache2 apache2-doc apache2-utils libapache2-mod-php php php-cli php-gnupg php-dev php-json php-mysql php7.2-opcache php-readline php-redis php-xml php-mbstring rng-tools python3-dev python3-pip python3-yara python3-redis python3-zmq libxml2-dev libxslt1-dev zlib1g-dev python3-setuptools libpq5 libjpeg-dev libfuzzy-dev ruby asciidoctor tesseract-ocr imagemagick libpoppler-cpp-dev
RUN mkdir -p /var/www/MISP /root/.config /root/.git
WORKDIR /var/www/MISP
RUN chown -R www-data:www-data /var/www/MISP /root/.config /root/.git
RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
When I build the image, it returns this error:
Step 16/16 : RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
---> Running in f0f0b76b353c
Cloning into '/var/www/MISP'...
fatal: unable to access 'https://github.com/MISP/MISP.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
It can't verify the certificate. Even if I add RUN git config --global http.sslverify "false" to get past this error, the rest of the Dockerfile then fails with similar errors.
Does this mean that my Docker for macOS has a problem with SSL connections? Is there any configuration for SSL certificates, or another SSL setting for Docker on a Mac, that I have missed?
I have run into the same issue. Exactly the same error message.
What you have to realize here is that the git command you are running is inside the docker environment and has nothing to do with the fact that you are on MacOS, or the git version you have installed in your operating system.
What is happening here is that the plain ubuntu:18.04 image is outdated and doesn't have up-to-date certificates.
In my case I fixed this by adding the following right after the FROM line:
FROM ubuntu:18.04
# Update the OS
RUN apt-get update && apt-get upgrade -y
In my case it was another base image but the behavior/solution was the same.
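If a full apt-get upgrade pulls in more than you want, refreshing just the CA bundle may be enough; this is an assumption on my part, not something verified in the original setup:

# Lighter alternative: refresh only the certificates TLS verification needs
RUN apt-get update && apt-get install -y ca-certificates && update-ca-certificates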
I'm trying to deploy the following container on Google App Engine using gcloud app deploy; it's the vanilla meanjs.org image. It uses a Dockerfile, and I'm new to Docker and trying to learn it on the fly, so if anyone can help that'd be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically from what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this question shortly.
(The errors were attached as two screenshots, a Docker error followed by a build-failure error; they are not reproduced here.)
The Dockerfile:
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copy the local package.json file into the container and use the Docker
# layer cache so node_modules/ is only rebuilt and reinstalled when the
# local package.json file changes, not on every build.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much wrestling, unsuccessfully trying to install Docker on Windows, I went back to the Dockerfile to try to identify the core issue here. Fortunately I found a solution, as follows.
Node.js fails to install on the latest Ubuntu release.
In the Dockerfile at the root of the app, the Ubuntu version is configured as:
FROM ubuntu:latest
simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully. Please feel free to amend this or recommend an alternative solution; I'm new to Docker, so please be kind.
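As an alternative to pinning the Ubuntu release (my suggestion, not part of the original answer), you could start from the official Node image, so the Node version no longer depends on which Ubuntu releases the NodeSource script supports:

# node:6 matches the setup_6.x script used above; Node and npm are preinstalled,
# so the curl | bash NodeSource step can be dropped entirely
FROM node:6
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
WORKDIR /opt/mean.js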