Google Cloud Run can only handle a maximum of 100 concurrent requests - docker

So, my development team was trying to migrate from GCE to GCR, and we succeeded in deploying the Cloud Run service and the CI/CD using GitHub Actions. But we ran into an issue: the Cloud Run service cannot handle more than 100 concurrent requests.
Our app's base framework is PHP/CodeIgniter, the web server is Apache2, and the database is SQL Server, whose drivers we already install in our Dockerfile. This is the Dockerfile for our base image:
FROM php:7.4.22-apache
USER root

# Install system packages needed to build the PHP extensions
RUN apt-get update && apt-get upgrade -y
RUN apt-get update && apt-get install -y gnupg2
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install -y libpng-dev
RUN docker-php-ext-install curl
RUN docker-php-ext-install gd

# Add the Microsoft package repository for the SQL Server ODBC driver
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN apt-get install -y wget
RUN wget http://ftp.de.debian.org/debian/pool/main/g/glibc/multiarch-support_2.28-10+deb10u1_amd64.deb
RUN dpkg -i multiarch-support_2.28-10+deb10u1_amd64.deb
RUN apt-get install -y libodbc1
RUN apt-get install -y unixodbc-dev

# Build and register the sqlsrv / pdo_sqlsrv PHP extensions
RUN pecl install sqlsrv
RUN pecl install pdo_sqlsrv
RUN echo "extension=pdo_sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/30-pdo_sqlsrv.ini
RUN echo "extension=sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/30-sqlsrv.ini
RUN rm multiarch-support_2.28-10+deb10u1_amd64.deb
# RUN ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17

# Apache, OpenSSL and PHP configuration
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY apache2.conf /etc/apache2/apache2.conf
COPY openssl.cnf /etc/ssl/openssl.cnf
COPY php.ini /etc/php/7.4/apache2/php.ini
RUN a2enmod rewrite
RUN /etc/init.d/apache2 restart
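For context, this base image is built and pushed separately so that the application Dockerfile below can pull it. A minimal sketch of that step, assuming the image is published under the tag referenced in the FROM line of the next Dockerfile (the registry workflow itself is our assumption):
# Build and publish the base image referenced by the application Dockerfile
docker build -t jamesjones/test-base:latest .
docker push jamesjones/test-base:latest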
And this is the application Dockerfile that we build on top of it:
FROM jamesjones/test-base:latest
USER root

# Copy the application code and hand ownership to the Apache user
COPY . /var/www/html
WORKDIR /var/www/html
RUN chown -R www-data:www-data /var/www/html

# Overwrite the CodeIgniter configs with their production versions
COPY v1/application/config/config.prod.php /var/www/html/v1/application/config/config.php
COPY v1/application/config/database.prod.php /var/www/html/v1/application/config/database.php
COPY v1/application/config/routes.prod.php /var/www/html/v1/application/config/routes.php
COPY v2/application/config/config.prod.php /var/www/html/v2/application/config/config.php
COPY v2/application/config/database.prod.php /var/www/html/v2/application/config/database.php
COPY v2/application/config/routes.prod.php /var/www/html/v2/application/config/routes.php
COPY .htaccess.prod /var/www/html/.htaccess

# Declare the web root as a volume
VOLUME /var/www/html
I have tried the steps described here and the problem still persists:
https://cloud.google.com/blog/topics/developers-practitioners/3-ways-optimize-cloud-run-response-times
This is our Cloud Run specification (screenshot not reproduced here).
I have also tried increasing the minimum number of instances in autoscaling to 10, but there seems to be no difference.
Are there any alternatives for this issue?
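For anyone checking the same settings, the per-instance concurrency limit (80 by default on Cloud Run) and the minimum number of instances can also be adjusted from the CLI. A minimal sketch, assuming a hypothetical service name and region; the values shown are just examples, not recommendations:
# Raise per-instance concurrency and keep warm instances around
# (service name and region are placeholders)
gcloud run services update my-service \
  --region=us-central1 \
  --concurrency=250 \
  --min-instances=10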

It turns out the problem was the VPC connector. Our database is reached over TCP/IP and we have to whitelist a public IP to access it securely, which is why we were routing traffic through the VPC connector. We switched to binding our IP with a MikroTik router instead of using the VPC connector, and the Cloud Run service can finally handle 10,000 requests.

Related

Docker container remains in exited status

Hi, I'm building a Docker image that starts MySQL, Elasticsearch and the application. The image builds successfully, but when I run the container it doesn't start and remains in exited status.
DOCKERFILE:
FROM ubuntu:20.04
# NOTE: this is a Windows/WSL UNC path; inside a Linux container WORKDIR
# should be a Linux path such as /app
WORKDIR \\wsl.localhost\Ubuntu\home\marco\project\multiple-docker-container\backend
COPY . .
RUN apt update
RUN apt-get update
RUN apt-get install wget -y
RUN apt install apt-transport-https ca-certificates gnupg -y
# Add the Elastic APT repository and install Elasticsearch
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN sh -c 'echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list'
RUN apt-get update -y
RUN apt-get install elasticsearch -y
RUN apt install mysql-server -y
# Install Node.js/npm and the application dependencies
RUN apt install npm -y
RUN npm install
RUN npm install nodemon
RUN apt install nodejs -y
EXPOSE 3306
EXPOSE 9200
EXPOSE 5000
COPY entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
ENTRYPOINT ["/bin/entrypoint.sh"]
ENTRYPOINT.SH
#!/bin/sh
# Start the background services, then run the app in the foreground
service elasticsearch start
# note: "enable" is a systemctl verb, not a standard "service" action
service elasticsearch enable
service mysql start
npm start
I expect it at least to come up. I would like to understand what the problem is, since Ubuntu on WSL gives me this problem while on a virtual machine it works.
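For what it's worth, the exit reason is usually visible with the standard Docker CLI; a quick sketch (the image and container names are placeholders):
# Show the exit code and the output of the failed entrypoint
docker ps -a
docker logs <container-id>
# Bypass the entrypoint and start a shell to try the services by hand
docker run -it --entrypoint /bin/sh my-backend-image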

Dockerfile to create an image to be used as a container agent in Jenkins

I am trying to use a Node agent container in Jenkins to run npm instructions on it. To do that, I am creating a Dockerfile to get a valid image with SSH and Node.js. The executor runs fine, but when I use npm it says it doesn't recognize the command.
The same problem happens when (after building the Dockerfile) I do docker exec -it af5451297d85 bash and then, inside the container, try to run npm --v (for example).
# This Dockerfile is used to build an image containing an node jenkins agent
FROM node:9.0
MAINTAINER Estefania Castro <estefania.castro#luceit.es>
# Upgrade and Install packages
RUN apt-get update && apt-get -y upgrade && apt-get install -y git openssh-server
# Install NGINX to test.
RUN apt-get install nginx -y
# Prepare container for ssh
RUN mkdir /var/run/sshd && adduser --quiet jenkins && echo "jenkins:jenkins" | chpasswd
RUN npm install
ENV CI=true
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I would like to run npm instructions like npm install, npm publish, ... to manage my project in a jenkinsfile. Could anyone help?
Thanks
I have already solved the problem (after two weeks haha).
FROM jenkins/ssh-slave
# Install selected extensions and other stuff
RUN apt-get update && apt-get -y --no-install-recommends install && apt-get clean
RUN apt-get install -y curl
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs && apt-get install -y nginx
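As a quick sanity check before wiring the image into Jenkins, the agent can be built and probed for Node.js directly; a sketch with a hypothetical tag:
# Build the agent image and confirm node/npm are on the PATH
docker build -t jenkins-node-agent .
docker run --rm --entrypoint /bin/sh jenkins-node-agent -c "node -v && npm -v"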

Gcloud meanjs build failing via docker install

I'm trying to deploy the following container on Google Cloud App Engine using gcloud app deploy; it's the meanjs.org vanilla image and it uses a Dockerfile. I'm new to Docker and trying to learn it on the fly, so if anyone can help, that'd be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically from what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this query shortly.
The errors are shown in the attached screenshots (the Docker error and the build failure; images not reproduced here).
The Dockerfile:
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and uses the Docker build cache so that node_modules/ does not need to be
# reinstalled every time we build the image, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much unsuccessful wrestling trying to install Docker on Windows, I went back to the Dockerfile to try to identify the core issue. Fortunately I found a solution, as follows.
Node.js was failing to install on the latest Ubuntu image.
In the Dockerfile at the root of the app,
the Ubuntu version is configured as:
FROM ubuntu:latest
simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully. Please feel free to amend or recommend an alternative solution. I'm new to Docker, so please be kind.
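With the base image pinned, the build and run commands from the header of the Dockerfile work as before; a sketch based on those comments (the port mapping is our addition, matching the EXPOSE 3000 line for the MEAN.JS server):
docker build -t meanjs/mean .
docker run -it -p 3000:3000 meanjs/mean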

docker-compose update from S3 bucket

Our Dockerfile invokes a Python script which copies a binary from S3 to /usr/bin. This works fine the first time, but from then on "docker-compose build" does nothing because everything is cached. This is a problem if the binary has changed.
Short of building with --no-cache, what is the best way to make sure "docker-compose build" always picks up the new binary if there is one? We don't mind if it unnecessarily downloads the binary even when it is unchanged, as long as it does pick it up when the binary has changed.
It seems like we want a Dockerfile step that always executes?
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install software-properties-common
RUN apt-get -y install --reinstall ca-certificates
RUN add-apt-repository ppa:fkrull/deadsnakes
RUN apt-get update && apt-get install -y \
curl \
wget \
vim \
git \
python3.5 \
python3-pip \
python3-setuptools \
libpcap0.8-dev
RUN ln -sf /usr/bin/python3.5 /usr/bin/python3
ADD . /app
WORKDIR /app
# Install Python Requirements
RUN pip3 install -r etc/python/requirements.txt
# Download/Install processor and associated libs
RUN python3 setup_processor.py
RUN mkdir -p /logs
ENTRYPOINT ["/app/entrypoint.sh"]
Where setup_processor.py downloads directly from S3 to /usr/bin.
As of now there is no direct feature for this, but there is a workaround for your situation.
Add a build argument before your download step:
ARG BUILD_ON=now
# Download/Install processor and associated libs
RUN python3 setup_processor.py
While building the image, use the command below:
docker build --build-arg BUILD_ON="$(date)" ...
This will always make sure that you get a change in the ARG step, and all steps cached after that will be invalidated.
A feature for this has already been requested and is being worked on in the thread below:
https://github.com/moby/moby/issues/1996
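Since the build goes through docker-compose, the same argument can also be wired through the Compose file; a minimal sketch (the service name and default value are assumptions):
# docker-compose.yml (sketch): forward the cache-busting build arg
version: "3"
services:
  app:
    build:
      context: .
      args:
        BUILD_ON: "${BUILD_ON:-now}"
It can then be built with something like BUILD_ON=$(date +%s) docker-compose build.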

Supervisor is not starting up

I am following the Cloudera CDH4 installation guide.
My base Dockerfile:
FROM ubuntu:precise
RUN apt-get update -y
#RUN apt-get install -y curl
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:webupd8team/java
RUN apt-get update -y
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN apt-get install -y oracle-java7-installer
#Checking java version
RUN java -version
My Hadoop installation Dockerfile (java_ubuntu is the image built from my base file):
FROM java_ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y curl
RUN curl http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb > cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update -y
RUN apt-get install -y hadoop-0.20-conf-pseudo
#Check for /etc/hadoop/conf.pseudo.mrl to verify hadoop packages
RUN echo "dhis"
RUN dpkg -L hadoop-0.20-conf-pseudo
Supervisor part (hadoop_ubuntu is the image built from my Hadoop installation Dockerfile):
FROM hadoop_ubuntu:latest
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y supervisor
RUN echo "[supervisord] nodameon=true [program=namenode] command=/etc/init.d/hadoop-hdfs-namenode -D" > /etc/supervisorconf.d
CMD ["/usr/bin/supervisord"]
The image builds successfully, but the namenode is not starting up. How do I use supervisor?
You have your config in /etc/supervisorconf.d and I don't believe that's the right location.
It should be /etc/supervisor/conf.d/supervisord.conf instead.
Also it's easier to maintain if you make a file locally and then use the COPY instruction to put it in the image.
Then as someone mentioned you can connect to the container after it's running (docker exec -it <container id> /bin/bash) and then run supervisorctl to see what's running and what might be wrong.
Perhaps you just need line breaks in your supervisor config. Try hand-crafting one and using COPY to put it in your Dockerfile for testing, along the lines of the sketch below.
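A minimal sketch of such a hand-crafted file and the Dockerfile lines that would replace the echo (the program command is copied from the question; the exact paths are assumptions based on the default supervisor layout):
# supervisord.conf (sketch)
[supervisord]
nodaemon=true

[program:namenode]
command=/etc/init.d/hadoop-hdfs-namenode -D
And in the Dockerfile, instead of the echo line:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]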
Docker and supervisord
