I am building a Docker image on macOS. This is the relevant part of my Dockerfile:
FROM ubuntu:18.04
# Dir you need to override to keep data on reboot/new container:
VOLUME /var/lib/mysql
#VOLUME /var/www/MISP/Config
# Dir you might want to override in order to have custom ssl certs
# Need: "misp.key" and "misp.crt"
#VOLUME /etc/ssl/private
# 80/443 - MISP web server, 3306 - mysql, 6379 - redis, 50000 - MISP ZeroMQ
EXPOSE 80 443 3306 6379 50000
ENV DEBIAN_FRONTEND noninteractive
ENV DEBIAN_PRIORITY critical
RUN apt-get update && apt-get install -y supervisor cron logrotate syslog-ng-core postfix curl gcc git gnupg-agent make python3 openssl redis-server sudo vim zip wget mariadb-client mariadb-server sqlite3 moreutils apache2 apache2-doc apache2-utils libapache2-mod-php php php-cli php-gnupg php-dev php-json php-mysql php7.2-opcache php-readline php-redis php-xml php-mbstring rng-tools python3-dev python3-pip python3-yara python3-redis python3-zmq libxml2-dev libxslt1-dev zlib1g-dev python3-setuptools libpq5 libjpeg-dev libfuzzy-dev ruby asciidoctor tesseract-ocr imagemagick libpoppler-cpp-dev
RUN mkdir -p /var/www/MISP /root/.config /root/.git
WORKDIR /var/www/MISP
RUN chown -R www-data:www-data /var/www/MISP /root/.config /root/.git
RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
When I build the image, it returns this error:
Step 16/16 : RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
---> Running in f0f0b76b353c
Cloning into '/var/www/MISP'...
fatal: unable to access 'https://github.com/MISP/MISP.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
It can't find the certificate. Even if I add RUN git config --global http.sslverify "false", that only gets past this step; the rest of the Dockerfile then fails with similar errors.
Does this mean my Docker for macOS has a problem with SSL connections? Is there any configuration for SSL certificates, or some other SSL setting for Docker on Mac, that I missed?
I have run into the same issue, with exactly the same error message.
What you have to realize here is that the git command is running inside the Docker build environment; it has nothing to do with the fact that you are on macOS, or with the git version installed in your host operating system.
What is happening is that the plain ubuntu:18.04 image is outdated and doesn't have up-to-date certificates.
In my case I fixed this by adding the following right after the FROM line:
FROM ubuntu:18.04
# Update the OS
RUN apt-get update && apt-get upgrade -y
In my case it was another base image, but the behavior and the solution were the same.
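If a full upgrade is heavier than you want, a lighter variant that only refreshes the CA bundle may also work (a sketch, assuming the failure really is a stale certificate store):
FROM ubuntu:18.04
# Reinstall the CA bundle and regenerate /etc/ssl/certs without upgrading every package:
RUN apt-get update && apt-get install -y --reinstall ca-certificates && update-ca-certificates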
So, my development team was trying to migrate from GCE to Cloud Run, and we succeeded in deploying the Cloud Run service and the CI/CD using GitHub Actions. But we encountered an issue: the Cloud Run service cannot handle more than 100 concurrent requests.
Our base framework for the app is PHP/CodeIgniter, the web server is Apache, and the database is SQL Server, whose client drivers we install in our base Dockerfile:
FROM php:7.4.22-apache
USER root
RUN apt-get update && apt-get upgrade -y
RUN apt-get update && apt-get install -y gnupg2
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y zlib1g-dev
RUN apt-get install libpng-dev -y
RUN docker-php-ext-install curl
RUN docker-php-ext-install gd
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN apt-get install -y wget
RUN wget http://ftp.de.debian.org/debian/pool/main/g/glibc/multiarch-support_2.28-10+deb10u1_amd64.deb
RUN dpkg -i multiarch-support_2.28-10+deb10u1_amd64.deb
RUN apt-get install -y libodbc1
RUN apt-get install -y unixodbc-dev
RUN pecl install sqlsrv
RUN pecl install pdo_sqlsrv
RUN echo "extension=pdo_sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.:\s||"`/30-pdo_sqlsrv.ini
RUN echo "extension=sqlsrv.so" >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.:\s||"`/30-sqlsrv.ini
RUN rm multiarch-support_2.28-10+deb10u1_amd64.deb
# RUN ACCEPT_EULA=Y apt-get -y install mssql-tools
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
COPY apache2.conf /etc/apache2/apache2.conf
COPY openssl.cnf /etc/ssl/openssl.cnf
COPY php.ini /etc/php/7.4/apache2/php.ini
RUN a2enmod rewrite
RUN /etc/init.d/apache2 restart
And this is the Dockerfile that we used for the app itself:
FROM jamesjones/test-base:latest
USER root
COPY . /var/www/html
# RUN cd does not persist across layers; WORKDIR is the equivalent that does:
WORKDIR /var/www/html
RUN chown -R www-data:www-data /var/www/html
COPY v1/application/config/config.prod.php /var/www/html/v1/application/config/config.php
COPY v1/application/config/database.prod.php /var/www/html/v1/application/config/database.php
COPY v1/application/config/routes.prod.php /var/www/html/v1/application/config/routes.php
COPY v2/application/config/config.prod.php /var/www/html/v2/application/config/config.php
COPY v2/application/config/database.prod.php /var/www/html/v2/application/config/database.php
COPY v2/application/config/routes.prod.php /var/www/html/v2/application/config/routes.php
COPY .htaccess.prod /var/www/html/.htaccess
VOLUME /var/www/html
I have tried the steps in this article, but the problem still persists:
https://cloud.google.com/blog/topics/developers-practitioners/3-ways-optimize-cloud-run-response-times
This is our Cloud Run specification:
I have also tried increasing the minimum number of instances in autoscaling to 10, but there seems to be no difference.
Are there any alternatives to fix this issue?
It turns out that the problem was the VPC connector. Our database is reached over TCP/IP, and we have to whitelist a public IP to access it securely, which is why we were using the VPC connector. We switched to binding our IP with MikroTik instead of using the VPC connector, and the Cloud Run service can finally handle 10,000 requests.
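For anyone tuning the same limit: Cloud Run also caps concurrent requests per instance (80 by default), and that cap can be raised independently of the networking fix. A sketch, with SERVICE and REGION as placeholders:
# Raise per-instance concurrency and the instance ceiling for an existing service:
gcloud run services update SERVICE --region=REGION --concurrency=250 --max-instances=20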
I am trying to use a Docker image on Google App Engine Flexible Environment.
FROM ubuntu:bionic
MAINTAINER Makina Corpus "contact#makina-corpus.com"
ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
RUN apt-get update -qq && apt-get install -y -qq \
    # std libs
    git less nano curl \
    ca-certificates \
    wget build-essential \
    # python basic libs
    python3.8 python3.8-dev python3.8-venv gettext \
    # geodjango
    gdal-bin binutils libproj-dev libgdal-dev \
    # postgresql
    libpq-dev postgresql-client && \
    apt-get clean all && rm -rf /var/lib/apt/lists/* && rm -rf /var/cache/apt/*
# install pip
RUN wget https://bootstrap.pypa.io/get-pip.py && python3.8 get-pip.py && rm get-pip.py
RUN pip3 install --no-cache-dir setuptools wheel -U
CMD ["/bin/bash"]
The Docker image appears to build correctly, but when the service deploys, the application crashes and I get this error message:
File "/Users/NAME/Documents/gcp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 183, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
This is failing because the Dockerfile installs a significantly outdated version of the GDAL package, which conflicts with the more recent Python installation.
How do I ensure that the Dockerfile points at the correct package repository and installs the right, up-to-date versions? Is there some line I can insert to update the repository, or at least print the repository, before it starts installing?
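For example, would a line like this be enough to at least print which repository and candidate version apt would pick? (A sketch using standard apt tooling; gdal-bin and libgdal-dev are the packages from the Dockerfile above.)
# Print candidate versions and the repositories they come from:
RUN apt-get update && apt-cache policy gdal-bin libgdal-dev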
EDIT:
My app.yaml:
# [START django_app]
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT MyApplication.wsgi
runtime_config:
  python_version: 3
# [END runtime]
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
#- url: /static
#  static_dir: static/
#- url: /MyApplication/static
#  static_dir: MyApplication/static/
# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
Your App Engine deployment is failing because App Engine needs a service listening on port 8080, and a bare /bin/bash CMD does not provide one. If you need to debug your App Engine Flex instance, you first need to get a service listening on port 8080 and then enable SSH.
Similar issues are being tackled here and here
Your Dockerfile should run a command that spins up your application listening on port 8080:
CMD gunicorn -b :$PORT MyApplication.wsgi
GAE actually spins up containers with docker run, and I am not sure why they would also have the entrypoint specified in the app.yaml file. Better not to ask too many questions with GAE.
Other issues for you to think about, as mentioned in some of the comments above:
Wouldn't it be better to use Google's GAE base image: FROM gcr.io/google-appengine/python?
If so, you need to consider that it is based on Ubuntu 16.04, and you will need to update dependencies (by adding the UbuntuGIS PPA: add-apt-repository -y ppa:ubuntugis/ppa).
How do you install your other dependencies? Running pip using a requirements file?
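If you go the requirements-file route, the tail of your Dockerfile could look something like this (a sketch; it assumes a requirements.txt and the MyApplication.wsgi module referenced in your app.yaml exist in the build context):
# Install Python dependencies first so Docker can cache the layer:
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /app
# Serve on $PORT (8080 on App Engine Flex) instead of running bash:
CMD exec gunicorn -b :$PORT MyApplication.wsgi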
I am trying to build an NGINX Plus Docker image with CentOS 7. Below is the Dockerfile I am using:
FROM centos:7
MAINTAINER NGINX Docker Maintainers "x#x.com"
RUN yum install -y wget
# Copy certificate and key to the build context
ADD nginx-repo.crt /etc/ssl/nginx/
ADD nginx-repo.key /etc/ssl/nginx/
COPY nginx.conf /etc/nginx/nginx.conf
# Get other files required for installation
RUN wget -q -O /etc/ssl/nginx/CA.crt https://cs.nginx.com/static/files/CA.crt
RUN wget -q -O /etc/yum.repos.d/nginx-plus-7.repo https://cs.nginx.com/static/files/nginx-plus-7.repo
# Install NGINX Plus
RUN yum install -y nginx-plus
# forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I have the nginx-repo.crt and nginx-repo.key for the developer license. When I run docker build with this, I get the error below:
Step 10/14 : RUN yum install -y nginx-plus
---> Running in 4e8ccb452b81
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirrors.mit.edu
* extras: mirror.umd.edu
* updates: centos.servint.com
https://plus-pkgs.nginx.com/centos/7/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden
Trying other mirror.
To address this issue please refer to the below wiki article
https://wiki.centos.org/yum-errors
If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
One of the configured repositories failed (nginx-plus repo),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
    1. Contact the upstream for the repository and get them to fix the problem.
    2. Reconfigure the baseurl/etc. for the repository, to point to a working
       upstream. This is most often useful if you are using a newer
       distribution release than is supported by the repository (and the
       packages for the previous distribution release still work).
    3. Run the command with the repository temporarily disabled
           yum --disablerepo=nginx-plus ...
    4. Disable the repository permanently, so yum won't use it by default. Yum
       will then just ignore the repository until you permanently enable it
       again or use --enablerepo for temporary usage:
           yum-config-manager --disable nginx-plus
       or
           subscription-manager repos --disable=nginx-plus
    5. Configure the failing repository to be skipped, if it is unavailable.
       Note that yum will try to contact the repo. when it runs most commands,
       so will have to try and fail each time (and thus. yum will be be much
       slower). If it is a very temporary problem though, this is often a nice
       compromise:
           yum-config-manager --save --setopt=nginx-plus.skip_if_unavailable=true
failure: repodata/repomd.xml from nginx-plus: [Errno 256] No more mirrors to try.
https://plus-pkgs.nginx.com/centos/7/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 403 - Forbidden
The command '/bin/sh -c yum install -y nginx-plus' returned a non-zero code: 1
I couldn't figure out what the issue is. The steps before
RUN yum install -y nginx-plus
all succeed.
UPDATE
The issue was fixed after replacing the repo path and switching to installing the CA certificates package. Updated Dockerfile:
FROM centos:centos7
MAINTAINER NGINX Docker Maintainers "docker-maint#nginx.com"
RUN yum install -y wget
# Download certificate and key from the customer portal (https://cs.nginx.com)
# and copy to the build context
ADD nginx-repo.crt /etc/ssl/nginx/
ADD nginx-repo.key /etc/ssl/nginx/
RUN yum install -y ca-certificates
#Get other files required for installation
RUN wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-7.4.repo
#Install NGINX Plus
RUN yum install -y nginx-plus
#forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Looks like that repo is no longer available. As per the NGINX docs, you can try the latest nginx-plus repo:
sudo wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-7.4.repo
So your Dockerfile will look like:
FROM centos:7
MAINTAINER NGINX Docker Maintainers "x#x.com"
RUN yum install -y wget
# Copy certificate and key to the build context
ADD nginx-repo.crt /etc/ssl/nginx/
ADD nginx-repo.key /etc/ssl/nginx/
COPY nginx.conf /etc/nginx/nginx.conf
# Get other files required for installation
RUN yum install -y ca-certificates
RUN wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-7.4.repo
# Install NGINX Plus
RUN yum install -y nginx-plus
# forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I'm trying to deploy the following container on Google App Engine using gcloud app deploy; it's the meanjs.org vanilla image. It uses a Dockerfile, and I'm new to Docker and trying to learn it on the fly, so if anyone can help, that'd be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically from what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning and will update this query shortly.
The errors are in the attached screenshots: the Docker error and the build-failure error.
The Dockerfile:
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and utilities docker container cache to not needing to rebuild
# and install node_modules/ everytime we build the docker, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much unsuccessful wrestling with installing Docker on Windows, I went back to the Dockerfile to try to identify the core issue. Fortunately, I found a solution, as follows.
Node.js is failing to install on the latest Ubuntu.
In the Dockerfile at the root of the app, the Ubuntu version is configured as:
FROM ubuntu:latest
Simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully; the old setup_6.x script apparently doesn't support the newer release that ubuntu:latest now resolves to. Please feel free to amend or recommend an alternative solution. I'm new to Docker, so please be kind.
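An alternative worth considering (a sketch, assuming the app targets Node 6): start from the official Node image instead of installing Node.js on top of Ubuntu, which sidesteps the setup script entirely:
# Base image with Node 6 preinstalled, so no NodeSource setup script is needed:
FROM node:6
EXPOSE 80 443 3000 35729 8080
WORKDIR /opt/mean.js
# Copy package.json first so the npm install layer is cached between builds:
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet
COPY . /opt/mean.js
CMD npm install && npm start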
I have the following Dockerfile and was wondering what I would need to do in order to get access to it from my host machine by visiting myapp.dev:
FROM ubuntu:16.04
USER root
RUN apt-get update && apt-get -y upgrade && apt-get install apt-utils -y && DEBIAN_FRONTEND=noninteractive apt-get -y install \
    apache2 php7.0 php7.0-mysql libapache2-mod-php7.0 curl lynx-cur git
EXPOSE 80
ADD www /var/www/site
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
CMD /usr/sbin/apache2ctl -D FOREGROUND
EXPOSE 80
I am using the following command to run the container:
docker run -d -p 8080:80
If you only want to be able to resolve it locally you could add an alias for localhost in your hosts file.
Locate your hosts file.
Linux: /etc/hosts
macOS: /private/etc/hosts
Windows: C:\Windows\System32\drivers\etc\hosts
Add this line at the end of the file:
127.0.0.1 myapp.dev
Now you can access your container using myapp.dev:8080.
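You can verify the alias from a terminal (assuming the container was started with -p 8080:80 as above):
# Fetch just the response headers through the new hostname:
curl -I http://myapp.dev:8080/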