I am working with the following Dockerfile:
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install -y \
curl \
apache2 \
php5 \
php5-cli \
libapache2-mod-php5 \
php5-gd \
php5-json \
php5-mcrypt \
php5-mysql \
php5-curl \
php5-memcached \
php5-mongo \
zend-framework
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown www-data /usr/local/bin/composer && composer --version
# Install useful PHP tools
RUN composer global require sebastian/phpcpd && \
composer global require phpmd/phpmd && \
composer global require squizlabs/php_codesniffer
# Install xdebug after we install composer since it causes issues
# see https://getcomposer.org/doc/articles/troubleshooting.md#xdebug-impact-on-composer
RUN apt-get install -y php5-xdebug
As you may notice, this installs PHP 5.5.x, and it comes with the default configuration, which I would like to override with my own values.
I have the following directory structure:
docker-php55/
├── container-files
│ ├── config
│ │ └── init
│ │ └── vhost_default
│ └── etc
│ └── php.d
│ ├── zz-php-directories.ini
│ └── zz-php.ini
├── Dockerfile
├── LICENSE
├── README.md
└── run
The files zz-php-directories.ini and zz-php.ini hold my configuration overrides, which should be applied to /etc/php5/apache2/php.ini upon image creation. The content of the files is the following:
zz-php.ini
; Basic configuration override
expose_php = Off
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
date.timezone = UTC
max_execution_time = 120
; Error reporting
display_errors = stderr
display_startup_errors = Off
error_reporting = E_ALL
; A bit of performance tuning
realpath_cache_size = 128k
; OpCache tuning
opcache.max_accelerated_files = 32000
; Temporarily disable using HUGE PAGES by OpCache.
; This should improve performance, but requires appropriate OS configuration
; and for now it often results with some weird PHP warning:
; PHP Warning: Zend OPcache huge_code_pages: madvise(HUGEPAGE) failed: Invalid argument (22) in Unknown on line 0
opcache.huge_code_pages=0
; Xdebug
[Xdebug]
xdebug.remote_enable = true
xdebug.remote_host = "192.168.3.1" ; this IP should be the host IP
xdebug.remote_port = "9001"
xdebug.idekey = "XDEBUG_PHPSTORM"
zz-php-directories.ini
; Configure temp path locations
sys_temp_dir = /data/tmp/php
upload_tmp_dir = /data/tmp/php/uploads
session.save_path = /data/tmp/php/sessions
uploadprogress.file.contents_template = "/data/tmp/php/upload_contents_%s"
uploadprogress.file.filename_template = "/data/tmp/php/upt_%s.txt"
How do I override the default php.ini parameters on the image with the ones on those files upon image creation?
EDIT: Improve the question
To give an example: zz-php.ini is a local file on my laptop/PC. As soon as I install PHP in the image, it comes with a default configuration file, which means I end up with a file under /etc/php5/apache2/php.ini.
This default configuration file already has default values, for example expose_php = On (again, this is the default; others are commented out, like ;realpath_cache_size =), so what I want to do is replace the values in the default file with the values from my file, in other words:
default (as in /etc/php5/apache2/php.ini) expose_php = On
override (as in zz-php.ini) expose_php = Off
In the end, the values from zz-php.ini should be written over those in /etc/php5/apache2/php.ini.
As for the host IP address, I think I could use an ENV var and pass it to the build as an argument, am I right? If not, how would you get the host IP address needed for that setup?
That's two questions.
1) Just use the COPY instruction to copy your local php.ini into the image location. E.g.:
COPY php.ini /etc/php5/apache2/php.ini
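Alternatively, if you would rather keep the stock php.ini untouched and only layer your overrides on top, the Debian/Ubuntu PHP 5 packages also scan /etc/php5/apache2/conf.d/ for additional .ini files, and values found there win over the ones in php.ini. A minimal sketch, assuming that directory exists in your base image and reusing the file names from your tree:
COPY container-files/etc/php.d/zz-php.ini /etc/php5/apache2/conf.d/zz-php.ini
COPY container-files/etc/php.d/zz-php-directories.ini /etc/php5/apache2/conf.d/zz-php-directories.ini
The zz- prefix keeps them sorted after the package-provided snippets, so they are parsed last.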
2) You don't want to hardcode any IP into your image. That needs to be done when the container is started. The standard way of doing this with Docker is to specify an environment variable like HOST_IP and use a shell script to make the modifications in the container at start time. For instance:
Your inject.sh script:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/php.ini
You need to add the inject.sh file to your image when you build it.
COPY inject.sh /usr/local/bin/
Then you can initialize and start your container as follows:
docker run -e HOST_IP=53.62.10.12 mycontainer bash -c "inject.sh && exec myphpapp"
The exec is needed to make sure myphpapp becomes the main process of the container (i.e., it has PID 1); otherwise it won't receive signals such as the one sent by Ctrl-C.
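If you would rather not repeat the bash -c wrapper on every docker run, a variation on the same idea is to fold the injection into an entrypoint script. This is only a sketch; myphpapp stands in for whatever your real command is.
Your entrypoint.sh script:
#!/bin/sh
# Substitute the host IP at container start, then hand off to the real command.
set -e
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=${HOST_IP:-127.0.0.1}/" /etc/php5/apache2/php.ini
exec "$@"
In the Dockerfile:
COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["myphpapp"]
With that in place, docker run -e HOST_IP=53.62.10.12 mycontainer is enough, and the exec again ensures the app ends up with PID 1.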
Related
I am trying to build a docker image that contains all of the necessary plugins/providers that several source repos need, so that when an automated terraform validate runs, it doesn't have to download gigs of redundant data.
However, I recognize that this creates a maintenance problem: someone may update a plugin version, and that version would need to be downloaded, since the Docker image would not contain it.
The question
How can I pre-download all providers and plugins
Tell the CLI to use those pre-downloaded plugins AND
also tell it that, if it doesn't find what it needs locally, then it can go to the network
Below are the relevant files:
.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true
provider_installation {
filesystem_mirror {
path = "$HOME/.terraform/providers"
}
direct {
}
}
.tflint.hcl (not relevant to this question, but it shows up in the Dockerfile below)
plugin "aws" {
enabled = true
version = "0.21.1"
source = "github.com/terraform-linters/tflint-ruleset-aws"
}
plugin "azurerm" {
enabled = true
version = "0.20.0"
source = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
Dockerfile
FROM ghcr.io/terraform-linters/tflint-bundle AS base
LABEL name=tflint
RUN adduser -h /home/jenkins -s /bin/sh -u 1000 -D jenkins
RUN apk fix && apk --no-cache --update add git terraform openssh
ADD .terraformrc /home/jenkins/.terraformrc
RUN mkdir -p /home/jenkins/.terraform.d/plugin-cache/registry.terraform.io
ADD .tflint.hcl /home/jenkins/.tflint.hcl
WORKDIR /home/jenkins
RUN tflint --init
FROM base AS build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh && \
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_ed25519 && \
chmod 400 /root/.ssh/id_ed25519 && \
touch /root/.ssh/known_hosts && \
ssh-keyscan mygitrepo >> /root/.ssh/known_hosts
RUN git clone git@mygitrepo:wrai/tools/g.git
RUN git clone git@mygitrepo:myproject/a.git && \
git clone git@mygitrepo:myproject/b.git && \
git clone git@mygitrepo:myproject/c.git && \
git clone git@mygitrepo:myproject/d.git && \
git clone git@mygitrepo:myproject/e.git && \
git clone git@mygitrepo:myproject/f.git
RUN ls -1d */ | xargs -I {} find {} -name '*.tf' | xargs -n 1 dirname | sort -u | \
xargs -I {} -n 1 -P 20 terraform -chdir={} providers mirror /home/jenkins/.terraform.d
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
FROM base AS a
COPY --from=build /home/jenkins/a/ /home/jenkins/a
RUN cd /home/jenkins/a && terraform init
FROM base AS b
COPY --from=build /home/jenkins/b/ /home/jenkins/b
RUN cd /home/jenkins/b && terraform init
FROM base AS c
COPY --from=build /home/jenkins/c/ /home/jenkins/c
RUN cd /home/jenkins/c && terraform init
FROM base AS azure_infrastructure
COPY --from=build /home/jenkins/d/ /home/jenkins/d
RUN cd /home/jenkins/d && terraform init
FROM base AS aws_infrastructure
COPY --from=build /home/jenkins/e/ /home/jenkins/e
RUN cd /home/jenkins/e && terraform init
Staging plugins:
This is most easily accomplished with the plugin cache dir setting in the CLI configuration. This supersedes the older approach of passing the -plugin-dir=PATH argument to the init command. You could also set a filesystem mirror in each terraform block within the root module config, but this would be cumbersome for your use case. In your situation, you are already configuring this in your .terraformrc, but the filesystem_mirror path conflicts with the plugin_cache_dir. You would want to resolve that conflict, or perhaps remove the mirror block entirely.
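For example, a CLI configuration that keeps only the cache, which is one way to resolve that conflict (path kept from your .terraformrc; adjust to taste):
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true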
Use staged plugins:
Since the setting is captured in the CLI configuration file within the Dockerfile, this would be automatically used in future commands.
Download additional plugins if necessary:
This is default behavior of the init command, and therefore requires no further actions on your part.
Side note:
The jenkins user typically has /sbin/nologin as its shell and /var/lib/jenkins as its home directory. If the purpose of this Docker image is to serve as a Jenkins build agent, then you may want the jenkins user configuration to be more closely aligned with that standard.
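For instance, using the same BusyBox adduser flags already present in your Dockerfile, the more conventional form would be roughly:
RUN adduser -h /var/lib/jenkins -s /sbin/nologin -u 1000 -D jenkins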
TL;DR:
Configure the terraform plugin cache directory
Create directory with a single TF file containing required_providers block
Run terraform init from there
...
I've stumbled over this question as I tried to figure out the same thing.
I first tried leveraging an implied filesystem_mirror by running terraform providers mirror /usr/local/share/terraform/plugins in a directory containing only one terraform file containing the required_providers block. This works fine as long as you only use the versions of the providers you mirrored.
However, it's not possible to use a different version of a provider than the one you mirrored, because:
Terraform will scan all of the filesystem mirror directories to see which providers are placed there and automatically exclude all of those providers from the implied direct block.
I've found it to be a better solution to use a plugin cache directory instead. EDIT: You can prefetch the plugins by setting TF_PLUGIN_CACHE_DIR to some directory and then running terraform init in a directory that only declares the required_providers.
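A rough sketch of that prefetch step, where the provider and version are purely illustrative and TF_PLUGIN_CACHE_DIR is assumed to point at the cache directory from your .terraformrc:
# providers.tf -- declares only the providers to cache
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
Then, from the directory containing only that file:
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
mkdir -p "$TF_PLUGIN_CACHE_DIR"
terraform init -backend=false   # populates the cache without touching any backend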
Previously overengineered stuff below:
The only hurdle left was that terraform providers mirror downloads the providers in the packed layout:
Packed layout: HOSTNAME/NAMESPACE/TYPE/terraform-provider-TYPE_VERSION_TARGET.zip is the distribution zip file obtained from the provider's origin registry.
while Terraform expects the plugin cache directory to use the unpacked layout:
Unpacked layout: HOSTNAME/NAMESPACE/TYPE/VERSION/TARGET is a directory containing the result of extracting the provider's distribution zip file.
So I converted the packed layout to the unpacked layout with the help of find and parallel:
find path/to/plugin-dir -name index.json -exec rm {} +
find path/to/plugin-dir -name '*.json' | parallel --will-cite 'mkdir -p {//}/{/.}/linux_amd64; unzip {//}/*.zip -d {//}/{/.}/linux_amd64; rm {}; rm {//}/*.zip'
I've cross-compiled OpenCV 3.4, and it is running well on the board. The project is managed through Yocto, so I wrote this opencv-gl.bb file to copy the prebuilt OpenCV files to the target filesystem. But after I flashed the image to the development board, I got nothing. It seems the copy commands have never been executed. Where am I wrong?
SUMMARY = "Install opencv 3.4.14 libraries"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""
SRC_URI = "\
file://etc \
file://usr \
"
S = "${WORKDIR}"
## prebuilt libraries don't need the following steps
do_configure[noexec] = "1"
do_compile[noexec] = "1"
do_package_qa[noexec] = "1"
do_install[nostamp] += "1"
do_install() {
install -d ${D}/usr/local/bin
cp -rf ${S}/usr/bin/* ${D}/usr/local/bin/
install -d ${D}/usr/local/lib
cp -rf ${S}/usr/lib/* ${D}/usr/local/lib/
install -d ${D}/usr/local/include
cp -rf ${S}/usr/include/* ${D}/usr/local/include/
install -d ${D}/usr/local/share
cp -rf ${S}/usr/share/* ${D}/usr/local/share/
}
# let the build system extend the FILESPATH file search path
FILESEXTRAPATHS_prepend := "${THISDIR}/prebuilts:"
FILES_${PN} += " \
/usr/local/bin/* \
/usr/local/lib/* \
/usr/local/include/* \
/usr/local/share/* \
"
# INSANE_SKIP_${PN} += "installed-vs-shipped"
The file structure is as follows:
wb#ubuntu:~/Yocto/meta-semidrive/recipes-test/opencv-gl$ tree -L 3
.
├── opencv-gl.bb
└── prebuilts
├── etc
│ └── ld.so.conf
├── LICENSE
└── usr
├── bin
├── include
├── lib
└── share
7 directories, 3 files
I am trying to use the sed command to replace variables during docker build. The variable I am attempting to replace (to start with) is $DATABASE_HOST, whose value comes from my .env file. I have read online that environment variables coming from the .env file are only available at run time. Because of this, my sed command is not taking effect.
Dockerfile:
# Dockerfile for Sphinx SE
# https://hub.docker.com/_/alpine/
FROM alpine:3.12
# https://sphinxsearch.com/blog/
ENV SPHINX_VERSION 3.4.1-efbcc65
# Install dependencies
RUN apk add --no-cache mariadb-connector-c-dev \
postgresql-dev \
wget \
sed
# set up and expose directories
RUN mkdir -pv /opt/sphinx/log /opt/sphinx/index
VOLUME /opt/sphinx/index
# http://sphinxsearch.com/downloads/sphinx-3.3.1-b72d67b-linux-amd64-musl.tar.gz
RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}-linux-amd64-musl.tar.gz -O /tmp/sphinxsearch.tar.gz \
&& cd /opt/sphinx && tar -xf /tmp/sphinxsearch.tar.gz \
&& rm /tmp/sphinxsearch.tar.gz
# point to sphinx binaries
ENV PATH "${PATH}:/opt/sphinx/sphinx-3.4.1/bin"
RUN indexer -v
# redirect logs to stdout
RUN ln -sv /dev/stdout /opt/sphinx/log/query.log \
&& ln -sv /dev/stdout /opt/sphinx/log/searchd.log
# expose TCP port
EXPOSE 36307
EXPOSE 9306
# Copy base sphinx.conf file to container
VOLUME /opt/sphinx/conf
COPY ./sphinx.conf /opt/sphinx/conf/sphinx.conf
# Copy all docker sphinx.conf files
COPY ./configs/web-finder/docker/ /opt/sphinx/conf/
# look for and replace
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
# Concat the sphinx.conf files for all apps
# RUN cat /tmp/myconfig.append >> /etc/portage/make.conf && rm -f /tmp/myconfig.append
CMD indexer --all --config /opt/sphinx/conf/sphinx.conf \
&& searchd --nodetach --config /opt/sphinx/conf/sphinx.conf
.env file:
DATABASE_HOST=someport
DATABASE_USERNAME=someusername
DATABASE_PASSWORD=somepassword
DATABASE_SCHEMA=someschema
DATABASE_PORT=3306
SPHINX_PORT=36307
sphinx.conf:
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
With Sphinx, the sphinx.conf file can be 'executable'. I.e. it can actually be a shell script (or PHP, Perl, etc.)!
Assuming your .env file produces real (runtime!) environment variables within the container (I'm not overly familiar with Docker), your sphinx.conf file could be ...
#!/bin/sh
set -eu
cat <<EOF
searchd
{
listen = 127.0.0.1:$SPHINX_PORT
log = /opt/sphinx/searchd.log
query_log = /opt/sphinx/query.log
read_timeout = 5
max_children = 30
pid_file = /opt/sphinx/searchd.pid
seamless_rotate = 1
preopen_indexes = 1
unlink_old = 1
binlog_path = /opt/sphinx/
}
EOF
And because it is a shell script, the variables will automatically be expanded :)
It needs to be executable too!
RUN chmod a+x /opt/sphinx/conf/sphinx.conf
Then you don't need the sed command in the Dockerfile at all!
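If you would rather keep a plain config file and still substitute at build time, another option (a sketch, not tested against your setup) is to pass the value in as a build argument, since docker build does not read your .env file by itself:
# In the Dockerfile, before the sed line
ARG DATABASE_HOST
RUN sed -i "s+DATABASE_HOST+${DATABASE_HOST}+g" /opt/sphinx/conf/sphinx.conf
and build with:
docker build --build-arg DATABASE_HOST=someport -t sphinx-se .
With docker-compose you can forward the value from .env through a build args entry instead.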
I'm trying to deploy a simple single-endpoint Quarkus app to Google Cloud -> App Engine flex environment. I managed to deploy an uber-jar to the standard environment, according to the documentation.
But I'm struggling to deploy to the flex environment. My understanding, from the same documentation link as above, is that GCP will create a Docker container based on the Dockerfile; in the end, my intention is to deploy a native image to GCP App Engine.
I followed the steps in the link above:
Copy the JVM Dockerfile to the root directory of your project: cp src/main/docker/Dockerfile.jvm Dockerfile
Build your application using mvn clean package
src/main/appengine/app.yaml has the following:
runtime: custom
env: flex
gcloud app deploy
The gcloud log returns the same error for both the native and the normal Dockerfile, respectively:
starting build "502dd964-0abf-470e-a4c1-44c89a67a96e"
FETCHSOURCE
Fetching storage object: gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767
Copying gs://staging.os-xxxx-quarkus.appspot.com/eu.gcr.io/os-xxxx-quarkus/appengine/default.20210322t183731:latest#1616431057582767...
/ [0 files][ 0.0 B/ 230.0 B]
/ [1 files][ 230.0 B/ 230.0 B]
Operation completed over 1 objects/230.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Users: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
For reference, I will include the Dockerfile; it is the generated default without any changes.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
ENTRYPOINT [ "/deployments/run-java.sh" ]
Thank you for your time
It seems I polluted the space with an embarrassing mistake: for the Google Cloud App Engine flex environment, the app.yaml file should be placed in the root folder of the project; this is also documented.
Moreover, for some reason, max_num_instances should be explicitly set to a value of at most 8, according to the account quota in the Google documentation, otherwise a RESOURCE_EXHAUSTED exception is thrown.
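For reference, a minimal app.yaml along those lines might look like this; the scaling block is an assumption based on the flex environment docs, so adjust it to your own quota:
runtime: custom
env: flex
automatic_scaling:
  max_num_instances: 8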
I need the contents of a large *.zip file (5 GB) in my Docker container in order to compile a program. The *.zip file resides on my local machine. The strategy for this would be:
COPY program.zip /tmp/
RUN cd /tmp \
&& unzip program.zip \
&& make
After having done this I would like to remove the unzipped directory and the original *.zip file because they are not needed any more. The problem is that the COPY (and also the ADD) directive will add a layer to the image that contains the file program.zip, which is problematic as my image will then be at least 5 GB big. Is there a way to add a file to a container without using the COPY or ADD directive? wget will not work as the mentioned *.zip file is on my local machine, and curl file://localhost/home/user/program.zip -o /tmp/program.zip will not work either.
It is not straightforward but it can be done via wget or curl with a little support from python. (All three tools should usually be available on a *nix system.)
wget will not work when no url is given and
curl file://localhost/home/user/program.zip -o /tmp/
will not work from within a Dockerfile's RUN instruction. Hence, we will need a server which wget and curl can access and download program.zip from.
To do this we set up a little Python server which serves our HTTP requests. We will be using Python 3's http.server module for this (on Python 2 the equivalent module is SimpleHTTPServer).
python -m http.server --bind 192.168.178.20 8000
The Python server will serve all files in the directory it is started in, so make sure you start it either in the directory that contains the file you want to download during your image build, or in a temporary directory that contains your program. For illustration purposes, let's create a file foo.txt which we will later download via wget in our Dockerfile:
echo "foo bar" > foo.txt
When starting the HTTP server, it is important that we specify the IP address of our local machine on the LAN. Furthermore, we will open port 8000. Having done this, we should see the following output:
python3 -m http.server --bind 192.168.178.20 8000
Serving HTTP on 192.168.178.20 port 8000 ...
Now we build a Dockerfile to illustrate how this works. (We will assume that the file foo.txt should be downloaded into /tmp):
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://192.168.178.20:8000/foo.txt
Now we start the build with
docker build -t test .
During the build you will see the following output on our python server:
172.17.0.21 - - [01/Nov/2014 23:32:37] "GET /foo.txt HTTP/1.1" 200 -
and the build output of our image will be:
Step 2 : RUN cd /tmp && wget http://192.168.178.20:8000/foo.txt
---> Running in 49c10e0057d5
--2014-11-01 22:56:15-- http://192.168.178.20:8000/foo.txt
Connecting to 192.168.178.20:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 25872 (25K) [text/plain]
Saving to: `foo.txt'
0K .......... .......... ..... 100% 129M=0s
2014-11-01 22:56:15 (129 MB/s) - `foo.txt' saved [25872/25872]
---> 5228517c8641
Removing intermediate container 49c10e0057d5
Successfully built 5228517c8641
You can then check if it really worked by starting and entering a container from the image you just built:
docker run -i -t --rm test bash
You can then look in /tmp for foo.txt.
We can now add any file to our image without it persisting in a layer. Assuming you want to add a program of about 5 GB, as mentioned in the question, we could do:
FROM debian:latest
RUN apt-get update -qq \
&& apt-get install -y wget
RUN cd /tmp \
&& wget http://192.168.178.20:8000/program.zip \
&& unzip program.zip \
&& cd program \
&& make \
&& make install \
&& cd /tmp \
&& rm -f program.zip \
&& rm -rf program
In this way we will not be left with 10 GB of cruft.
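Side note: on Linux you can usually avoid hard-coding your LAN address by binding the Python server to the docker0 bridge IP instead (commonly 172.17.0.1, but check on your machine and your firewall rules first), since build containers on the default bridge network can normally reach the host there:
ip addr show docker0    # note the inet address
python3 -m http.server --bind 172.17.0.1 8000
# and in the Dockerfile: RUN wget http://172.17.0.1:8000/program.zip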
There's no way to do this. A feature request is here https://github.com/docker/docker/issues/3156.
Can you not map a local folder into the container when it is launched and then copy the files you need?
sudo docker run -d -P --name myContainerName -v /localpath/zip_extract:/container/path/ yourContainerID
https://docs.docker.com/userguide/dockervolumes/
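For example, assuming the container from the command above is still running, you could then build from the mounted archive without it ever entering an image layer (paths follow the example mount; adjust to your project):
sudo docker exec myContainerName sh -c 'cd /tmp && unzip /container/path/program.zip && cd program && make'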
I have posted a similar answer here: https://stackoverflow.com/a/37542913/909579
You can use docker-squash to squash newly created layers. That will essentially remove the archive from the final image if you remove it in a subsequent RUN instruction.
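The invocation is roughly as follows; the exact flags are an assumption on my part, so check the docker-squash README for your version:
docker-squash -t myimage:squashed myimage:latest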