Executing scripts under /docker-entrypoint-initdb.d with root permission

I want to execute a script along with initialising the DB schema when I spawn the mariadb image.
I've placed these files under /docker-entrypoint-initdb.d.
The schema initialisation is working as expected.
The shell script contains apt-get install instructions, which fail with the error below:
Unable to lock the administration directory (/var/lib/dpkg/), are you root?
Running whoami within the script prints mysql, which shows the script is being run as the user 'mysql' and not as 'root'.
Is there any way I can run this script as root?

You can do it in a Dockerfile instead; RUN instructions execute as root at build time:
FROM mariadb
RUN apt-get update && apt-get install -y vim
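To keep the schema initialisation from the question working alongside the build-time install, a fuller sketch might look like this (schema.sql stands in for whatever init files you have):
FROM mariadb
# RUN instructions execute as root during the build, so apt-get works here
RUN apt-get update && apt-get install -y vim \
    && rm -rf /var/lib/apt/lists/*
# SQL files still run through the normal init mechanism at startup
COPY schema.sql /docker-entrypoint-initdb.d/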

Docker build breaks even though nothing in Dockerfile or docker-compose.yml has changed! :( [duplicate]

This question already has answers here:
In Docker, apt-get install fails with "Failed to fetch http://archive.ubuntu.com/ ... 404 Not Found" errors. Why? How can we get past it?
(7 answers)
Closed 1 year ago.
I was really happy when I first discovered Docker. But I keep running into this issue: a Docker build succeeds initially, but when I try to re-build the container a few months later it fails, even though I made no changes to the Dockerfile or docker-compose.yml.
I suspect that external dependencies (in this case, some mariadb package necessary for cron) become inaccessible, thereby breaking the Docker build. Has anyone else encountered this problem? Is it common with Docker? How do I get around it?
Here's the error message.
E: Failed to fetch http://deb.debian.org/debian/pool/main/m/mariadb-10.3/mariadb-common_10.3.29-0+deb10u1_all.deb 404 Not Found [IP: 151.101.54.132 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Fetched 17.6 MB in 0s (48.6 MB/s)
ERROR: Service 'python_etls' failed to build: The command '/bin/sh -c apt-get install cron -y' returned a non-zero code: 100
This is my docker-compose.yml.
## this docker-compose file is for development/local environment
version: '3'
services:
  python_etls:
    container_name: python_etls
    restart: always
    build:
      context: ./python_etls
      dockerfile: Dockerfile
This is my Dockerfile.
#FROM python:3.8-slim-buster # debian gnutls28 fetch doesn't work in EC2 machine
FROM python:3.9-slim-buster
RUN apt-get update
## install
RUN apt-get install nano -y
## define working directory
ENV CONTAINER_HOME=/home/projects/python_etls
## create working directory
RUN mkdir -p $CONTAINER_HOME
## create directory to save logs
RUN mkdir -p $CONTAINER_HOME/logs
## set working directory
WORKDIR $CONTAINER_HOME
## copy source code into the container
COPY . $CONTAINER_HOME
## install python modules through pip
RUN pip install snowflake snowflake-sqlalchemy sqlalchemy pymongo dnspython numpy pandas python-dotenv xmltodict appstoreconnect boto3
# pip here is /usr/local/bin/pip, as seen when running 'which pip' in command line, and installs python packages for /usr/local/bin/python
# https://stackoverflow.com/questions/45513879/trouble-running-python-script-cron-import-error-no-module-named-tweepy
## changing timezone
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
## scheduling with cron
## reference: https://stackoverflow.com/questions/37458287/how-to-run-a-cron-job-inside-a-docker-container
# install cron
RUN apt-get install cron -y
# Copy crontable file to the cron.d directory
COPY crontable /etc/cron.d/crontable
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/crontable
# Apply cron job
RUN crontab /etc/cron.d/crontable
# Create the log file to be able to run tail
#RUN touch /var/log/cron.log
# Run the command on container startup
#CMD cron && tail -f /var/log/cron.log
CMD ["cron", "-f"]
EDIT: Changing the Docker base image from python:3.9-slim-buster to python:3.10-slim-buster in the Dockerfile allowed a successful build. However, I'm still stumped as to why it broke in the first place and what I should do to mitigate this problem in the future.
You may have the misconception that Docker will generate the same image every time you build from the same Dockerfile. It doesn't, and that's not Docker's problem space.
When you build an image, you most likely reach out to external resources: packages, repositories, registries, etc. All of these could be reachable today but missing tomorrow; a security update may be released, and some old packages deleted as a consequence. Package managers may fetch the newest version of a library, so the release of an update will affect the reproducibility of your image.
You can try to minimise this effect by pinning the versions of as many dependencies as you can. There are also tools, like Nix, that let you achieve full reproducibility. If that's important to you, I'd point you in that direction.
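In this case the likely trigger is a cached apt-get update layer pointing at package versions that no longer exist in the Debian archive after a security update. Combining update and install in a single RUN, and pinning the versions you depend on, reduces the odds of this; a minimal sketch (the version strings are illustrative, not real pins to copy):
FROM python:3.9-slim-buster
# update and install in one layer so a stale cached 'apt-get update'
# can never pair with a fresh 'apt-get install'
RUN apt-get update \
    && apt-get install -y --no-install-recommends nano cron \
    && rm -rf /var/lib/apt/lists/*
# pin Python dependencies (version numbers are illustrative)
RUN pip install pandas==1.2.4 numpy==1.20.2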

How to run a command on container startup, and keep the container running after the command is done?

I have a Dockerfile, which is meant to use script1, like so:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
Currently, I can:
Build this Dockerfile into an image
Run said image in a container
Open a CLI in said container, and run script1 manually (with bash ./script1.sh)
The script runs, and the container stays open.
However, I'd like to automatically run this script on container startup.
So I tried to change my Dockerfile to this:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
# Run script1.sh on startup
CMD bash ./script1.sh
However, when I do this, the container only stays open for a little bit, and then exits right away.
I suspect it exits as soon as script1 is done...
I also tried ENTRYPOINT, without much success.
Why does my container stay open if I open a CLI and run the script manually, but doesn't stay open if I try to automatically run it at startup?
And how can I run the script automatically on container startup, and keep the container from exiting right away?
A container runs only as long as its main process (PID 1) does: when you open a CLI, that shell is the main process and keeps the container alive, but with CMD bash ./script1.sh the container exits as soon as the script finishes. An old Docker (v2) trick to prevent premature container closing is to run a never-ending command in it, such as:
CMD tail -f /dev/null
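Applied to the question above, one minimal sketch is to chain the script with that blocking command, so the container stays up after script1.sh completes:
# Run the script, then block forever so PID 1 never exits
CMD bash ./script1.sh && tail -f /dev/null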

Automatic edit of file in dockerized container

I have my dockerized container for Elasticsearch and Kibana running, and it automatically installs some plugins once I start the Docker container.
I need to edit the config/elasticsearch.yml file to enable one of those plugins, and I am trying to find a way to do it similar to the way I have installed the plugins through a Dockerfile, as shown below:
ARG ELASTIC_VERSION="$ELASTIC_VERSION"
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
RUN bin/elasticsearch-plugin install https://github.com/spinscale/elasticsearch-ingest-opennlp/releases/download/7.6.0.1/ingest-opennlp-7.6.0.1.zip
RUN bin/elasticsearch-plugin install mapper-annotated-text
RUN bin/elasticsearch-plugin install analysis-phonetic
RUN bin/elasticsearch-plugin install ingest-attachment --batch
RUN bin/ingest-opennlp/download-models
The correct way would be to create a new Docker image:
Create a new Dockerfile with elasticsearch as the base image, overwrite the elasticsearch.yml file in this image, and build it:
FROM elasticsearch
COPY elasticsearch.yml config/elasticsearch.yml
Optionally, push this image to Docker Hub, and use it for deployments.
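For instance, the build and optional push might look like this (the image name is a placeholder):
docker build -t myrepo/custom-elasticsearch .
docker push myrepo/custom-elasticsearch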
Resolved, so thanks for all the help received, inspired by https://stackoverflow.com/a/49755244/12851178.
Updated Dockerfile below; keeping this here for others' future reference:
ARG ELASTIC_VERSION="$ELASTIC_VERSION"
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}
RUN bin/elasticsearch-plugin install https://github.com/spinscale/elasticsearch-ingest-opennlp/releases/download/7.6.0.1/ingest-opennlp-7.6.0.1.zip
RUN bin/elasticsearch-plugin install mapper-annotated-text
RUN bin/elasticsearch-plugin install analysis-phonetic
RUN bin/elasticsearch-plugin install ingest-attachment --batch
RUN bin/ingest-opennlp/download-models
RUN echo "ingest.opennlp.model.file.persons: en-ner-persons.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml
RUN echo "ingest.opennlp.model.file.dates: en-ner-dates.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml
RUN echo "ingest.opennlp.model.file.locations: en-ner-locations.bin" >> /usr/share/elasticsearch/config/elasticsearch.yml

Why doesn't dockerized CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I tried running several commands inside a raw centos container, everything worked fine, and I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong with the setup as a whole? Why can everything be executed on the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. So what's wrong?
You need to install pip first using
yum install python-pip
or if you need python3 (from epel)
yum install python36-pip
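Put together in a Dockerfile, a minimal sketch (assuming CentOS 7, with python36-pip coming from EPEL as noted above) could be:
FROM centos:7
# epel-release adds the EPEL repository that carries python36-pip
RUN yum install -y epel-release \
    && yum install -y python36-pip
# invoke pip via the interpreter to avoid any doubt about the pip binary name
RUN python3.6 -m pip install django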
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result in CentOS/RHEL.
UPDATE
From a comment:
But when I execute the same commands from docker run -ti centos, everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try the full path to pip?
As @rkosegi already mentioned, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable.
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Each RUN command creates a different layer and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
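Putting the pieces together, a minimal Dockerfile sketch along these lines (assuming the rh-python36 collection from the question) would be:
FROM centos:7
RUN yum install -y centos-release-scl \
    && yum install -y rh-python36
# each RUN starts a fresh shell, so enable the collection and install in one step
RUN source scl_source enable rh-python36 && pip install django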

Dockerfile - gradlew stuck on downloading

I am building a Docker image from a Dockerfile using this command:
docker build -it /root/Documents/myDockerfiles/tomcat/.
The Dockerfile looks as follows:
[root@srv01 ~]# cat /root/Documents/myDockerfiles/tomcat/Dockerfile
FROM tomcat:8.0.32-jre8
MAINTAINER "John Doe <johndoe#dough.com>"
RUN apt-get update && apt-get install -y git
RUN git clone https://myusername:mypassword@mygiturl/mygroup/myproject.git
RUN cd ./myproject/ && ./gradlew war
So it basically clones an existing git repo, cds into the cloned directory, and runs the Gradle wrapper.
The problem is that the Gradle wrapper seems to get no connection to the outside, because:
Step 5 : RUN cd ./myproject && ./gradlew war
---> Running in d01b57b9f932 Downloading https://services.gradle.org/distributions/gradle-2.3-bin.zip
(here it's stuck; gradlew can't communicate with the outside)
I think it must be a firewall and port issue, because when I do the same from my local laptop it's not stuck, but executing it on my VM on some extranet cloud service (whose name I can't mention) stops right there.
So my questions are:
1.) Why is it stuck there?
2.) What can I do to prevent it from being stuck there?
