Hi, I am trying to build a Docker image and my Dockerfile looks like this:
FROM alpine
LABEL description "Nginx + uWSGI + Flask based on Alpine Linux and managed by Supervisord"
# Copy python requirements file
COPY requirements.txt /tmp/requirements.txt
RUN apk add --no-cache \
python3 \
bash \
nginx \
uwsgi \
uwsgi-python3 \
supervisor && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
pip3 install -r /tmp/requirements.txt && \
rm /etc/nginx/conf.d/default.conf && \
rm -r /root/.cache
# Copy the Nginx global conf
COPY nginx.conf /etc/nginx/
# Copy the Flask Nginx site conf
COPY flask-site-nginx.conf /etc/nginx/conf.d/
# Copy the base uWSGI ini file to enable default dynamic uwsgi process number
COPY uwsgi.ini /etc/uwsgi/
# Custom Supervisord config
COPY supervisord.conf /etc/supervisord.conf
# Add demo app
COPY ./app /app
WORKDIR /app
CMD ["/usr/bin/supervisord"]
The errors look like this:
Sending build context to Docker daemon 250.9kB
Step 1/11 : FROM alpine
---> 196d12cf6ab1
Step 2/11 : LABEL description "Nginx + uWSGI + Flask based on Alpine Linux and managed by Supervisord"
---> Using cache
---> d8d38c761b8d
Step 3/11 : COPY requirements.txt /tmp/requirements.txt
---> Using cache
---> cb29eb34ca46
Step 4/11 : RUN apk add --no-cache python3 bash nginx uwsgi uwsgi-python3 supervisor && python3 -m ensurepip && rm -r /usr/lib/python*/ensurepip && pip3 install --upgrade pip setuptools && pip3 install -r /tmp/requirements.txt && rm /etc/nginx/conf.d/default.conf && rm -r /root/.cache
---> Running in 3d568d2620dd
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz: could not connect to server (check repositories file)
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz: could not connect to server (check repositories file)
ERROR: unsatisfiable constraints:
bash (missing):
required by: world[bash]
nginx (missing):
required by: world[nginx]
python3 (missing):
required by: world[python3]
supervisor (missing):
required by: world[supervisor]
uwsgi (missing):
required by: world[uwsgi]
uwsgi-python3 (missing):
required by: world[uwsgi-python3]
The command '/bin/sh -c apk add --no-cache python3 bash nginx uwsgi uwsgi-python3 supervisor && python3 -m ensurepip && rm -r /usr/lib/python*/ensurepip && pip3 install --upgrade pip setuptools && pip3 install -r /tmp/requirements.txt && rm /etc/nginx/conf.d/default.conf && rm -r /root/.cache' returned a non-zero code: 6
A month ago it was building fine. Because of my limited knowledge of Docker, I couldn't figure out what was causing the error. A quick Google search turned up these two links: link1 link2. But neither of them worked.
Building the image with the flag "--network host" solved the issue. Here is the link.
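For reference, a minimal sketch of that invocation (the image name is illustrative):
docker build --network host -t myimage .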
- In Ubuntu
It was a DNS error for me. By setting /etc/docker/daemon.json with
{
"dns": ["8.8.8.8"]
}
and then restarting Docker with
sudo service docker restart
I was able to build images again.
https://github.com/gliderlabs/docker-alpine/issues/334#issuecomment-450598069
- In Windows
Edit C:/Users/Administrator (or any other username)/.docker/daemon.json
and add
{
...,
"dns": ["8.8.8.8"]
}
The line:
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz: could not connect to server (check repositories file)
basically says that you are either offline or the Alpine Linux repository is down. I cannot find anything about it on the internet, but it has happened several times in the past. Or it can be a network problem somewhere between you and the CDN.
You can always pick a mirror yourself from http://dl-cdn.alpinelinux.org/alpine/MIRRORS.txt and set it up like so:
RUN echo http://repository.fit.cvut.cz/mirrors/alpine/v3.8/main > /etc/apk/repositories; \
echo http://repository.fit.cvut.cz/mirrors/alpine/v3.8/community >> /etc/apk/repositories
(change the v3.8 according to your version)
Also, as @emix pointed out, you should never use the :latest tag for your base image. Use, for example, 3.8, or whichever version ships the package versions you need.
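For example, pinning the base image at the top of the Dockerfile (3.8 matches the build log above; adjust to your needs):
FROM alpine:3.8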
Try restarting the Docker service; it worked for me and others:
sudo systemctl restart docker docker.service
Thanks to: https://github.com/gliderlabs/docker-alpine/issues/334#issuecomment-408826204
This kind of error often happens due to a network problem.
Try using HTTPS mirrors instead of HTTP:
RUN sed -i -e 's/http:/https:/' /etc/apk/repositories
Another fix:
I added 8.8.8.8 to my /etc/resolv.conf and restarted the Docker daemon. It fixed this issue for me.
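For reference, the relevant line in /etc/resolv.conf looks like this (8.8.8.8 is Google's public DNS; any resolver you trust works):
nameserver 8.8.8.8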
If you are able to manually download the file, try restarting your Docker service. It did the trick for me.
Providing a more generic troubleshooting answer for the title: test your Docker commands in another container. This could be another running container that you don't mind breaking, or preferably a base container (in this case Alpine) where you can run the Dockerfile commands in a shell. This is probably not a solution where the network is the issue, as in the original question, but it is good in other cases.
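For example, to get a throwaway Alpine shell to experiment in (the tag is illustrative):
docker run --rm -it alpine:3.12 sh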
The apk error messages aren't always the most useful. Take a look at the example below:
/ # apk add --no-cache influxdb-client
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
influxdb-client (missing):
required by: world[influxdb-client]
/ #
/ # apk add --no-cache influxdb
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/1) Installing influxdb (1.8.0-r1)
Executing influxdb-1.8.0-r1.pre-install
Executing busybox-1.31.1-r19.trigger
OK: 613 MiB in 98 packages
By the way, https://pkgs.alpinelinux.org/packages is a good place to find the names of Alpine packages, which would fix the above example.
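You can also search the package index from inside the container itself, which would have revealed the correct name in the example above:
/ # apk update
/ # apk search influxdb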
I think I've tried everything proposed here, without any success:
Change http to https
Use the --network host trick
Add 8.8.8.8 to resolv.conf
Use a mirror
See my last build
I can download the index on that machine without any problem.
But when using Docker (with or without gitlab-runner), it just fails.
It works beautifully on another machine on the same network with the same architecture (armv7).
If my first instruction is
RUN wget https://mirrors.ircam.fr/pub/alpine/v3.15/main/armv7/APKINDEX.tar.gz
I get:
---> Running in 19a0630d633a
wget: bad address 'mirrors.ircam.fr'
In my case I had changed /etc/docker/daemon.json and added registry-mirrors to bypass filtering in my region and download Docker images from that repository.
daemon.json:
{
"registry-mirrors": [
"https://docker.somerepo.com"
],
"insecure-registries": [],
"debug": true,
"experimental": false
}
So that was the problem: I removed the daemon.json file (or you could comment out all of its lines), and then I was able to download new Docker images again.
I still ran into this problem as of Dec 2022 with Debian Buster; Docker on Buster seems to be incompatible. Option 2 here solved the problem for me.
Related
If someone can help me with this I would be very grateful. I have a Docker image in which I run Kafka, and I want to have 3 brokers. I would like nothing more to happen when the container is created than the execution of the script I have for starting Kafka. I have tried many ways using the CMD and ENTRYPOINT instructions, but without success: the container is created, but the script is not executed, and I have to enter the container to start it by hand.
Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
RUN apt-get install -y wget \
&& wget http://apache.rediris.es/kafka/2.4.0/kafka_2.12-2.4.0.tgz \
&& tar -xzf kafka_2.12-2.4.0.tgz \
&& rm -R kafka_2.12-2.4.0.tgz
#WORKDIR /home
RUN chmod +x /kafka_2.12-2.4.0
### COPY ###
COPY server-1.properties /kafka_2.12-2.4.0/config/
COPY server-2.properties /kafka_2.12-2.4.0/config/
#ADD runzk-kf.sh .
COPY runzk-kf.sh /usr/local/bin/runzk-kf.sh
#COPY runzk-kf.sh .
RUN chmod +x /usr/local/bin/runzk-kf.sh
EXPOSE 2181
EXPOSE 9092
EXPOSE 9093
EXPOSE 9094
CMD ./bin/bash
script
#!/bin/sh
# turn on bash's job control
set -m
### RUN Zookeper
./kafka_2.12-2.4.0/bin/zookeeper-server-start.sh /kafka_2.12-2.4.0/config/zookeeper.properties &
### RUN Kafka brokers ###
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server.properties &
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server-1.properties &
./kafka_2.12-2.4.0/bin/kafka-server-start.sh /kafka_2.12-2.4.0/config/server-2.properties &
Sorry, but please don't do this.
Docker images should run one service, not four. Use Compose or Minikube + Helm charts to orchestrate multiple services.
It's not clear what property files you changed for that to work properly.
JDK 8 is end-of-life; use 11 or 13, which Kafka supports.
Just use existing Docker images. If you want something minimal, I personally use bitnami/kafka. If you want something more fully featured, take a look at Confluent's repo on running 3 brokers via Docker Compose.
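For illustration, a minimal Compose sketch of that idea, assuming the Bitnami images and their documented environment variables (image tags and service names are illustrative choices, and this plaintext setup is for local experiments, not production):
version: "3"
services:
  zookeeper:
    image: bitnami/zookeeper:3.6
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-1:
    image: bitnami/kafka:2.4.0
    depends_on: [zookeeper]
    environment:
      - KAFKA_CFG_BROKER_ID=1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
  kafka-2:
    image: bitnami/kafka:2.4.0
    depends_on: [zookeeper]
    environment:
      - KAFKA_CFG_BROKER_ID=2
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
  kafka-3:
    image: bitnami/kafka:2.4.0
    depends_on: [zookeeper]
    environment:
      - KAFKA_CFG_BROKER_ID=3
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
Each service runs exactly one process, which is the point: the orchestrator, not the image, is responsible for starting the three brokers.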
I want to run an OpenLDAP server in a Docker container using CentOS 7.
I managed to get a container running with OpenLDAP installed in it. My problem is that I am using an entrypoint.sh script to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile, so that the password needed for ldapadd is not stored in the script.
So far I have only found examples for Debian:
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried starting the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work; however, I am looking to start the LDAP service and run the ldapadd command in the Dockerfile, so that mypassword is not stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl slapd start
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the LDAP service, but then I do not know how to call it to build the image of the other container...
EDIT:
Thanks to Guido U. Draheim, I have an alternative to systemctl to start slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I got the following error: ldap_bind: Invalid credentials (49)
(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd", which fixes the obvious first error.
(b) Each RUN in a Dockerfile is a new container, so a process started in an earlier invocation cannot survive into the next one anyway. That's why the referenced Dockerfile example combines the steps with "&&"; see the sketch below.
(c) And yeah, I am using an OpenLDAP CentOS container myself. So go ahead, try again.
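A minimal sketch of point (b), assuming the systemctl.py replacement copied in above; combining the steps with "&&" in a single RUN keeps the started slapd and the ldapadd in the same intermediate container, and makes the build fail loudly if slapd does not start (credentials are the illustrative ones from the question):
RUN systemctl start slapd && \
    ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif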
I containerised a Shiny app and attempted to deploy it on GCP using Kubernetes, but each time I obtain the external IP address and load it in the browser, I get a "this site cannot be reached: connection refused" error. So I attempted to run the container on localhost to troubleshoot, and now I get a "127.0.0.1 didn't send any data. ERR_EMPTY_RESPONSE" error. I have searched tirelessly online for a solution, but nothing seems to work for me, and none of the solutions I found are for a Shiny app Docker container. Many of the fixes mention the port, but I am still stuck. By the way, I have XAMPP installed on my Mac. Is it possible that XAMPP and my Docker container are attempting to share the same port, or is there a problem with my Dockerfile? Pardon me, I am new to containers and have only been following the documentation procedure up till now. Below is my Dockerfile code:
Dockerfile
# Install R version 3.5.1
FROM r-base:3.5.1
# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
sudo \
gdebi-core \
pandoc \
pandoc-citeproc \
libcurl4-gnutls-dev \
libcairo2-dev/unstable \
libxt-dev \
libssl-dev
# Download and install ShinyServer (latest version)
RUN wget --no-verbose https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/VERSION -O "version.txt" && \
    VERSION=$(cat version.txt) && \
    wget --no-verbose "https://s3.amazonaws.com/rstudio-shiny-server-os-build/ubuntu-12.04/x86_64/shiny-server-$VERSION-amd64.deb" -O ss-latest.deb && \
    gdebi -n ss-latest.deb && \
    rm -f version.txt ss-latest.deb
# Install R packages that are required
# TODO: add further package if you need!
RUN R -e "install.packages(c('shiny','shinyjs','tools','foreign','XLConnect'), repos='http://cran.rstudio.com/')"
# Copy configuration files into the Docker image
COPY shiny-server.conf /etc/shiny-server/shiny-server.conf
COPY /app /srv/shiny-server/
# Make the ShinyApp available at port 80
EXPOSE 80
# Copy further configuration files into the Docker image
COPY shiny-server.sh /usr/bin/shiny-server.sh
CMD ["/usr/bin/shiny-server.sh"]
I would appreciate it if someone could assist me.
You don't need shiny-server.
Add
app <- shinyApp(ui = ui, server = server)
runApp(app, host ="0.0.0.0", port = 80, launch.browser = FALSE)
to your R script and
EXPOSE 80
CMD ["R", "-e", "library(shiny); source('/root/pathToYourScript/script.R')"]
to your Dockerfile.
I had to first create an Rprofile.site file and place it in the same directory as the Dockerfile and the Shiny app. Then I created my own base image with all the necessary libraries for the app and called it from my Dockerfile. Here is the final code:
Rprofile.site
local({
options(shiny.port = 3838, shiny.host = "0.0.0.0")
})
Dockerfile
FROM bimage_rpackages
# Copy the app to the image
RUN mkdir /root/shinyapp
COPY app/shinyapp /root/shinyapp
COPY app/Rprofile.site /usr/lib/R/etc/
# Make the ShinyApp available at port 3838
EXPOSE 3838
CMD ["R", "-e", "shiny::runApp('/root/shinyapp')"]
I use scrapy-splash with Docker.
In my Dockerfile I have this line to export the results to a .jl file:
CMD ["scrapy", "crawl", "quotesjs", "-o", "quote.jl"]
When I run docker-compose build and docker-compose up, the log informs me that:
scrapy1 | 2017-12-18 00:00:00 [scrapy.extensions.feedexport] INFO: Stored jl feed (10 items) in: quote.jl
I don't see any quote.jl in my local folder (where the Dockerfile and the Scrapy project are), so I guess it must be inside the container.
I tried to copy the contents of the container with this command, but without success:
docker cp containerID:. ./copy_of_container
How can I retrieve the quote.jl file?
I am on Windows 10 and I use Docker for Windows.
My Dockerfile:
FROM python:alpine
RUN apk --update add libxml2-dev libxslt-dev libffi-dev gcc musl-dev libgcc openssl-dev curl bash
RUN pip install scrapy scrapy-splash scrapy-fake-useragent
ADD . /scraper
WORKDIR /scraper
CMD ["scrapy", "crawl", "apkmirror", "-o", "apkmirror.jl"]
I'm trying to bundle my Jekyll blog as a Docker container.
I found this Dockerfile, which seems to suit my use case, but I wanted to be more hands-on, so I copied it directly into my repo:
FROM ruby:latest
MAINTAINER Peter Etelej <peter@etelej.com>
RUN apt-get -qq update && \
apt-get -qq install nodejs -y && \
gem install -q bundler
RUN mkdir -p /etc/jekyll && \
printf 'source "https://rubygems.org"\ngem "github-pages"\ngem "execjs"\ngem "rouge"' > /etc/jekyll/Gemfile && \
printf "\nBuilding required Ruby gems. Please wait..." && \
bundle install --gemfile /etc/jekyll/Gemfile --clean --quiet
RUN apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV BUNDLE_GEMFILE /etc/jekyll/Gemfile
EXPOSE 4000
ENTRYPOINT ["bundle", "exec"]
CMD ["jekyll", "serve","--host=0.0.0.0"]
When I run it I get an error:
jekyll 3.4.3 | Error: No such file or directory @ rb_sysopen - /etc/modules-load.d/modules.conf
The host system has this file, but my assumption was that the container didn't have access to it, so I tried to add it in the Dockerfile:
ADD /etc/modules-load.d/modules.conf /etc/modules-load.d/modules.conf
I then run docker build and get the error:
lstat etc/modules-load.d/: no such file or directory
I don't understand why the container is looking for this file in the first place, but I'm even more confused by the fact that I can't add a file which is clearly there.
Docker builds run on the docker host, not necessarily the client where you run the command, and so all the files needed to run the build are sent in the build context to the host. That context is most often the current directory, or ., that you pass at the end of the docker build -t $image_name . command.
Everything that you try to include in the image with a COPY or ADD is done in reference to that build context, not the filesystem on your client or host machine. So if you need a modules.conf, you'll need to first copy that into your directory with the Dockerfile, and then COPY the file from there.
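A minimal sketch of that workaround, using the paths from the question (run the cp on the machine you build from, inside the build context directory):
cp /etc/modules-load.d/modules.conf ./modules.conf
and then in the Dockerfile:
COPY modules.conf /etc/modules-load.d/modules.conf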
As for why Jekyll is looking for the file: I'm not familiar with Jekyll, but it doesn't look promising for something running inside of a container. Kernel modules are kernel-specific, and containers are designed to be moved to different hosts with potentially different kernels.