I have the following dockerfile:
FROM haproxy:alpine
RUN apk --update add bash && apk --no-cache add dos2unix rsyslog supervisor wget curl ruby which py-setuptools py-pip && pip install awscli && chmod +x /*.sh
COPY *haproxy.cfg /etc/
COPY supervisord.ini /etc/
COPY rsyslog.conf /etc/
COPY entrypoint.sh /
RUN dos2unix /entrypoint.sh && apt-get --purge remove -y dos2unix
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 9999
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.ini"]
However, when I build this I get:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
dos2unix (missing):
required by: world[dos2unix]
I can see the package exists here though: https://pkgs.alpinelinux.org/packages?name=dos2unix&branch=&repo=&arch=&maintainer=
What am I doing wrong?
From your own link, dos2unix is (at this time, February 2017) only in testing, not in main or community. From the relevant documentation --
If you only have the main repository enabled in your configuration, apk will not include packages from the other repositories. To install a package from the edge/testing repository without changing your repository configuration file, use the command below. This will tell apk to use that particular repository.
apk add cherokee --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted
In this case, you would want to substitute dos2unix for cherokee.
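Applied to your Dockerfile, that means pulling dos2unix from the testing repository explicitly. A minimal sketch of just that line (the rest of your apk/pip commands stay as they are):
RUN apk add dos2unix --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted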
Related
I need to get the ssh folder with the keys into the Docker image.
Dockerfile:
FROM python:3.6-alpine3.12
RUN mkdir /code && mkdir /data
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && apk add git
RUN mkdir /root/.ssh && -v ~/.ssh:/root/.ssh
RUN apk add -y wget
Error when building:
/bin/sh: illegal option -
The command '/bin/sh -c -v ~/.ssh:/root/.ssh' returned a non-zero code: 2
The shell does not recognize the command -v ~/.ssh:/root/.ssh
Try this:
FROM python:3.6-alpine3.12
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && \
    apk add --no-cache git wget && \
    mkdir /data
# COPY can only read paths inside the build context, so the .ssh
# directory has to be placed in the context first (apk also has no -y flag)
COPY .ssh /root/.ssh
PS: I also added a few Dockerfile optimizations for you.
EDIT:
Copying sensitive data into your container is not a good idea unless you really know what you are doing.
If your application needs to connect to a remote server you own, it is better to generate new keys specifically for it and distribute only the public key to that server.
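As a rough sketch (the key file name and host below are placeholders), generating a dedicated key pair for the application and installing only the public key on the server could look like:
# Generate a dedicated, passphrase-less key pair for the application
ssh-keygen -t ed25519 -f ./deploy_key -N "" -C "app deploy key"
# Install only the public key on the remote server you control
ssh-copy-id -i ./deploy_key.pub user@your-server.example.com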
I have a Golang program inside a docker container (I use Ubuntu 18). Also I use github.com/glenn-brown/golang-pkg-pcre/src/pkg/pcre for regex in my Golang app. Before using this library I should install libpcre++-dev this way:
sudo apt-get install libpcre++-dev
But I use golang:alpine in my Dockerfile, and there is no libpcre++-dev library in the Alpine packages.
What package should I install instead of libpcre++-dev?
P.S. I have tried to install libc6-compat, pcre, pcre-dev, and libpcrecpp, but I see this error:
github.com/glenn-brown/golang-pkg-pcre/src/pkg/pcre
/go/pkg/mod/github.com/glenn-brown/golang-pkg-pcre@v0.0.0-20120522223659-48bb82a8b8ce/src/pkg/pcre/pcre.go:52:10:
fatal error: pcre.h: No such file or directory
 #include
 ^~~~~~~~
compilation terminated
My Dockerfile:
FROM golang:alpine
RUN apk update
RUN apk upgrade
RUN apk add --update --no-cache build-base gcc g++ pcre pcre-dev libc6-compat
# Install git + SSL ca certificates.
# Git is required for fetching the dependencies.
# Ca-certificates is required to call HTTPS endpoints.
RUN apk update && apk add --no-cache curl git ca-certificates tzdata \
&& update-ca-certificates 2> /dev/null || true
I build my app this way:
- CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -ldflags="-w -s" -o bin/backend ./cmd/backend/main.go
EDIT
I have changed my Dockerfile (added the line below):
RUN apk add --update --no-cache build-base gcc g++ pcre pcre-dev libc6-compat
And now I have a new error:
Error loading shared library libpcre.so.1: No such file or directory
(needed by /bin/backend)
You can try one of these, as both pull in the packages needed to compile C code:
RUN apk add --virtual build-dependencies build-base gcc
RUN apk add --no-cache build-base gcc
build-base is Alpine's equivalent of Debian's build-essential metapackage (a package that installs many other packages, like g++ and gcc: the GNU C & C++ compilers).
Or you can install the alpine sdk.
You can start with alpine-sdk, which is a "metapackage that pulls in
the most essential packages used to build new packages."
http://wiki.alpinelinux.org/wiki/Developer_Documentation has more
info.
RUN apk add --update alpine-sdk
(See also: docker-alpine issue 24.)
Or you can use golang:latest which will work fine.
FROM golang:latest
RUN apt-get update
RUN apt-get install libpcre++-dev -y
You can use one of the Debian-based golang images instead. By the time you're installing GNU libc and a full C toolchain on top of this anyway, there isn't really going to be much space saved over the Alpine base image. You can (and should) use a multi-stage build where the final image contains just your compiled binary, and that final stage can use an Alpine base.
The result would look something like:
# Build-time image; just has the parts needed to run `go build`
FROM golang:1.12-buster AS build
# Install additional build-time tools
RUN apt-get update \
&& apt-get install --assume-yes \
build-essential ca-certificates git-core tzdata \
libpcre++-dev
# Build your application
WORKDIR /app
COPY . .
ENV GO111MODULE=on
RUN go build -o myapp ./cmd/myapp
# Runtime image; has only what we need to run the application
FROM alpine:3.10
# Note that you'll need the shared library for libpcre++
RUN apk add ca-certificates tzdata libpcrecpp
COPY --from=build /app/myapp /usr/bin/myapp
CMD ["myapp"]
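The multi-stage setup doesn't change how you invoke the build: a plain docker build produces only the final stage as the tagged image. For example (the myapp tag is just a placeholder):
docker build -t myapp .
docker run --rm myapp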
I'm going crazy trying to ADD a directory from my host machine to my Docker container. When building the container with docker-compose up --build, it seems to ADD just fine, but when I try to access the module in my app.py file, I get a ModuleNotFoundError.
My DockerFile contains the following:
FROM python:3.7-alpine
RUN apk update && \
apk add --virtual build-deps gcc musl-dev && \
apk add --no-cache postgresql-dev && \
apk add alsa-lib-dev && \
apk add pulseaudio-dev && \
apk add postgresql-dev && \
apk add ffmpeg-dev && \
apk add ffmpeg && \
rm -rf /var/cache/apk/*
COPY /scraper/requirements.txt requirements.txt
RUN pip install -r requirements.txt
ADD /common/testmodel /scraper/testmodel
WORKDIR home/scraper/
ENTRYPOINT ["python3", "-u", "app.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "app:app"
Then when building the image, the log shows:
Step 6/9 : ADD /common/testmodel home/scraper/testmodel
---> a7b27854d751
My project structure looks like the following:
- common
  - testmodel
    - test.py
- scraper
  - DockerFile
  - requirements
- docker-compose.yml
But in my app.py file, when I run from testmodel.test import TestClass I get ModuleNotFoundError: No module named 'testmodel'
Any help with this problem is greatly appreciated, as it has now taken up a much larger chunk of my day than I ever thought it would. Thank you very much.
I may be missing some context, but I think you have several issues:
You COPY /scraper... and ADD /common... -- are these directories hanging from root on your local machine?
You set WORKDIR after COPY and ADD. Generally (although not required), you'd set it first as a default destination; then you could COPY something . and ADD something ., and those destinations (.) would refer to your WORKDIR.
You use /home/scraper as your WORKDIR but you don't copy or add your files into it, so it will be empty at this point.
Your ENTRYPOINT references app.py, but your file is called test.py.
One useful debugging tool is to shell into containers to e.g. examine the directory structure to confirm it's as expected. Assuming your image is called scraper, you could:
docker build \
--tag=scraper \
--file=scraper/Dockerfile \
. # Don't forget the period ;-)
Then Alpine's shell is called ash:
docker run \
--interactive \
--tty \
scraper:latest ash
Or, if your Dockerfile has an ENTRYPOINT, then override it using:
docker run \
--interactive \
--tty \
--entrypoint=ash \
scraper:latest
and then you could browse the container's directory structure:
You'll default to /home/scraper (WORKDIR):
/home/scraper # ls -l
total 0
You may examine /scraper using:
/home/scraper # apk add tree
/home/scraper # tree /scraper
/scraper
└── testmodel
└── test.py
1 directory, 1 file
I'm not entirely clear what the correct solution is for you, but I hope this helps you make progress:
FROM python:3.7-alpine
RUN apk update && \
apk add --virtual build-deps gcc musl-dev && \
apk add --no-cache postgresql-dev && \
apk add alsa-lib-dev && \
apk add pulseaudio-dev && \
apk add postgresql-dev && \
apk add ffmpeg-dev && \
apk add ffmpeg && \
rm -rf /var/cache/apk/*
WORKDIR /home/scraper/
COPY scraper/requirements.txt .
RUN pip install -r requirements.txt
ADD common/testmodel .
ENTRYPOINT ["python3", "-u", "test.py"]
CMD gunicorn -b 0.0.0.0:5000 --access-logfile - "test:app"
On the Alpine Linux package site https://pkgs.alpinelinux.org/packages,
NSCA packages have not yet been added. Is there an alternative way to set up NSCA in Alpine Linux for passive checks?
If there is no package for it, you can always build it yourself.
FROM alpine AS builder
ARG NSCA_VERSION=2.9.2
RUN apk update && apk add build-base gcc wget git
RUN wget http://prdownloads.sourceforge.net/nagios/nsca-$NSCA_VERSION.tar.gz
RUN tar xzf nsca-$NSCA_VERSION.tar.gz
RUN cd nsca-$NSCA_VERSION && ./configure && make all
RUN ls -lah nsca-$NSCA_VERSION/src
RUN mkdir -p /dist/bin && cp nsca-$NSCA_VERSION/src/nsca /dist/bin
RUN mkdir -p /dist/etc && cp nsca-$NSCA_VERSION/sample-config/nsca.cfg /dist/etc
FROM alpine
COPY --from=builder /dist/bin/nsca /bin/
COPY --from=builder /dist/etc/nsca.cfg /etc/
Since this is using multiple stages, your resulting image will not contain development files and will still be small.
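For example (the tag name is just a placeholder), you could build it and confirm the size of the final image with:
docker build -t nsca-alpine .
docker image ls nsca-alpine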
I am having issues building my Dockerfile with Azure DevOps.
Here is a copy of my docker file:
FROM node:10-alpine
# Create app directory
WORKDIR /usr/src/app
# Copy app
COPY . .
# install packages
RUN apk --no-cache --virtual build-dependencies add \
git \
python \
make \
g++ \
&& sudo npm#latest -g wait-on concurrently truffle \
&& npm install \
&& apk del build-dependencies \
&& truffle compile --all
# Expose the right ports, the commands below are irrelevant when using a docker-compose file.
EXPOSE 3000
CMD ["npm", "run", "server"]
It was working recently, but now I am getting the following error message:
sudo not found.
What is the cause of this sudo not found error?
Don't use sudo; just drop it from the command. That image already runs as root by default, so there's no reason for it.
TJs-MacBook-Pro:~ tj$ docker run node:10-alpine whoami
root
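As a sketch of what the install step could look like without sudo (assuming npm#latest was meant to be a global npm install of those tools; adjust to your actual intent):
RUN apk --no-cache --virtual build-dependencies add \
    git \
    python \
    make \
    g++ \
    && npm install -g wait-on concurrently truffle \
    && npm install \
    && apk del build-dependencies \
    && truffle compile --all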