Connection refused when building Docker container - docker

Here is my Dockerfile. Once I execute it, it throws the error connect ECONNREFUSED 172.17.0.1:27017. Kindly help me out in this regard. Thank you.
FROM node:18-alpine
RUN npm install --global nodemon
RUN apk upgrade --update-cache --available && \
    apk add openssl && \
    apk add git && \
    rm -rf /var/cache/apk/*
RUN openssl version
RUN node --version
ENV MONGO_URI=mongodb://172.17.0.1:27017/dbname
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY ./package.json ./
USER node
RUN npm install
USER root
RUN apk del git
USER node
COPY --chown=node:node . ./
EXPOSE 8080
CMD ["npm","start"]

Related

Docker container exit with error code error libcurl not found

I am building a container for a Rust app deployed on Argonaut, but it is not able to start. Here is the Dockerfile.
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
It builds successfully, but when it runs it exits with error code 127.
linkedin-leadr-1 | /app/target/release/linkedin: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
I have not found what's wrong with it, even though I am installing libcurl4; my Docker container is not able to find it. Can you please give me the solution?
You install libcurl4 in your build environment but not in your execution environment, so that's most likely the reason.
There are two ways to solve this:
Install libcurl4 in your final image, or
Link statically by replacing cargo build with
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --target=x86_64-unknown-linux-musl --release
The --release flag should be added either way, as I'm sure you don't want to deliver unoptimized debug builds to your end users. ;)
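For reference, --release also changes the directory the binary lands in, so the COPY in the runtime stage has to follow suit; a sketch of both variants (binary name linkedin as in your Dockerfile):
# with a plain `cargo build --release`:
COPY --from=builder /app/target/release/linkedin /app/target/release/linkedin
# with the musl target shown above, the path gains the target triple:
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/linkedin /app/target/release/linkedin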
Note that if you choose to install libcurl4 in your final image, you need to clean up the apt cache afterwards, otherwise your image grows immensely:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
    libcurl4 \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
The full Dockerfile with libcurl4 installed would then look like this:
FROM rust:1.64.0-buster AS builder
WORKDIR /app
ARG TOKEN
ARG DATABASE_URL
RUN git config --global url."https://${TOKEN}:@github.com/".insteadOf "https://github.com/"
COPY . .
ENV CARGO_NET_GIT_FETCH_WITH_CLI true
RUN rustup component add rustfmt
RUN apt-get update -y && apt-get install git wget ca-certificates curl gnupg lsb-release cmake libcurl4 -y
RUN cargo build
FROM debian:buster-slim
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --yes \
    libcurl4 \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY --from=builder /app/target/debug/linkedin /app/target/release/linkedin
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
CMD ["/app/target/release/linkedin"]
EXPOSE 3000
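For completeness, the two ARGs would be supplied at build time roughly like this (the image tag linkedin is my assumption); note that --build-arg values stay visible in docker history, so a real ${TOKEN} should not be baked into images you publish:
docker build --build-arg TOKEN=<token> --build-arg DATABASE_URL=<db-url> -t linkedin .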

version `GLIBC_2.29' not found

I am basing my Dockerfile on the Rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y musl-tools && \
    rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies, then move the build artifacts to a much smaller (but compatible) image. rust:1.50 is based on Debian Buster, so debian:buster-slim ships the same glibc and the binary will find the symbol versions it was linked against.
FROM rust:1.50 as builder
# WORKDIR creates /bot if it does not already exist
WORKDIR /bot
COPY . ./
RUN cargo clean && \
    cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
    && useradd -g $APP_USER $APP_USER \
    && mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]

upload the ssh folder with keys to docker

I need to get the .ssh folder with the keys into the Docker image.
Dockerfile:
FROM python:3.6-alpine3.12
RUN mkdir /code && mkdir /data
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && apk add git
RUN mkdir /root/.ssh && -v ~/.ssh:/root/.ssh
RUN apk add -y wget
Error when building:
/bin/sh: illegal option -
The command '/bin/sh -c -v ~/.ssh:/root/.ssh' returned a non-zero code: 2
The shell does not recognize -v ~/.ssh:/root/.ssh because it is not a shell command at all: -v is a docker run option for mounting a volume, so it cannot be used inside a RUN instruction.
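For comparison, this is where -v actually belongs, on docker run once the image is built (the image name my-image is a placeholder):
docker run -v ~/.ssh:/root/.ssh:ro my-image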
Try this:
FROM python:3.6-alpine3.12
ADD . /code
WORKDIR /code
RUN pip3 install -r requirement && \
    apk add --no-cache git wget && \
    mkdir /data
# The .ssh folder must be inside the build context; COPY cannot read host paths like $HOME
COPY .ssh /root/.ssh
PS: I added some Dockerfile optimizations for you
EDIT:
Copying sensitive data into your container is not a good idea unless you really know what you are doing.
If your application needs to connect to a remote server you own, it would be better to generate new keys specifically for it and distribute the public key to that server.
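If the keys are only needed while building (for example to install from a private Git repository), a safer alternative is BuildKit's SSH mount, which lends the host's ssh-agent to a single RUN step without ever writing a key into a layer; a minimal sketch (the repository URL is a placeholder):
# syntax=docker/dockerfile:1
FROM python:3.6-alpine3.12
RUN apk add --no-cache git openssh-client
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The key is only visible to this single RUN step, via the host's ssh-agent
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git /code
Build it with DOCKER_BUILDKIT=1 docker build --ssh default . while an ssh-agent holding the key is running.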

Setting up our Rasa/NLU container, error?

I have this file Dockerfile.nlu
FROM chatbot/spacy:latest
WORKDIR /app
COPY nlu ./agent_nlu
RUN python -m rasa_nlu.train --config agent_nlu/config.yml --data agent_nlu/data/ --path agent_nlu/agent --fixed_model_name default
and I get the error below:
$ sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .
Sending build context to Docker daemon 9.216kB
Step 1/4 : FROM chatbot/spacy:latest
---> 496dc6a38abb
Step 2/4 : WORKDIR /app
---> Using cache
---> 7f02012c8452
Step 3/4 : COPY nlu ./agent_nlu
COPY failed: stat /var/lib/docker/tmp/docker-builder363868051/nlu: no such file or directory
It doesn't look like Docker can find the nlu directory. Are you sure it exists? Are you sure that you are executing the command from the correct directory?
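Note that COPY resolves paths against the build context (the trailing . in your build command), not against the directory containing the Dockerfile, so with -f docker/Dockerfile.nlu the layout would need to look roughly like this:
.                          # build context, where docker build runs
├── docker/
│   └── Dockerfile.nlu
└── nlu/                   # COPY nlu ./agent_nlu expects this at the context root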
But you also aren't installing Rasa at all, or any of its requirements. Is there a reason you aren't using the pre-built Rasa images available on Docker Hub?
Here is a fully functional Dockerfile pulled from their repo.
FROM python:3.6-slim
ENV RASA_NLU_DOCKER="YES" \
    RASA_NLU_HOME=/app \
    RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python3.6/dist-packages
# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq \
    && apt-get install -y --no-install-recommends build-essential git-core openssl libssl-dev libffi6 libffi-dev curl \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR ${RASA_NLU_HOME}
COPY . ${RASA_NLU_HOME}
# use bash always
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN pip install -r alt_requirements/requirements_spacy_sklearn.txt
RUN pip install -e .
RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.0.0/en_core_web_md-2.0.0.tar.gz --no-cache-dir > /dev/null \
    && python -m spacy link en_core_web_md en \
    && pip install https://github.com/explosion/spacy-models/releases/download/de_core_news_sm-2.0.0/de_core_news_sm-2.0.0.tar.gz --no-cache-dir > /dev/null \
    && python -m spacy link de_core_news_sm de
COPY sample_configs/config_spacy.yml ${RASA_NLU_HOME}/config.yml
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start", "-c", "config.yml", "--path", "/app/projects"]

Convert from multi stage build to single

As I'm limited to Docker 1.x instead of 17.x on my cluster, I need some help converting this multi-stage build to a valid build for the older Docker version.
Could someone help me?
FROM node:9-alpine as deps
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
    apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
    && npm config set depth 0 \
    && npm install --only=production \
    && cp -R node_modules/ ./prod_node_modules \
    && npm install
FROM deps as test
RUN rm -r ./prod_node_modules \
    && npm run lint
FROM node:9-alpine
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
COPY --from=deps /app .
COPY --from=deps /app/prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Currently it gives me an error on "FROM node:9-alpine as deps".
"FROM node:9-alpine as deps" means you are defining an intermediate image that you can later copy files from with COPY --from=deps.
Having a single image means you don't need COPY --from anymore, and you don't need "as deps", since everything happens in the same image (which will be bigger as a result).
So:
FROM node:9-alpine
ENV NODE_ENV=development
RUN apk update && apk upgrade && \
    apk add --no-cache bash
WORKDIR /app
COPY . .
RUN npm set progress=false \
    && npm config set depth 0 \
    && npm install --only=production \
    && cp -R node_modules/ ./prod_node_modules \
    && npm install
# Lint without deleting prod_node_modules: unlike the multi-stage version,
# the same filesystem is reused below, so it must survive this step
RUN npm run lint
RUN apk add --update tzdata
ENV PORT=3000
ENV NODE_ENV=production
WORKDIR /root/
# Copy the app's contents into /root and swap in the production node_modules
RUN cp -r /app/. . \
    && rm -rf ./node_modules \
    && mv ./prod_node_modules ./node_modules
EXPOSE 3000
CMD ["node", "index.js"]
Only one FROM here.
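As an aside, multi-stage builds only require Docker 17.05 or newer, so it's worth confirming what the daemon actually runs before giving them up:
docker version --format '{{.Server.Version}}'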
