wget over proxy within Dockerfile doesn't work - docker

I have a server that has internet access only via a proxy.
So I'm using this docker-compose command:
docker-compose build --build-arg HTTP_PROXY=http://myproxy.server:3128 --build-arg HTTPS_PROXY=http://myproxy.server:3128 web
It starts by downloading the Alpine base image and executes commands like apk add that fetch from the internet, which works.
Except downloading with wget doesn't work:
wget -O libiconv.tar.gz "https://ftp.gnu.org/pub/gnu/libiconv/libiconv-$LIBICONV_VERSION.tar.gz"
Error:
Connecting to ftp.gnu.org (208.118.235.20:443)
wget: can't connect to remote host (208.118.235.20): Operation timed out
I have tried many variants, like adding a .wgetrc file with https_proxy set, or adding a parameter to the wget command:
wget -Y on
None of them works.
PS: the -e option doesn't exist in this wget:
wget: unrecognized option: e
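Worth noting: the wget in Alpine images is the BusyBox applet, which doesn't read .wgetrc (that's GNU wget behavior) and picks up its proxy only from the http_proxy/https_proxy environment variables. As a sketch (proxy URL copied from the question), those can be set explicitly on the download step:
RUN http_proxy=http://myproxy.server:3128 https_proxy=http://myproxy.server:3128 \
    wget -O libiconv.tar.gz "https://ftp.gnu.org/pub/gnu/libiconv/libiconv-$LIBICONV_VERSION.tar.gz"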

If you have connectivity from your host, just add this to your docker-compose file:
network_mode: host
And inside the container you'll have the same interfaces, ports, etc. as on the host, without having to modify resolv.conf.
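A minimal sketch of where that goes, assuming the web service from the question and Compose file format 3:
version: "3"
services:
  web:
    build: .
    network_mode: host
Note that network_mode: host applies to the running container; if the download fails during docker-compose build itself, Compose file format 3.4+ also accepts network: host under the build: key.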

Related

Dockerfile `RUN --mount=type=ssh` doesn't seem to work

In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double-checked that the private key is correct. Did I miss any step in using --mount=type=ssh?
The error has nothing to do with your private key; it is "Host key verification failed". That means ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask whether it should trust the host key, and when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
1. Inject a global known_hosts file when you build the image. First, get the host key for the hosts to which you'll be connecting and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Then configure your Dockerfile to install this where ssh will find it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
chown root:root /etc/ssh/ssh_known_hosts
2. Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
echo 'StrictHostKeyChecking no' >> /etc/ssh/ssh_config
3. Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
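Putting option 1 together with the pip install from the question, a minimal Dockerfile sketch might look like this (the base image and the plain pip call are assumptions standing in for the question's image and its .venv/bin/pip):
# syntax=docker/dockerfile:1
FROM python:3.10-slim
# ssh and git are needed so pip can reach the private repository
RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client \
    && rm -rf /var/lib/apt/lists/*
# Option 1: bake in the host key gathered with ssh-keyscan
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; chown root:root /etc/ssh/ssh_known_hosts
# The forwarded key/agent is mounted only for this RUN step
RUN --mount=type=ssh pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Build it with docker buildx build --ssh default=/path/to/the/private/key/id_rsa . (or --ssh default=$SSH_AUTH_SOCK to forward a running agent).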

Why are my published ports not working?

I've created a Docker image containing a Rust application that responds to GET requests on port 8000. The application itself is a basic example using the Rocket library (https://rocket.rs/); it looks like this:
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use] extern crate rocket;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
I have compiled this and called it server.
I then created a Dockerfile to host it:
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y curl
COPY server /root/
EXPOSE 8000
CMD ["/root/server"]
I build the Docker image with $ docker build -t port_test . and run it with $ docker run -p 8000:8000 port_test
At this point it all looks good
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3befe0c272f7 port_test "/root/server" 7 minutes ago Up 7 minutes 0.0.0.0:8000->8000/tcp festive_wilson
If I run curl within the container it works fine
$ docker exec -it 3befe0c272f7 curl -s localhost:8000
Hello, world!
However I can't do the same from the host
$ curl localhost:8000
curl: (56) Recv failure: Connection reset by peer
David Maze was correct. The problem was that the process was binding to localhost inside the container. I added a Rocket.toml file with the following entries:
[global]
address = "0.0.0.0"
[development]
address = "0.0.0.0"
and now it works fine.
Thanks David.
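If you'd rather not ship a config file, Rocket also reads ROCKET_* environment variables, so an alternative sketch is to set the address in the Dockerfile (same Dockerfile as the question, plus one ENV line):
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y curl
COPY server /root/
# Bind to all interfaces instead of Rocket's default of localhost,
# so the published port is reachable from outside the container
ENV ROCKET_ADDRESS=0.0.0.0
EXPOSE 8000
CMD ["/root/server"]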
Rocket has different default configurations per environment; try the staging or prod environment to get the behavior you want (source):
ROCKET_ENV=staging cargo run
See also:
Why can I not access this Rust simple server from the Internet?

Non-zero code returned when installing project - docker ubuntu

I tried to install a project with Docker, and after running the command:
docker-compose -f docker-compose.yml up -d --build
at step 5 I'm receiving a connection timeout saying:
curl: (7) Failed to connect to download.icu-project.org port 80: Connection timed out
ERROR: Service 'app' failed to build: The command '/bin/sh -c curl -sS -o /tmp/icu.tar.gz -L http://download.icu-project.org/files/icu4c/60.1/icu4c-60_1-src.tgz && tar -zxf /tmp/icu.tar.gz -C /tmp && cd /tmp/icu/source && ./configure --prefix=/usr/local && make && make install' returned a non-zero code: 7
I tried running it in bash, but when I type docker-compose ps I get no containers, so I don't know how to properly fix this.
Have any of you encountered this issue and want to share with me?
As is evident from the ERROR, the URL that's being accessed as part of the docker-compose build, i.e. http://download.icu-project.org/files/icu4c/60.1/icu4c-60_1-src.tgz, is inaccessible, which is why the build fails with a timeout. You can try accessing the URL from the browser, and yes, it's unreachable.
You must change it to something else, for example a mirror like http://ftp.lfs-matrix.net/pub/blfs/conglomeration/icu/. While you do that, please make sure that you are indeed downloading the archive from a reliable source.
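Concretely, the failing RUN step could be pointed at the mirror like this (a sketch, assuming the mirror serves the same icu4c-60_1-src.tgz tarball):
RUN curl -sS -o /tmp/icu.tar.gz -L http://ftp.lfs-matrix.net/pub/blfs/conglomeration/icu/icu4c-60_1-src.tgz \
    && tar -zxf /tmp/icu.tar.gz -C /tmp \
    && cd /tmp/icu/source \
    && ./configure --prefix=/usr/local \
    && make \
    && make install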

yum update fails - CentOS 7 - docker build

I have frequently built Docker containers using CentOS 7 as the base image, but now I am getting an error when I run:
RUN yum update add \
bash \
&& rm -rfv /var/cache/apk/*
ERROR:
Loaded plugins: fastestmirror, ovl
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
Contact the upstream for the repository and get them to fix the problem.
Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
`subscription-manager repos --disable=<repoid>`
Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64 Could not retrieve
mirrorlist
http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org;
Name or service not known" The command '/bin/sh -c yum update add
bash && rm -rfv /var/cache/apk/*' returned a non-zero code: 1
I also saw a few resolutions suggesting "dhclient", but this error happens when I do docker-compose build.
I ran into this problem attempting to run the same Dockerfile, which fetched several software packages using yum, on two different platforms; one macOS, the other an Ubuntu 16.04-based Linux OS (elementaryOS Loki), both using the official packages from docker.com.
My theory is that the Linux package is just more restrictive out of the box, security-wise, than the macOS one. Maybe this is configurable with some kind of /etc/something config file, but I don't have the expertise with Docker to say for sure. EDIT: See my comment below.
What I can say is there was no additional configuration required for me on macOS (10.11 El Capitan); just docker build . worked fine, and yum processes from the Dockerfile were able to reach all the remote repositories.
In the Ubuntu-derived Linux distro, however, it was necessary to use
docker build --network host .
followed by
docker run -it --network host <image> <command>
when I wanted to run a process inside that image which required internet access.
This may be the case for other Debian-derived systems as well.
There are, of course, security considerations which need to be taken into account when allowing a long-running Docker container to communicate through the host network adapter, unrestricted, and one would do well to review the appropriate documentation in that regard.
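Since the question hits this during docker-compose build, the Compose-file equivalent of --network host is the network key under build (supported in Compose file format 3.4+; the service name app here is a placeholder):
services:
  app:
    build:
      context: .
      network: host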
My assumption is that for some reason network behavior in docker varies based on distribution.
Try to use:
docker run -d --net mybridge centos
or
docker network create -d bridge mybridge
docker run -d --net mybridge centos
It should start working. Or just edit /etc/hosts and add the mirror address, e.g.:
67.219.148.138 mirrorlist.centos.org
In my case the root cause was that the container proxy settings were wrong. I corrected the proxy settings in the file below and it worked:
/root/.docker/config.json
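For reference, the proxy stanza in that file looks something like the following sketch (the proxy host and port are placeholders; Docker passes these settings into containers and builds automatically):
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}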

Kafka Docker - Can't produce or consume from outside of docker container

Kafka works fine in the Docker container. I can use docker exec -it [container name] [kafka script] and successfully create topics and produce/consume messages, but when I try from outside of the Docker container, using local Kafka scripts, I can only create and list topics. Producing and consuming messages throws errors:
Producing:
~/development/lib/kafka/kafka_2.11-0.10.0.0$ bin/kafka-console-producer.sh --broker-list $(docker-machine ip kafka):9092 --topic test
asdf
[2016-09-18 10:13:48,999] ERROR Error when sending message to topic test with key: null, value: 4 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for test-0
Consuming:
~/development/lib/kafka/kafka_2.11-0.10.0.0$ bin/kafka-console-consumer.sh --zookeeper $(docker-machine ip kafka):2181 --topic test --from-beginning
[2016-09-18 09:57:10,389] WARN Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [BrokerEndPoint(0,ba762186182f,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2016-09-18 09:57:10,392] WARN [console-consumer-34526_3c15c2c24040-1474210630122-9404562b-leader-finder-thread], Failed to find leader for Set([test,0]) (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
kafka.common.KafkaException: fetching topic metadata for topics [Set(test)] from broker [ArrayBuffer(BrokerEndPoint(0,ba762186182f,9092))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:94)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
... 3 more
I'm using spotify/docker-kafka, but I upgraded it to 0.10.0.0 and used some suggestions from jshark that set up advertised.listeners. I'm running on a Mac. I've created a docker-machine called kafka. Here is my docker run:
docker run -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`docker-machine ip kafka` --env ADVERTISED_PORT=9092 kafka
Here is my Dockerfile:
# Kafka and Zookeeper
FROM java:openjdk-8-jre
ENV DEBIAN_FRONTEND noninteractive
ENV SCALA_VERSION 2.11
ENV KAFKA_VERSION 0.10.0.0
ENV KAFKA_HOME /opt/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION"
# Install Kafka, Zookeeper and other needed things
RUN apt-get update && \
apt-get install -y zookeeper wget supervisor dnsutils && \
rm -rf /var/lib/apt/lists/* && \
apt-get clean && \
wget -q http://apache.mirrors.spacedump.net/kafka/"$KAFKA_VERSION"/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -O /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz && \
tar xfz /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -C /opt && \
rm /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
ADD scripts/start-kafka.sh /usr/bin/start-kafka.sh
# Supervisor config
ADD supervisor/kafka.conf supervisor/zookeeper.conf /etc/supervisor/conf.d/
# 2181 is zookeeper, 9092 is kafka
EXPOSE 2181
EXPOSE 9092
CMD ["supervisord", "-n"]
scripts/start-kafka.sh
This worked for me: https://stackoverflow.com/a/37655203/1839580
My summary: the ADVERTISED_HOST environment variable in the spotify/kafka container needs to change depending on whether your service is operating inside or outside the container. I am using Docker for Mac and I have my Docker network set to bridged. Outside of Docker, ADVERTISED_HOST needed to be set to localhost; inside of Docker, it was set to myproject_kafka_1 or whatever it ends up being on your system. To fix it, I added an entry to my macOS hosts file that mapped 127.0.0.1 to myproject_kafka_1. I don't like messing with my hosts file, but it fixed this issue for me.
127.0.0.1 localhost
127.0.0.1 myproject_kafka_1
I wasn't able to interact with Kafka from outside its container until I added the following entries to server.properties:
listener.security.protocol.map=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
advertised.listeners=INSIDE://${container_ip}:9092,OUTSIDE://${outside_host_ip}:29092
listeners=INSIDE://:9092,OUTSIDE://:29092
inter.broker.listener.name=INSIDE
I posted a more complete answer here. Hope others won't spend as much time on this as I did.
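To make the mapping concrete, here are the same entries with hypothetical addresses filled in (172.17.0.2 standing in for the container IP and 192.168.99.100 for the docker-machine/host IP; substitute your own):
listener.security.protocol.map=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
listeners=INSIDE://:9092,OUTSIDE://:29092
advertised.listeners=INSIDE://172.17.0.2:9092,OUTSIDE://192.168.99.100:29092
inter.broker.listener.name=INSIDE
Clients inside the Docker network connect on 9092 and are handed the INSIDE address; external clients connect on the published 29092 port (e.g. docker run -p 29092:29092 ...) and are handed the host address.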
Cleaner would be to set advertised.listeners=PLAINTEXT://host-ip:port, since advertised.host.name and advertised.port are deprecated in the Kafka server.properties file.
If you set the bind address in listeners to 0.0.0.0 it will accept requests from anywhere, but it's insecure (and advertised.listeners itself must be a routable address, not 0.0.0.0).
