sbt-native-packager docker: How to add entry in /etc/hosts

I'm using the docker plugin of sbt-native-packager to build a Docker image. I would like my image to have an additional entry in /etc/hosts.
I've tried the following:
dockerCommands in Docker := dockerCommands.value.flatMap {
  case Cmd("FROM", _) =>
    List(
      Cmd("FROM", "anapsix/alpine-java"),
      Cmd("ENV", "JAVA_MIN_MEM", "1024m"),
      Cmd("RUN", "echo 8.8.8.8 foo >> /etc/hosts")
    )
  case other => List(other)
}
Unfortunately it doesn't seem to work. When I start a container based on this image, the /etc/hosts does not have the extra entry.
It looks like it's actually writing the file because I tried the following instead:
....
Cmd("RUN", "echo 8.8.8.8 foo >> /etc/hosts; ping -c 4 foo")
....
And I'm getting as output the following:
[info] Step 9/15 : RUN echo 8.8.8.8 foo >> /etc/hosts; ping -c 4 foo
[info] ---> Running in b6d7ba25f96f
[info] PING foo (8.8.8.8): 56 data bytes
[info] 64 bytes from 8.8.8.8: seq=0 ttl=37 time=5.521 ms
[info] 64 bytes from 8.8.8.8: seq=1 ttl=37 time=3.188 ms
[info] 64 bytes from 8.8.8.8: seq=2 ttl=37 time=6.012 ms
[info] 64 bytes from 8.8.8.8: seq=3 ttl=37 time=4.192 ms
So it looks like the modified /etc/hosts is being overridden!
What is the correct way to do this?

The file /etc/hosts is managed by Docker and cannot be customized as part of building an image.
As you already figured out, you can add a custom entry using RUN echo 8.8.8.8 foo >> /etc/hosts; <some_command_requiring_custom_hosts_file>. But this modification is only available during the execution of that particular RUN command.
In case you need custom entries when running containers use the --add-host parameter of docker run (see docs).
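For example, a minimal sketch (the image name my-image is hypothetical):
# Adds the line "8.8.8.8 foo" to /etc/hosts inside the container at start
docker run --add-host foo:8.8.8.8 my-image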
In general it is best practice not to bake configuration details into a Docker image. Applying configuration only at the time you run your containers helps keep the images portable.

Yes, Docker overrides that file (in reality, it is a file managed by Docker on the host system that is mounted into that location when the container starts), so any change you make there at build time will be overridden.
One option is to change the Docker entrypoint so that, instead of pointing directly at your application startup script, it points at a script that makes the change and then runs your app startup script.
So in plain Docker (sorry, I have never used the sbt docker plugin), instead of having the entrypoint run your app start script directly (say /usr/bin/myapp):
ENTRYPOINT /usr/bin/myapp
you would have
RUN echo "echo 8.8.8.8 foo > /etc/hosts" >> /startup.sh
RUN echo "/usr/bin/myapp" >> /startup.sh
RUN chmod +x /startup.sh
ENTRYPOINT /startup.sh
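To verify the result, a hedged check (the image and container names are hypothetical):
# Start a container; startup.sh appends the entry and then launches the app
docker run -d --name myapp-test myapp-image
# Then inspect the hosts file the startup script produced
docker exec myapp-test cat /etc/hosts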

Related

Environment variables on docker container are not overridden

I am kind of new to Docker and I have a Next.js application running in a Docker container. The app uses some environment variables to communicate with the server. Here is where the problem appears. For some reason, when running the container and passing the env variables, they are created OK. Using docker inspect containerId I can see the correct value. However, when the app makes the real call to the server, the value (the server api) is the one that was set up at build time.
Building the image and passing the parameter, let's say SERVER_API=127.1.2.3:
docker build -t miTestImage --build-arg SERVER_API=$(SERVER_API) --rm --no-cache myNextjsApp/
By running the following command I can see the correct value was set up.
docker image inspect imageId
BUT, when running the image
docker run -itd -e SERVER_API=http://127.0.3.9:4000 --name myContianerApp -p 5000:5000 --rm imageId
and sending a request to the server, it uses the old value (127.1.2.3) instead of the new one (http://127.0.3.9:4000).
And by doing:
docker inspect myContianerApp
I can see the new value properly added, but I don't understand why it is not being picked up by the app.
I was reading this article where they have the following diagram. I'm doing the same steps but it is not working for me. Am I missing something?
Any help/clue is really appreciated.
Here's a working example:
FROM busybox
ARG SERVER="google.com"
ENV SERVER=${SERVER}
ENTRYPOINT "/bin/ash" "-c" "ping ${SERVER}"
Then:
docker build --tag=62270940 --file=./Dockerfile .
docker inspect 62270940 --format="{{.Config.Env}}"
[PATH=... SERVER=google.com]
docker run \
--interactive --tty \
62270940
PING google.com (172.217.14.238): 56 data bytes
64 bytes from 172.217.14.238: seq=0 ttl=52 time=13.850 ms
64 bytes from 172.217.14.238: seq=1 ttl=52 time=11.494 ms
docker run \
--interactive --tty \
--env=SERVER="stackoverflow.com" \
62270940
PING stackoverflow.com (151.101.1.69): 56 data bytes
64 bytes from 151.101.1.69: seq=0 ttl=55 time=13.763 ms
64 bytes from 151.101.1.69: seq=1 ttl=55 time=13.800 ms
64 bytes from 151.101.1.69: seq=2 ttl=55 time=25.678 ms
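If an override still seems to be ignored, it can help to confirm what the running container actually sees; a hedged check using the container name from the question:
# Print the container's runtime environment
docker exec myContianerApp env | grep SERVER_API
If the value is correct there but the app still uses the old one, the variable was most likely baked into the build output (common for client-side Next.js variables), so a runtime -e cannot change it.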

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, this test runs just fine. But when it runs on hosted VSTS, thus a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And in case the hard-coded port doesn't work because the port is unavailable, the error should look different: in that case, docker run should fail because the port can't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent itself runs in a Docker container. When the Docker container for Apache is started, it runs at the same level as the VSTS build agent's container, not nested inside it.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
    apt-get update
    apt-get install -y net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup check inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to get the ip address of the host. Note that this only works if the container's network default gateway actually is the host.
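On images that ship iproute2 instead of net-tools, a hedged alternative that avoids the extra install (same assumption: the default gateway is the Docker host):
host=$(ip route | awk '/^default/ {print $3}')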
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/

Ping: command not found when using hyperledger fabric image

I am a beginner with Docker, so please correct me if anything is wrong.
As shown in this docker swarm tutorial https://www.youtube.com/watch?v=nGSNULpHHZc , I am trying to set up a multi-host environment for my Hyperledger Fabric application.
I am using two Oracle Linux servers, namely server 1 and server 2.
I connected both servers to the docker swarm as managers and created an overlay network called my-net.
I followed the same syntax given in the above-mentioned tutorial and created the service as shown below:
docker service create --name myservice --network my-net --replicas 2 alpine sleep 1d
As expected, it created one container on each server.
Say, for example, server 1's container IP is 10.0.0.4 and server 2's container IP is 10.0.0.5.
Now, when I ping from the second server's container to the first server's container as shown below, it pings fine.
# docker exec -it ContainerID sh
/ # ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.082 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.067 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.070/0.082 ms
Now, I am trying to create my second service, myservice1, as shown below:
docker service create --name myservice1 --network my-net --replicas 2 hyperledger/fabric-peer sleep 1d
As expected, this also created one container on each server.
Say, for example, server 1's container IP is 10.0.0.6 and server 2's container IP is 10.0.0.7.
Now, I am trying to ping from the second server's container to the first server's container as shown below.
This time I am getting a "ping: not found" error:
# docker exec -it ContainerID sh
# ping 10.0.0.6
sh: 1: ping: not found
Can anyone please help me figure out what the problem is with the second service, myservice1?
The Fabric Docker images are based on a bare-bones Ubuntu base image and do not include utilities like ping. Once you "exec" into the peer containers, you can use "apt" to install ping:
apt-get update
apt-get install inetutils-ping
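A hedged one-liner doing the same from the host (ContainerID as in the question; -y added for non-interactive use):
docker exec -it ContainerID sh -c 'apt-get update && apt-get install -y inetutils-ping'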
Expanding on Gari Singh's answer: on a Fabric network I've spun up this week, inetutils has been split into different packages:
# apt-cache search inetutils
inetutils-ftp - File Transfer Protocol client
inetutils-ftpd - File Transfer Protocol server
inetutils-inetd - internet super server
inetutils-ping - ICMP echo tool
inetutils-syslogd - system logging daemon
inetutils-talk - talk to another user
inetutils-talkd - remote user communication server
inetutils-telnet - telnet client
inetutils-telnetd - telnet server
inetutils-tools - base networking utilities (experimental package)
so to install e.g. ping the correct command has become:
# apt-get install inetutils-ping
The Ubuntu version of the peer is:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"

Not able to connect to other hosts inside a docker container

I solved it, see the edit at the end of the description.
I'm using CentOS 7 as the host and running Docker version 17.05.0-ce.
I'm able to pull images on to the host.
From inside a container I'm able to ping the docker interface, and I'm also able to ping the host machine. But that's it: I'm not able to ping any other hosts, not the DNS on the local network, not Google, nothing. I guess it's something with the routing, but I can't figure it out.
Anyone got an idea?
This is (obviously) not about connecting to other containers on the same host, but probably a problem with the routing or configuration in docker.
jonmat ~ $ docker -v
Docker version 17.05.0-ce, build 89658be
# pulling images works fine, so the engine can connect to the internet
jonmat ~ $ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:7b848083f93822dd21b0a2f14a110bd99f6efb4b838d499df6d04a49d0debf8b
Status: Downloaded newer image for alpine:latest
# pinging google dns from the host is no problem
jonmat ~ $ ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=5.16 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.160/5.160/5.160/0.000 ms
# pinging google dns from inside the container won't work, probably some kind of routing issue?
jonmat ~ $ docker run -it --rm alpine ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I found the problem myself. Someone other than me has also been using the host, and they added the option "--iptables=false" to dockerd; I removed this and it solved my problem.
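If you suspect the same kind of daemon misconfiguration, a hedged way to check (the config file may not exist on every distro):
# Look for iptables=false in the daemon command line or its config file
ps -ef | grep dockerd
cat /etc/docker/daemon.json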
Assuming your container is running with the name alpine, can you try the command below?
docker exec -t alpine ping 8.8.8.8
In the example given above, it seems you are missing some options; try this:
docker run -it --rm -t alpine ping -c1 8.8.8.8
If the container is already running, use docker exec as posted above. (I would like to combine both answers, but unfortunately I am not finding an option to delete one and add it to the first answer.)
Refer to the docker exec documentation for more details.

Add to container's /etc/hosts using Fig?

I'm trying to configure fig so that I can connect to my database server without specifying a fully qualified domain name. The database is running on bare metal (not in docker). On the host, glinda.local is specified in /etc/hosts and I'd like the container to mimic this behavior (though not rely on the host's config).
I found this suggestion on github, but it fails since /etc/hosts is on a read-only file system.
So the question remains, how can I add glinda.local from fig.yml to /etc/hosts inside my docker container?
As of Docker v1.3.1 (I think), the option --add-host is available in docker run. Unfortunately this option has not been merged to fig:master yet, but there is a PR with it. Once merged (or using that branch) you should be able to use it in this way:
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
> extra_hosts:
>   - docker: 162.242.195.82
>   - fig: 50.31.209.229
An entry with the ip address and hostname will be created in /etc/hosts inside containers for this service, e.g.:
> 162.242.195.82 docker
> 50.31.209.229 fig
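Applied to the question, a hedged fig.yml sketch following the syntax quoted above (service name and ip address assumed):
web:
  build: .
  extra_hosts:
    - glinda.local: 192.168.99.1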
What makes you think /etc/hosts is read-only? The following works for me with Docker 1.5:
$ docker run -it debian
root@0989fd55e8fa:/# echo "127.0.0.1 test" >> /etc/hosts
root@0989fd55e8fa:/# ping test
PING test (127.0.0.1): 48 data bytes
56 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.078 ms
56 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms
^C--- test ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.068/0.073/0.078/0.000 ms
Are you saying this doesn't work for you? If the above works, you should be able to add what you need into an entrypoint script.
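If you do end up scripting it, a minimal entrypoint sketch (the ip address is hypothetical):
#!/bin/sh
# entrypoint.sh: append the mapping at container start, then hand off to the app
echo "192.168.99.1 glinda.local" >> /etc/hosts
exec "$@"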
