Environment variables on a Docker container are not overridden

I am kind of new to Docker and I have a Next.js application running in a Docker container. The app uses some environment variables to communicate with the server, and this is where the problem appears. For some reason, when I run the container and pass the environment variables, they are created correctly: using docker inspect containerId I can see the right value. However, when the app makes the actual call to the server, the value (the server address) is the one that was set at build time.
I build the image passing the parameter; let's say SERVER_API=127.1.2.3:
docker build -t miTestImage --build-arg SERVER_API=$(SERVER_API) --rm --no-cache myNextjsApp/
By running the following command I can see the correct value was set up.
docker image inspect imageId
BUT, when running the image
docker run -itd -e SERVER_API=http://127.0.3.9:4000 --name myContainerApp -p 5000:5000 --rm imageId
and sending a request to the server, the app uses the old value (127.1.2.3) instead of the new one (http://127.0.3.9:4000).
And by doing:
docker inspect myContainerApp
I can see the new value properly set, but I don't understand why it is not being picked up by the app.
I was reading an article that describes exactly these steps with a diagram, and I'm doing the same, but it is not working for me. Am I missing something?
Any help or clue is really appreciated.

Here's a working example:
FROM busybox
ARG SERVER="google.com"
ENV SERVER=${SERVER}
ENTRYPOINT "/bin/ash" "-c" "ping ${SERVER}"
Then:
docker build --tag=62270940 --file=./Dockerfile .
docker inspect 62270940 --format="{{.Config.Env}}"
[PATH=... SERVER=google.com]
docker run \
--interactive --tty \
62270940
PING google.com (172.217.14.238): 56 data bytes
64 bytes from 172.217.14.238: seq=0 ttl=52 time=13.850 ms
64 bytes from 172.217.14.238: seq=1 ttl=52 time=11.494 ms
docker run \
--interactive --tty \
--env=SERVER="stackoverflow.com" \
62270940
PING stackoverflow.com (151.101.1.69): 56 data bytes
64 bytes from 151.101.1.69: seq=0 ttl=55 time=13.763 ms
64 bytes from 151.101.1.69: seq=1 ttl=55 time=13.800 ms
64 bytes from 151.101.1.69: seq=2 ttl=55 time=25.678 ms
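Applied back to the question, the same pattern should carry over as long as the application reads the variable at run time. A minimal sketch (the real Dockerfile, base image and build steps are not shown in the question, so these are assumptions):
# assumed base image; the question doesn't show the real Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY . .
# build-time default, overridable with --build-arg SERVER_API=...
ARG SERVER_API=http://127.1.2.3
# promoted to an environment variable so docker run -e SERVER_API=... can override it
ENV SERVER_API=${SERVER_API}
RUN npm ci && npm run build
EXPOSE 5000
CMD ["npm", "run", "start"]
Note that values referenced in client-side Next.js code are inlined into the bundle at build time, so only values read on the server at run time can be overridden with -e.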

Related

Ping: command not found when using hyperledger fabric image

I am a beginner to Docker, so please correct me if anything is wrong.
As shown in this Docker swarm tutorial (https://www.youtube.com/watch?v=nGSNULpHHZc), I am trying to set up a multi-host environment for my Hyperledger Fabric application.
I am using two Oracle Linux servers, namely server 1 and server 2.
I connected both servers to a Docker swarm as managers and created an overlay network called my-net.
Following the syntax given in the tutorial, I created a service:
docker service create --name myservice --network my-net --replicas 2 alpine sleep 1d
As expected, it created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.4 and the server 2 container IP is 10.0.0.5.
Now, when I ping the first server's container from the second server's container as shown below, it works.
# docker exec -it ContainerID sh
/ # ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.082 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.062 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.067 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.062/0.070/0.082 ms
Now I create my second service (myservice1) using the syntax below:
docker service create --name myservice1 --network my-net --replicas 2 hyperledger/fabric-peer sleep 1d
As expected, this also created one container on each server.
Say, for example, the server 1 container IP is 10.0.0.6 and the server 2 container IP is 10.0.0.7.
Now I try to ping the first server's container from the second server's container as shown below.
This time I get a "ping: not found" error:
# docker exec -it ContainerID sh
# ping 10.0.0.6
sh: 1: ping: not found
Can anyone please help me figure out what the problem is with the second service, myservice1?
The Fabric Docker images are based on a bare-bones Ubuntu base image and do not include utilities like ping. Once you exec into the peer containers, you can use apt to install ping:
apt-get update
apt-get install inetutils-ping
Expanding on Gari Singh's answer: on a Fabric network I spun up this week, inetutils has been split into different packages:
# apt-cache search inetutils
inetutils-ftp - File Transfer Protocol client
inetutils-ftpd - File Transfer Protocol server
inetutils-inetd - internet super server
inetutils-ping - ICMP echo tool
inetutils-syslogd - system logging daemon
inetutils-talk - talk to another user
inetutils-talkd - remote user communication server
inetutils-telnet - telnet client
inetutils-telnetd - telnet server
inetutils-tools - base networking utilities (experimental package)
so to install e.g. ping, the correct command has become:
# apt-get install inetutils-ping
The Ubuntu version of the peer is:
# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"
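If you need ping available without exec'ing in and installing it each time, you could also bake it into a derived image. A sketch (the base tag is an assumption; match it to the Fabric version you actually run):
# hypothetical derived image: stock peer plus the ping utility
FROM hyperledger/fabric-peer:1.4
RUN apt-get update && \
    apt-get install -y inetutils-ping && \
    rm -rf /var/lib/apt/lists/*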

Not able to connect to other hosts inside a docker container

I solved it; see the edit at the end of the description.
I'm using CentOS 7 as the host and running Docker version 17.05.0-ce.
I'm able to pull images onto the host.
From inside a container I'm able to ping the Docker interface, and I'm also able to ping the host machine. But that's it: I'm not able to ping any other hosts, not the DNS server on the local network, not Google, nothing. I guess it's something to do with the routing, but I can't figure it out.
Anyone got an idea?
This is (obviously) not about connecting to other containers on the same host, but probably a problem with the routing or configuration of Docker.
jonmat ~ $ docker -v
Docker version 17.05.0-ce, build 89658be
# pulling images works fine, so the engine can connect to the internet
jonmat ~ $ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:7b848083f93822dd21b0a2f14a110bd99f6efb4b838d499df6d04a49d0debf8b
Status: Downloaded newer image for alpine:latest
# pinging the Google DNS from the host is no problem
jonmat ~ $ ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=5.16 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.160/5.160/5.160/0.000 ms
# pinging google dns from inside the container won't work, probably some kind of routing issue?
jonmat ~ $ docker run -it --rm alpine ping -c1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
EDIT:
I found the problem myself. Someone else had also been using the host and had added the option "--iptables=false" to dockerd. I removed it and that solved my problem.
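For anyone hitting the same thing, this is roughly where such a flag usually lives and how to back it out (paths and unit names can differ per install):
# check whether dockerd was started with --iptables=false
ps aux | grep dockerd
# or whether the daemon config disables it, e.g. { "iptables": false }
cat /etc/docker/daemon.json
# after removing the option, restart the daemon so it recreates its NAT rules
sudo systemctl restart docker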
Assuming your container is running with the name alpine, can you try the command below?
docker exec -t alpine ping 8.8.8.8
In the example given above, it seems you are missing some options; try this:
docker run -it --rm -t alpine ping -c1 8.8.8.8
If the container is already running, use docker exec as posted above. (I would like to combine both answers, but unfortunately I am not finding an option to delete this one and add it to the first answer.)
Refer to the docker exec documentation for more details.

sbt-native-packager docker: How to add entry in /etc/hosts

I'm using the docker plugin of sbt-native-packager to build a Docker image. I would like my image to have an additional entry in /etc/hosts.
I've tried the following:
dockerCommands in Docker := dockerCommands.value.flatMap {
  case cmd @ Cmd("FROM", _) =>
    List(Cmd("FROM", "anapsix/alpine-java")) ++ List(
      Cmd("ENV", "JAVA_MIN_MEM", "1024m"),
      Cmd("RUN", "echo 8.8.8.8 foo >> /etc/hosts")
    )
}
Unfortunately it doesn't seem to work. When I start a container based on this image, the /etc/hosts does not have the extra entry.
It looks like it's actually writing the file because I tried the following instead:
....
Cmd("RUN", "echo 8.8.8.8 foo >> /etc/hosts; ping -c 4 foo")
....
And I get the following output:
[info] Step 9/15 : RUN echo 8.8.8.8 foo >> /etc/hosts; ping -c 4 foo
[info] ---> Running in b6d7ba25f96f
[info] PING foo (8.8.8.8): 56 data bytes
[info] 64 bytes from 8.8.8.8: seq=0 ttl=37 time=5.521 ms
[info] 64 bytes from 8.8.8.8: seq=1 ttl=37 time=3.188 ms
[info] 64 bytes from 8.8.8.8: seq=2 ttl=37 time=6.012 ms
[info] 64 bytes from 8.8.8.8: seq=3 ttl=37 time=4.192 ms
So it looks like the modified /etc/hosts is being overridden!
What is the correct way to do this?
The file /etc/hosts is managed by Docker and cannot be customized as part of building an image.
As you already figured out, you can add a custom entry using RUN echo 8.8.8.8 foo >> /etc/hosts; <some_command_requiring_custom_hosts_file>. But this modification is only available during the execution of that particular RUN command.
In case you need custom entries when running containers use the --add-host parameter of docker run (see docs).
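For example, something along these lines (the image name is a placeholder; the host/IP pair mirrors the entry from the question):
docker run --add-host foo:8.8.8.8 my-sbt-image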
In general it is best practice not to include configuration details in a Docker image. Applying configuration only at the time you run your containers helps keep the images portable.
Yes, Docker overrides that file (in reality, it is a file present on the host system that is mounted into that location when the container starts), so any change you make there will be overridden.
One option would be to change the Docker entrypoint so that, instead of pointing to your application startup script, it points to a script that makes that change and then runs your app startup script.
So in plain Docker (sorry, I have never used the sbt docker plugin), instead of having an entrypoint for your app start script (say /usr/bin/myapp):
ENTRYPOINT /usr/bin/myapp
you would have
RUN echo "echo 8.8.8.8 foo > /etc/hosts" >> /startup.sh
RUN echo "/usr/bin/myapp" >> /startup.sh
RUN chmod +x /startup.sh
ENTRYPOINT /startup.sh

Is there a way to ping a Docker container using its hostname from another Docker container?

I am looking for a solution to ping a Docker container using its hostname from another Docker container.
I tried as follows:
Starting the first Docker container:
docker run --rm -ti --hostname=repohost --name=repo repo
Starting the second Docker container, linking it to the first and starting bash:
docker run --rm -ti --hostname=repo2host --link repo:rp repo2 /bin/bash
In the bash session started on repo2:
ping repohost
it remains pending without any result.
Can someone tell me if there is a solution for this?
You should be able to ping using the alias you gave in the link command (the part after the colon); in your case, ping rp should work.
The following works for me, given a running container called furious_turing:
$ docker run -it --link furious_turing:ft debian /bin/bash
root@06b18931d80b:/# ping ft
PING ft (172.17.0.3): 48 data bytes
56 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.136 ms
56 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.091 ms
56 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.092 ms
^C--- ft ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.091/0.106/0.136/0.000 ms
root@06b18931d80b:/#
If you need to ping on another name, you can add entries to /etc/hosts with the --add-host argument to docker run.
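For example, reusing the names from the question (the IP has to be the target container's actual address, e.g. taken from docker inspect):
docker run --rm -ti --add-host repohost:172.17.0.3 repo2 /bin/bash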
One way to achieve what you need would be with WeaveDNS.

Add to container's /etc/hosts using Fig?

I'm trying to configure fig so that I can connect to my database server without specifying a fully qualified domain name. The database is running on bare metal (not in docker). On the host, glinda.local is specified in /etc/hosts and I'd like the container to mimic this behavior (though not rely on the host's config).
I found this suggestion on github, but it fails since /etc/hosts is on a read-only file system.
So the question remains, how can I add glinda.local from fig.yml to /etc/hosts inside my docker container?
From Docker v1.3.1 (I think) the --add-host option is available for docker run. Unfortunately this option has not been merged into fig:master yet, but there is a PR for it. Once merged (or if you use that branch) you should be able to use it this way:
extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
  - "docker:162.242.195.82"
  - "fig:50.31.209.229"
An entry with the IP address and hostname will be created in /etc/hosts inside containers for this service, e.g.:
162.242.195.82 docker
50.31.209.229 fig
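Applied to this question, a fig.yml service might carry something like the following (the service name and IP address are placeholders for your actual setup):
web:
  build: .
  extra_hosts:
    - "glinda.local:192.168.0.10"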
What makes you think /etc/hosts is read-only? The following works for me with Docker 1.5:
$ docker run -it debian
root@0989fd55e8fa:/# echo "127.0.0.1 test" >> /etc/hosts
root@0989fd55e8fa:/# ping test
PING test (127.0.0.1): 48 data bytes
56 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.078 ms
56 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms
^C--- test ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.068/0.073/0.078/0.000 ms
Are you saying this doesn't work for you? If the above works, you should be able to add what you need into an entrypoint script.
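For instance, a minimal entrypoint wrapper along these lines (the IP address is a placeholder for wherever glinda.local actually resolves; COPY it into the image and point ENTRYPOINT at it):
#!/bin/sh
# hypothetical entrypoint.sh: append the extra hosts entry, then hand off to the real command
echo "192.168.0.10 glinda.local" >> /etc/hosts
exec "$@"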
