Envoy proxy throws invalid path error in docker swarm

I have a three-node cluster:
one manager
two workers
I am trying to route to services running in swarm mode through an Envoy proxy.
To do so I have to pass a configuration file to Envoy, which is also running in swarm mode,
so I ran
docker service create -d -p 8080:10000 --network=my-bridge --mount type=bind,src=/media/sf_envoy/envoy_proxy,dst=/etc/envoy/custom envoyproxy/envoy:v1.20-latest envoy -c /etc/envoy/custom/http.yaml
It then does not run any container. When I check with docker service logs SERVICE_ID, it shows the error invalid path: /etc/envoy/custom/http.yaml.
But when I run this
docker service create -d -p 8080:10000 --network=my-bridge --mount type=bind,src=/media/sf_envoy/envoy_proxy,dst=/etc/envoy/custom envoyproxy/envoy:v1.20-latest
it starts. To verify the mount I entered the container and checked: it is successfully mounted and shows everything stored in the envoy_proxy directory, yet the first command above still reports invalid path.
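For reference, the same service can also be declared as a stack file (a sketch, assuming Compose v3 syntax; the image, paths, ports, and network are taken from the command above):

```yaml
version: "3.8"
services:
  envoy:
    image: envoyproxy/envoy:v1.20-latest
    command: envoy -c /etc/envoy/custom/http.yaml
    ports:
      - "8080:10000"
    networks:
      - my-bridge
    volumes:
      # A bind mount is resolved on whichever node the task is scheduled,
      # so /media/sf_envoy/envoy_proxy must exist on every node.
      - /media/sf_envoy/envoy_proxy:/etc/envoy/custom
networks:
  my-bridge:
    external: true
```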

Related

Run a .war on a Docker container

I'm running a Java web application in a Docker container with this command:
PS C:\Users\Marco\test_workspace> docker run -v test_web_application.war:/usr/local/tomcat/webapps/TestWebApplication.war -it -p 8080:8080 --network "host" -d tomcat
The actual output confirms that the container is running.
At this point I want to access the container through its IP address from my host, and I'm using docker inspect to identify the IP.
But, as the screenshot shows, I don't see any IP assigned.
Thus, my questions are:
Why didn't --network "host" assign an IP address shared with the host?
And finally, how can I access my web application from the host?
The --network="host" option isn't supported on Docker for Windows (more information: https://docs.docker.com/network/host/).
You can access your application at localhost:8080 thanks to the -p 8080:8080 launch option.
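With host networking off the table, publishing the port is enough. A minimal sketch (the container name and WAR path are illustrative, assuming the official tomcat image):

```shell
# Publish container port 8080 on the host; no container IP needed.
docker run -d --name test-tomcat \
  -p 8080:8080 \
  -v "$PWD/test_web_application.war":/usr/local/tomcat/webapps/TestWebApplication.war \
  tomcat

# The application is then reachable from the host:
curl http://localhost:8080/TestWebApplication/
```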

DNS not working between two linked docker containers - getaddrinfo EAI_AGAIN error

I am attempting to set up a temporary environment where I can execute ATs (acceptance tests) against a web application. To achieve this I have 3 docker containers:
Container 1: Database (mongo_local)
docker build -t mongo_local ./test/AT/mongo
docker run --name mongo_local -d -p 27017:27017 mongo_local
Container 2 (Web application):
docker run --name mywebapp_local -d -p 4431:4431 --link mongo_local -v /applicationdata:/applicationdata mywebapp
Container 3 (Newman test runner):
docker build -t newman_runner ./test/AT/newman
docker run --name newman_runner --link mywebapp_local newman_runner
The web application can access the database successfully using the following connection string: mongodb://mongo_local:27017/mydb. Note that I am able to reference mongo_local by name; I don't have to specify an IP address for the mongo_local container.
The newman test runner runs postman tests against the web application, and all tests execute successfully when I specify the IP address of the mywebapp_local container, i.e. 10.0.0.4, in the URL; however, if I specify the name mywebapp_local in the URL it does not work.
Hence https://mywebapp_local/api/v1/method1 does not work but https://10.0.0.4/api/v1/method1 does work.
The error I'm getting is
getaddrinfo EAI_AGAIN mywebapp_local mywebapp_local:443 at request ...
I've tried using --add-host in the docker run command and this makes no difference. Is there anything obvious that I'm doing wrong?
As you have it set up, the newman_runner container doesn't --link mongo_local, and that's why it can't see it.
Docker has been discouraging explicit inter-container links for a while. If you create a Docker-internal network and attach each container to it,
docker network create testnet
docker run --net testnet ...
every container will be able to see all of the other containers on the same network by their --name, without an explicit --link.
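Applied to the three containers from the question, that setup looks roughly like this (names, ports, and images taken from the question):

```shell
# One shared user-defined network gives every container DNS entries
# for the other containers' --name values.
docker network create testnet

docker run --name mongo_local    --net testnet -d -p 27017:27017 mongo_local
docker run --name mywebapp_local --net testnet -d -p 4431:4431 \
  -v /applicationdata:/applicationdata mywebapp
docker run --name newman_runner  --net testnet newman_runner

# Inside newman_runner, https://mywebapp_local/... now resolves
# through Docker's built-in DNS.
```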

No such image or container error

I want to set up a Rancher server and a Rancher agent on the same server.
Here is what I have done to create the server:
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable
Then I opened my web browser on port 8080.
I chose a login/password and enabled access control.
Then I wanted to create a host (agent). The Rancher web interface tells me to type this command:
docker run -e CATTLE_AGENT_IP=x.x.x.x --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10 http://nsxxx.ovh.net:8080/v1/scripts/yyyy:1514678400000:zzzz
I have no error message, but I do not see any entry in the hosts section (in the Rancher web interface).
So I tried to execute a shell in the agent's docker container:
docker exec -ti xxxxx /bin/bash
I tried to manually run the run.sh script and here is what I see:
Error: No such image or container: nsxxx
I suppose this is because the docker containers cannot communicate with each other, but I have done exactly what is in the documentation...
Thanks for your help
For docker exec you need to replace the xxxxx string with the container ID or the name of the container. You can get both from the docker ps command.
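A short sketch of that lookup (the ID and name below are made up for illustration):

```shell
# List running containers; the first column is the ID, the last the name.
docker ps

# Either form works:
docker exec -ti 3f2a1b4c5d6e /bin/bash     # by container ID
docker exec -ti rancher-agent /bin/bash    # by container name
```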

How to see the logs of a docker container running on different ports

I am running a single docker container on two different ports using the command below:
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} -p ${EXTERNAL_PORT_NUMBER_SECOND}:${INTERNAL_PORT_NUMBER_SECOND} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -m 1024M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I can see the container is running fine.
My question is: how can I see the logs of this docker container?
Every time I do sudo docker logs database-service -f I can only see the log of the process running on port 9003.
How can I view the logs of the process running on port 9113?
You are getting all the logs that were written to stdout or stderr in the container.
This has nothing to do with the processes being exposed on different ports.
If two processes are running inside the container and both write their logs to the system console, then you will get both logs in the docker logs output for the container.
You can try the multitail utility to tail more than one log file in a docker exec command.
For that you have to install it in the container.
You can also bind-mount host paths over the container's service log paths and read the logs on the host:
docker run -v 'path_to_your_host_logs':'container_service_log_path'
For example:
docker run -v '/home/user/app/apache_access.log':'/var/log/apache_access.log'
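A related pattern (a sketch, not from the answers above) is the one the official nginx image uses: symlink each log file to the container's stdout/stderr so everything lands in docker logs:

```shell
# Inside the image (e.g. from the Dockerfile), point the log files
# at the container's stdout and stderr:
ln -sf /dev/stdout /var/log/apache_access.log
ln -sf /dev/stderr /var/log/apache_error.log

# Whatever the processes write to those files now appears in `docker logs`.
```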

Docker in Docker: Port Mapping

I have found a similar thread but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
Once in that container, I want to start another one with the GAE SDK devserver running.
Then I need to access the running app from the host system's browser.
When I start a container inside the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I start the app and can curl 0.0.0.0:8080 from inside both containers (when using another mapping such as 8080:8080, for example), but cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be reused when launching the second container.
I'm able to do that using the image jpetazzo/dind. As an example, here is a test I have done that worked:
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page at http://localhost:18080.
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to log attach).