I am running an AWS Lambda locally via AWS Toolkit. The function, through a long dependency chain, calls an internal service endpoint, and that call fails with a ConnectionTimeoutException. The endpoint works when called directly from my local machine.
The Toolkit spins up a container to run the Lambda in, attached to the default bridge Docker network on my local machine. My local machine is also running a proxy client in another container, and using docker network inspect bridge from my local terminal, I can see both the proxy and Toolkit containers registered on the bridge network. When I shell into the running Lambda container, my curl command to the internal service times out; that same command on my local machine succeeds.
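For reference, this is roughly that inspect check; both containers should show up with their bridge IPs (the --format template is optional and just prints each attached container's name and address):
docker network inspect bridge --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{"\n"}}{{end}}'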
Shouldn't the curl command work from within the Lambda container?
The connection timeout exception:
failed: connect timed out; nested exception is org.apache.http.conn.ConnectTimeoutException: Connect to internal.service.uri:80
Our Squid proxy does not support service discovery, which means the container has to have proxy environment variables set to the proxy's bridge IP:
export http_proxy=http://172.17.0.2:3128
export HTTP_PROXY=http://172.17.0.2:3128
export https_proxy=http://172.17.0.2:3128
export HTTPS_PROXY=http://172.17.0.2:3128
export NO_PROXY=localhost
Then it works.
The next step is to figure out how to set those within the container via AWS Toolkit.
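As a sketch of that next step: the Toolkit drives SAM CLI under the hood, and sam local invoke accepts a JSON file of per-function environment overrides. The function logical ID MyFunction and the file name env.json below are placeholders, not values from the original project:
cat > env.json <<'EOF'
{
  "MyFunction": {
    "http_proxy": "http://172.17.0.2:3128",
    "https_proxy": "http://172.17.0.2:3128",
    "HTTP_PROXY": "http://172.17.0.2:3128",
    "HTTPS_PROXY": "http://172.17.0.2:3128",
    "NO_PROXY": "localhost"
  }
}
EOF
sam local invoke MyFunction --env-vars env.json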
I have a Docker container on EC2 attempting to connect to DocumentDB. DocumentDB needs to be within the VPC network.
When attempting to connect to DocumentDB in a non-host network mode, the connection fails, but when I (as a hack) run the container in host network mode, it does work. For simple deployments and for replicating my containers, though, that's a problem.
Any idea how to connect to DocumentDB (without SSH tunneling) from within Docker hosted on EC2?
If I understand correctly, you are running the container in none networking mode. none means you want to disable all networking for your container. The most frequently used modes are bridge and host.
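As a minimal sketch, assuming the EC2 host sits inside the same VPC as the cluster, a container on the default bridge network should reach the cluster endpoint through the host's VPC networking. The cluster endpoint, credentials, and CA bundle path below are placeholders:
docker run --rm -v "$PWD/rds-combined-ca-bundle.pem:/tmp/ca.pem:ro" mongo:4.0 \
  mongo --ssl --sslCAFile /tmp/ca.pem \
  --host docdb-cluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017 \
  --username myuser --password mypassword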
You can also refer to the post below, which describes how to run a Docker container in ECS and connect to DocumentDB securely:
https://aws.amazon.com/blogs/database/deploy-a-containerized-application-with-amazon-ecs-and-connect-to-amazon-documentdb-securely/
I have a running k3d Kubernetes cluster:
$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:6550
CoreDNS is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:6550/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
I have a Python script that uses the Kubernetes client API and manages namespaces, deployments, pods, etc. This works just fine in my local environment because I have all the necessary Python modules installed and have direct access to my local k8s cluster. My goal is to containerize it so that my colleagues can run this same script successfully on their systems.
While running the same python script in a docker container, I receive connection errors:
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.17.0.1', port=6550): Max retries exceeded with url: /api/v1/namespaces (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f8b637c5d68>: Failed to establish a new connection: [Errno 113] No route to host',))
172.17.0.1 is my docker0 bridge address, so I assumed it would resolve or forward traffic to my localhost. I have tried loading the k8s configuration from my local .kube/config, which references server: https://0.0.0.0:6550, and also creating a separate config file with server: https://172.17.0.1:6550; both give the same No route to host error (with the respective IP address in HTTPSConnectionPool(host=...)).
One idea I was pursuing was running a socat process outside the container and tunneling traffic from inside the container across a bridge socket mounted in from the outside, but it looks like the Docker image I need to use does not have socat installed. However, I get the feeling the real solution should be much simpler than all of this.
Certainly there have been other instances of a docker container needing access to a k8s cluster served outside of the docker network. How is this connection typically established?
Use the docker network command to create a user-defined network first.
You can then pass --network to k3d to attach the cluster to that existing Docker network, and pass the same flag to docker run to attach another container to it.
https://k3d.io/internals/networking/
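A minimal sketch of that approach (the network name, cluster name, and client image are placeholders, and the flag syntax may differ across k3d versions):
docker network create k3d-shared
k3d cluster create mycluster --network k3d-shared
# run the client container on the same network; inside it, point the kubeconfig
# at the k3d server container's name rather than 0.0.0.0
docker run --rm --network k3d-shared my-python-client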
I am trying to export metrics of an application by using the JMX exporter. Basically, I added the exporter to the JVM parameters to run as a Java agent and configured it to expose localhost:5555. Finally, I packaged this into a Docker container.
The application runs on a remote machine. If it were running locally, I could check localhost:5555/metrics and see whether metrics are exported, but in my case the app runs in a container on a remote machine. So how can I check whether metrics are being exported? (Prometheus has not been configured yet, so I cannot check there.)
As long as the container publishes 5555 to a port on its host (let's assume the same port 5555, i.e. it's running using something of the form docker run ... --publish=5555:5555 ...), and you can access the host machine, you can curl (or browse) the endpoint:
REMOTE_HOST=...
curl "http://${REMOTE_HOST}:5555/metrics"
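If the port isn't reachable externally (e.g. a firewall or security group blocks it), you can still check from the remote host itself, for example over SSH (user and host are placeholders):
ssh user@${REMOTE_HOST} 'curl -s http://localhost:5555/metrics | head'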
I have slightly modified this example: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo. I am running Envoy in a Docker container with exposed port 8080 (running this proxy server is required because the browser can't speak directly to a backend gRPC service). I am running all the services on my localhost (the host machine of the Envoy Docker container). However, I cannot seem to connect Envoy in the Docker container to the services running on the host machine.
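For reference, the Envoy container is started along these lines (a sketch; the image tag and config filename are assumptions, not taken from the example repo):
docker run -d --name envoy -p 8080:8080 \
  -v "$PWD/envoy.yaml:/etc/envoy/envoy.yaml:ro" \
  envoyproxy/envoy:v1.14.1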
I compiled grpc_cli in the container, and when I run grpc_cli ls 192.168.1.10:9000 (the host's LAN IP address and the port the service is running on), I get
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 192.168.1.10:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569023274.866465052","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569023274.866463178","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
I get an almost identical error when I use the IP address of the docker0 interface, which should also provide a connection to the host machine.
root@bdc9ac396a87:~/grpc# ./bins/opt/grpc_cli ls 172.17.0.1:9000
Received an error when querying services endpoint.
ServerReflectionInfo rpc failed. Error code: 14, message: failed to connect to all addresses, debug info: {"created":"@1569022455.801913949","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1569022455.801910006","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":395,"grpc_status":14}]}
However, with a simple HTTP server running on the host via
python -m http.server
the following commands work from the container just fine:
wget 172.17.0.1:8000/test.txt    # works
wget 192.168.1.10:8000/test.txt  # works
A client on the host (not in the container) connects and works just fine with the service, so it's not a server problem.
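One way to double-check that from the host is to confirm the gRPC service is listening on all interfaces rather than only on loopback; a loopback-only bind would produce exactly this split between host and container behavior (port 9000 as in the commands above):
ss -tlnp | grep 9000
# a 127.0.0.1:9000 listener is reachable only from the host itself;
# 0.0.0.0:9000 (or *:9000) is reachable from containers too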
Does Docker block certain types of traffic? I noticed in the example the server was placed in another Docker container, and it worked (it also worked locally for me), but I'd prefer to have my services running on my host machine while I build and test them. Is there a setting somewhere to enable gRPC from the container to a service on the host machine?
Docker version 1.13.1, build 47e2230/1.13.1
Fedora 29
I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses in local DNS, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those hosts so that all the traffic is routed to those addresses and can be processed by the cluster. I am using Docker's link feature, but it isn't working. Should I connect directly using the IPs? How should I do this?
There's no difference in doing this whether or not the client is in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host or from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe, as in the question, 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
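As a quick sanity check before wiring the variable into the container, you can hit the NodePort from the host first (node IP and port from the example above; the root path is a placeholder for whatever your service serves):
curl -v http://123.123.123.12:31496/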