How to check whether the Prometheus exporter agent exports metrics on a remote machine - Docker

I am trying to export metrics of an application using the JMX exporter. Basically, I added the Java agent to the JVM parameters so it runs as an agent, and configured it to expose localhost:5555. Finally, I created a container with Docker.
So the application runs on a remote machine. If it were running locally, I could check localhost:5555/metrics and see whether the metrics are exported. But in my case the app runs in a container on a remote machine. So how can I check whether metrics are exported or not? (Prometheus has not been configured yet, so I cannot check there.)

As long as the container publishes port 5555 to a port on its host (let's assume the same port, 5555, i.e. it's running via something of the form docker run ... --publish=5555:5555 ...), and you can access the host machine, you can curl (or browse) the endpoint:
REMOTE_HOST=...
curl http://${REMOTE_HOST}:5555/metrics
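
If the port is not published, or a firewall on the remote machine blocks it, you can still verify the exporter from the host itself. A minimal sketch, assuming SSH access to the host and that curl exists inside the container image (the container name my-app is a placeholder):

# check the published port from the Docker host itself
ssh user@${REMOTE_HOST} "curl -s http://localhost:5555/metrics | head"

# or check from inside the container, bypassing port publishing entirely
ssh user@${REMOTE_HOST} "docker exec my-app curl -s http://localhost:5555/metrics | head"

If the second command returns metrics but the first does not, the agent is exporting fine and the problem is the port mapping (or the agent binding only to localhost inside the container, which a published port cannot reach).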

Related

AWS ToolKit docker container not resolving internal service URIs

I am running an AWS Lambda locally via the AWS Toolkit. The function, through a long dependency chain, calls an internal service endpoint that throws a ConnectionTimeoutException. That endpoint works when called locally.
The Toolkit spins up a container to run the Lambda in, using the bridge Docker network on my local machine. My local machine is also running a proxy client in another container, and using docker network inspect bridge from my local terminal, I can see both the proxy and Toolkit containers are registered on the bridge network. When I shell into the running Lambda container, my curl command to the internal service times out. That same command on my local machine succeeds.
Shouldn't the curl command work from within the Lambda container?
[Screenshot: local machine bridge network]
The call fails with a connection timeout:
failed: connect timed out; nested exception is org.apache.http.conn.ConnectTimeoutException: Connect to internal.service.uri:80
Our SQUID proxy does not support service discovery.
This means the container has to have environment vars set to the proxy IP:
export http_proxy=http://172.17.0.2:3128
export HTTP_PROXY=http://172.17.0.2:3128
export https_proxy=http://172.17.0.2:3128
export HTTPS_PROXY=http://172.17.0.2:3128
export NO_PROXY=localhost
then it works.
Next step is to figure out how to set those within the container via the AWS Toolkit.
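
I haven't confirmed this through the Toolkit itself, but since the Toolkit drives the local invoke through SAM CLI, one way to inject those variables into the Lambda container is an --env-vars file. A sketch, assuming the function's logical ID is MyFunction (a placeholder):

# env.json maps the function's logical ID to the environment variables it should receive
cat > env.json <<'EOF'
{
  "MyFunction": {
    "http_proxy": "http://172.17.0.2:3128",
    "https_proxy": "http://172.17.0.2:3128",
    "NO_PROXY": "localhost"
  }
}
EOF

# run the function locally with those variables set inside its container
sam local invoke MyFunction --env-vars env.json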

Deploy Docker services to a remote machine via ssh (using Docker Compose with DOCKER_HOST var)

I'm trying to deploy some Docker services from a compose file to a Vagrantbox. The Vagrantbox does not have a static IP. I'm using the DOCKER_HOST environment variable to set up the target engine.
This is the command I use: DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d. The BOX_IP and BOX_USER vars contain the correct IP address and username (obtained at runtime from the Vagrantbox).
I can connect and deploy services this way, but the SSH connection always asks whether I want to trust the machine. Since the VM gets a dynamic IP, my known_hosts file gets polluted with lines I only used once, which might cause trouble some time in the future in case the IP is taken again.
Assigning a static IP results in error messages stating that the machine does not match my known_hosts entry.
And setting StrictHostKeyChecking=no also is not an option because this opens the door for a lot of security issues.
So my question is: how can I deploy containers to a remote Vagrantbox without the mentioned issues? Ideally I could start a Docker container that handles the deployments. But I'm open to any other idea as well.
The reason why I don't just use a bash script while provisioning the VM is that this VM acts as a testing ground for a physical machine. The scripts I use are the same for the real machine. I test them regularly and automated inside a Vagrantbox.
UPDATE: I'm using Linux
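
One way to keep the known_hosts file clean without disabling host key checking is to refresh the entry for the box's IP each time the box is (re)created. A sketch, using the same BOX_IP and BOX_USER variables from the question; note that ssh-keyscan trusts whatever key the box presents at that moment, which is usually acceptable for a local Vagrantbox but not for arbitrary remote hosts:

# drop any stale key stored for this IP, then record the key the box currently presents
ssh-keygen -R "$BOX_IP"
ssh-keyscan -H "$BOX_IP" >> ~/.ssh/known_hosts

# deploy as before; the host is now known, so SSH does not prompt
DOCKER_HOST="ssh://$BOX_USER@$BOX_IP" docker-compose up -d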

How to make docker client communicate with more than one daemon

I am a newbie to Docker. While going through the Docker tutorial, I saw that the "Docker client can communicate with more than one daemon". What does that mean exactly?
By default, the Docker daemon listens on a Unix socket, /var/run/docker.sock. However, Docker can also be configured to listen on a TCP socket. In fact, it is often configured this way on Mac and Windows systems because Docker is actually running inside a virtual machine and the default Docker socket is not available on the host filesystem.
Because there are different ways of connecting to Docker, you must be able to configure the Docker client to connect to a Docker daemon at a specific location. You can do this using the DOCKER_HOST environment variable. You can point this at a network location:
export DOCKER_HOST=tcp://192.168.99.101:2376
Or at an alternate socket location:
export DOCKER_HOST=unix:///tmp/docker.sock
If you have Docker configured to listen for tcp connections, you can use the Docker client on a single machine to communicate with Docker on multiple hosts (but if you decide to do something like this, read through "Protect the Docker daemon socket").
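As a quick illustration (a sketch; the addresses are placeholders), you can also point a single client at different daemons per command with the -H flag instead of exporting DOCKER_HOST:

docker -H tcp://192.168.99.101:2376 ps      # containers on one remote daemon
docker -H tcp://192.168.99.102:2376 ps      # containers on another remote daemon
docker -H unix:///var/run/docker.sock ps    # back to the local daemon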
Per the Docker Documentation,
The Docker client can communicate with more than one daemon.
This means that the command-line utility docker can connect to different services that run in the background.
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
So for example, you could configure the daemon to run on a separate machine and connect to it from your workstation.
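For example (a sketch, assuming SSH access to the remote machine; names are placeholders), Docker contexts let you register several daemons and switch between them without juggling DOCKER_HOST:

docker context create remote-box --docker "host=ssh://user@remote-box"   # register the remote daemon
docker context use remote-box                                            # subsequent docker commands target it
docker ps                                                                # lists containers on the remote daemon
docker context use default                                               # switch back to the local daemon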

NiFi install using Docker - Can't access the web server

I'm new to both Docker and NiFi. I found a command that installs NiFi via Docker and ran it on a virtual machine I have in GCP, but I would like to access the container via its web server. docker ps shows the port mapping 0.0.0.0:8080->8080/tcp for the container.
What command do I need to execute to gain access to the tool via port 8080?
The container has already exposed port 8080 on the host, as evidenced by the output 0.0.0.0:8080->8080/tcp. You read that as {HOST_INTERFACE}:{HOST_PORT}->{CONTAINER_PORT}/{PROTOCOL}.
Navigate to http://SERVER_ADDRESS:8080/ (or maybe http://SERVER_ADDRESS:8080/nifi) using your web browser. You may need to modify the firewall rules applied to your VM to ensure that you can access that port from your local machine.
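If the VM is on GCP and port 8080 is not already open, a firewall rule along these lines is one option. A sketch, assuming the VM is on the default network; the rule name and source range are placeholders (restrict --source-ranges to your own IP rather than opening the port to everyone):

gcloud compute firewall-rules create allow-nifi-8080 \
    --allow=tcp:8080 \
    --source-ranges=YOUR_IP/32

# then check the endpoint from your local machine
curl -I http://SERVER_ADDRESS:8080/nifi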

Local Docker connection to Kubernetes Cluster

I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses in local DNS, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those entries so that all the traffic is routed to those addresses and can be processed by the cluster. I am using the link feature of the Docker container, but it isn't working.
Can I connect directly using the IP? How should I do this?
It makes no difference whether or not the client runs in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host or from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Maybe as in the question 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client
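
To sanity-check the connection itself before wiring it into the application (a sketch, assuming the image contains a shell and curl; the image name comes from the example above):

# from the host
curl -v http://123.123.123.12:31496/

# from inside a container on that host, with the same environment variable set
docker run --rm -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client \
    sh -c 'curl -v "$THE_SERVICE_URL"'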
