I'm setting up a Concourse Docker container by following https://concoursetutorial.com/ , but on GCP Compute Engine. The tutorial says to access the UI at http://127.0.0.1:8080/ in your browser.
Since I am running on GCP, I tried the instance's external IP with :8080, but I am getting "This site can’t be reached".
Note: I have enabled port 8080 in GCP and also on the Compute Engine instance.
You need to create a firewall rule on GCP that allows port 8080 and add the instance's network tag as the rule's target.
See detailed instructions here:
https://cloud.google.com/vpc/docs/using-firewalls#creating_firewall_rules
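For example, a rule along these lines would open port 8080 to instances carrying a given network tag (the rule name and tag below are assumptions):

    # allow inbound TCP 8080 to instances tagged "concourse"
    gcloud compute firewall-rules create allow-concourse-8080 \
        --network=default \
        --allow=tcp:8080 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=concourse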
I'm new to both Docker and NiFi. I found a command that installs NiFi via Docker and used it on a virtual machine that I have in GCP, but I would like to access this container via a web browser. In docker ps this appears to me:
What command do I need to execute to gain access to the tool via port 8080?
The container has already exposed port 8080 on the host, as evidenced by the output 0.0.0.0:8080->8080/tcp. You read that as {HOST_INTERFACE}:{HOST_PORT}->{CONTAINER_PORT}/{PROTOCOL}.
Navigate to http://SERVER_ADDRESS:8080/ (or maybe http://SERVER_ADDRESS:8080/nifi) using your web browser. You may need to modify the firewall rules applied to your VM to ensure that you can access that port from your local machine.
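For reference, this is what publishing that port looks like when starting the container; a minimal sketch assuming the apache/nifi image with its HTTP UI on port 8080:

    # map host port 8080 to container port 8080 on all host interfaces
    docker run -d --name nifi -p 8080:8080 apache/nifi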
Need some help with a Rancher / Docker issue encountered while trying to set up Logstash to parse our IIS logs.
There’s a container with one service, Logstash (not a web service but a system service, part of the ELK stack), that we want to use to ingest files from the configured input(s) and parse them into fields before sending them to the configured output(s) – in this case, Elasticsearch.
We need to have the service accessible from an outside system (namely our web server which is going to send the IIS logs over for processing).
The problem is that we can’t get the endpoint configured.
There is a load balancer host running on Rancher with two open ports that are supposed to channel all requests to the inner service containers via path name and target, but we can’t get a path configured to the Logstash service.
I have been digging into the logstash configs and there is a setting for node.name in the logstash.conf file but … I haven’t managed to do anything with it yet.
Hoping someone who is more familiar with this stuff can offer some insight.
Basically I can get the Logstash service on Rancher to connect to the AWS Elasticsearch but I cannot get our web box (with the IIS logs) to connect with the Logstash service on its input port.
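For context, a Logstash pipeline of the shape described above might look roughly like this (the tcp input, its port, and the Elasticsearch host are assumptions):

    input {
      # the web server ships IIS log lines to this port
      tcp {
        port => 5044
      }
    }
    filter {
      # parse IIS log lines into fields here, e.g. with grok (pattern omitted)
    }
    output {
      elasticsearch {
        hosts => ["https://our-elasticsearch-endpoint:443"]
      }
    }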
The solution was not to use the standard image but to customize it. The steps involved (a sketch of the Dockerfile and the build/run commands follows this list):
- Create a local repo with the folder structure that we need to emulate. Only the folders we are going to replace are needed.
- Add a Dockerfile which will be used to build the customized image.
- In the Dockerfile, reference the ready-made / standard image as the base in the first line (the FROM instruction).
- In a Dockerfile RUN command, remove the directories and files that need to be customized. In this case it was the logstash/pipeline directory and the logstash/config directory.
- Use ADD commands to put our customized versions in place of the removed directories.
- Use the EXPOSE command to expose the port the service is listening on.
- Build the image with docker build, then run the container with docker run and the -p flag to publish the ports we want open, mapping them to ports on the host.
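A minimal sketch of this approach, assuming the official Logstash image and an input listening on port 5044 (the image tag, local folder names, and port are assumptions):

    # Dockerfile
    FROM docker.elastic.co/logstash/logstash:7.17.0

    # remove the stock pipeline and config directories
    RUN rm -rf /usr/share/logstash/pipeline /usr/share/logstash/config

    # add our customized versions from the local repo
    ADD config/ /usr/share/logstash/config/
    ADD pipeline/ /usr/share/logstash/pipeline/

    # port the Logstash input listens on
    EXPOSE 5044

Then build and run it, publishing the input port on the host:

    docker build -t custom-logstash .
    docker run -d -p 5044:5044 custom-logstash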
I just started yesterday and am following tutorials for using GCP.
I have a Cassandra docker container running in google compute engine. I would like to connect to the Cassandra docker container from my local machine and load data into it.
I tried using the IP address of the compute instance and the Cassandra port, but the Java program which loads data into Cassandra throws a NoHostAvailableException.
I appreciate your time.
From my understanding, unless you publish the Docker container's port and expose it publicly, you cannot reach the container from outside at all. This is where the concept of services comes into play in cloud architectures: to expose containers publicly. Detailed instructions are given in the "Configuring endpoints" and following sections of this article: https://cloud.google.com/endpoints/docs/openapi/get-started-compute-engine-docker
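As a rough sketch, publishing Cassandra's CQL port (9042) when starting the container and opening it in a GCP firewall rule would look something like this (the image tag and the rule/tag names are assumptions):

    # publish the CQL native transport port on the host
    docker run -d --name cassandra -p 9042:9042 cassandra:3.11

    # allow inbound TCP 9042 to instances tagged "cassandra"
    gcloud compute firewall-rules create allow-cassandra-9042 \
        --allow=tcp:9042 \
        --target-tags=cassandra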
I'm unable to run a health check other than process on a docker image deployed to Pivotal Cloud Foundry.
I can deploy fine with health-check-type=process, but that isn't terribly useful. Once the container is up and running I can access the health check HTTP endpoint at /nuxeo/runningstatus, but PCF doesn't seem to be able to check that endpoint, presumably because I'm deploying a pre-built Docker container rather than an app via source or jar.
I've modified the timeout to be something way longer than it needs to be, so that isn't the problem. Is there some other way of monitoring Docker containers deployed to PCF?
The problem was that the Docker container exposed two ports: one on which the health check endpoint was accessible and another that could be used for debugging. PCF always chose to run the health check against the debug port.
There is no way to tell PCF which port to run the health check against. It chooses among the exposed ports and, for a reason I don't know, always chose the one intended for debugging.
I tried reordering the ports in the Dockerfile, but that had no effect. Ultimately I just removed the debug port from being exposed in the Dockerfile and things worked as expected.
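For illustration, the fix amounts to something like this in the Dockerfile (the actual port numbers here are assumptions):

    # Before: two ports exposed; PCF ran its health check against the debug port
    # EXPOSE 8080 8787

    # After: expose only the application port that serves /nuxeo/runningstatus
    EXPOSE 8080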
I followed this guide to deploy my Kubernetes cluster and this guide to launch Heapster.
However, when I open Grafana's web UI, it always says "Dashboard init failed: Template variables could not be initialized: undefined". Moreover, I can't access InfluxDB via port 8083. Is there anything I missed?
I've tried several versions of Kubernetes. I can't deploy DNS with some of them, so for now I'm using 1.1.4, but I need to create the "kube-system" namespace manually. The Docker version is 1.7.1.
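For reference, creating that namespace manually can be done with a small manifest (a minimal sketch):

    # kube-system.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kube-system

Then create it with:

    kubectl create -f kube-system.yaml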
Edit:
I can curl ports 8083 and 8086 inside the influxdb pod. However, I get connection refused if I do that on the node running the container. This is my services status: