Can't access Heapster's InfluxDB port 8083 - docker

I followed this guide to deploy my Kubernetes cluster and this guide to launch Heapster.
However, when I open Grafana's web UI, it always says "Dashboard init failed: Template variables could not be initialized: undefined". Moreover, I can't access InfluxDB via port 8083. Is there anything I missed?
I've tried several versions of Kubernetes. I can't deploy DNS with some of them, so for now I'm using 1.1.4, but I need to create the "kube-system" namespace manually. The Docker version is 1.7.1.
Edit:
I can curl ports 8083 and 8086 from inside the influxdb pod. However, I get connection refused if I do the same from the node running the container. This is my services status:

Related

I want to connect a Docker Superset container to an existing external MySQL database

I am trying to add an existing MySQL database as a source database to a docker container running Apache Superset. The MySQL database that I am trying to add is not running in a docker container. It's an existing MySQL database running on a Windows machine.
I've added mysqlclient==1.4.6 to requirements.txt. The error message seems to indicate that the driver is installed.
I've used mysql://user:password@127.0.0.1:3306/database_name and mysql://user:password@localhost:3306/database_name.
The error I get is:
"ERROR: Connection failed, please check your connection settings."
I am using the apache/incubator-superset image, version 0.36.0.
Are there any settings or config that need to be changed to be able to communicate with an external database from within a running Docker container?
So I figured it out. On Windows, run ipconfig (ifconfig on Linux/macOS) in a terminal/PowerShell and check what IP address the Docker Ethernet adapter is using (listed as WSL), let's say the IP is 172.x(x).x(x).x(x). Then configure the connection string with that IP: mysql://user:password@172.x(x).x(x).x(x):3306/database_name
Follow-up question if anybody knows: How can I connect my Docker container running apache/superset to another server/IP address on my local network running a MySQL server? In other words, I want to connect the apache/superset app running in a Docker container on my computer to another computer on my local network that is running a MySQL server. The MySQL server is not in a Docker container.
Maybe the steps in this blog can help.
If your MySQL is in another container, it is not reachable at 127.0.0.1. Also, if you don't want requirements.txt to be modified every time you pull a new image, it is better to use requirements-local.txt.
You should be able to do that, but your MySQL server has to have an external IP that you can reach from your Superset machine. First try a telnet to port 3306 on that machine; if you can connect, Superset should work with a URI very similar to the one you already have.
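As a related note, when the MySQL server lives on the Docker host itself, hard-coding the WSL adapter IP can be avoided: Docker Desktop resolves host.docker.internal out of the box, and Docker Engine 20.10+ on Linux can map the same name explicitly via host-gateway. A rough docker-compose sketch under those assumptions (service name and image tag are placeholders, not the asker's exact setup):

version: "3.8"
services:
  superset:
    image: apache/superset            # placeholder; the question used apache/incubator-superset 0.36.0
    ports:
      - "8088:8088"
    extra_hosts:
      # Docker Engine 20.10+: 'host-gateway' resolves to the host's address, so the data source
      # URI becomes mysql://user:password@host.docker.internal:3306/database_name
      - "host.docker.internal:host-gateway"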

Concourse Can't Connect to Docker Repository

I'm new to Concourse and trying to set it up in my environment. I'm running Ubuntu 18.04 on VirtualBox 6.1.4 r136177 on a Windows machine. I managed to get the node running and the Concourse worker set up, and I was able to access my Concourse dashboard successfully. The problem occurred when I tried to run the simple hello world pipeline outlined on this page: https://concourse-ci.org/hello-world-example.html
The error says:
ERRO[0004] check failed: get remote image: Get https://index.docker.io/v2/: dial tcp: lookup index.docker.io on [::1]:53: read udp [::1]:55989->[::1]:53: read: connection refused
Googling for similar errors suggested that VirtualBox might not be able to connect to the Docker repository. So I installed Docker on the system and ran the following command:
sudo docker run hello-world
But this time Docker successfully pulled the image, so I think it is not an issue with my VirtualBox. Has anyone experienced the same issue and found a solution?
UPDATES
The following question inspired me to build my own registry:
How to use a local docker image as resource in concourse-docker
I have configured my local Docker registry and verified that it works by pulling my image from it. So I configured a simple Concourse pipeline to use my registry by modifying the hello world example:
---
jobs:
- name: job
  public: true
  plan:
  - task: simple-task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: 127.0.0.1:5000/busybox
          tag: latest
          insecure_registries: [ "127.0.0.1:5000" ]
      run:
        path: echo
        args: ["Hello, world!"]
But then I ran into the following error:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
* ping http: Get http://127.0.0.1:5000/v2: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
That 127.0.0.1 is most likely referring to the IP of the check container, not the machine where Concourse is running as a worker (unless you have houdini as the container strategy). Try getting the actual IP of the machine running Docker and use that instead.
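For example, the image_resource from the pipeline above could point at the host's LAN address instead of 127.0.0.1 (192.168.1.50 here is a hypothetical placeholder for the registry host's real IP):

image_resource:
  type: docker-image
  source:
    repository: 192.168.1.50:5000/busybox            # placeholder host IP instead of 127.0.0.1
    tag: latest
    insecure_registries: [ "192.168.1.50:5000" ]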
I faced the same problem. In my case, the Concourse worker was installed on a QEMU VM inside Proxmox.
When starting a job with the fly -t tutorials trigger-job --job hello-world/hello-world-job --watch command (given in the tutorial), the worker answered ERRO[0030] checking origin busybox failed: initialize transport: Get "https://index.docker.io/v2/": dial tcp xx.xx.xx.xx:443: i/o timeout.
This means the worker can't reach any DNS server.
There are two ways to solve this problem.
First option: run everything through docker-compose. The docker-compose.yml has a setting for the worker, CONCOURSE_GARDEN_DNS_PROXY_ENABLE: "true", and with it everything works fine. However, when I specified the same setting while running the worker directly inside the VM (without Docker), it did not fix the problem.
Second option (without docker):
Use these settings for your worker:
CONCOURSE_RUNTIME=containerd
CONCOURSE_CONTAINERD_EXTERNAL_IP=192.168.1.106
CONCOURSE_CONTAINERD_DNS_SERVER=192.168.1.1
CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS=true
CONCOURSE_CONTAINERD_DNS_PROXY_ENABLE=true
After setting these parameters my worker could see the DNS server and access the Docker registry.
Replace 192.168.1.106 with your machine's address on your local network, and 192.168.1.1 with your DNS server.
These parameters are documented here. You can also get their descriptions with the concourse worker --help command.
Containerd Container Networking:
--containerd-external-ip= IP address to use to reach container's mapped ports. Autodetected if not specified. [$CONCOURSE_CONTAINERD_EXTERNAL_IP]
--containerd-dns-server= DNS server IP address to use instead of automatically determined servers. Can be specified multiple times. [$CONCOURSE_CONTAINERD_DNS_SERVER]
--containerd-restricted-network= Network ranges to which traffic from containers will be restricted. Can be specified multiple times. [$CONCOURSE_CONTAINERD_RESTRICTED_NETWORK]
--containerd-network-pool= Network range to use for dynamically allocated container subnets. (default: 10.80.0.0/16) [$CONCOURSE_CONTAINERD_NETWORK_POOL]
--containerd-mtu= MTU size for container network interfaces. Defaults to the MTU of the interface used for outbound access by the host. [$CONCOURSE_CONTAINERD_MTU]
--containerd-allow-host-access Allow containers to reach the host's network. This is turned off by default. [$CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS]
I had the same issue. I cloned this repo - https://github.com/concourse/concourse-docker - followed the directions in the readme to generate the keys, and then used the docker-compose.yml file from the clone to spin up the Docker containers.
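For reference, the worker service in such a docker-compose.yml looks roughly like the sketch below; only CONCOURSE_GARDEN_DNS_PROXY_ENABLE comes from the discussion above, the other values are illustrative placeholders:

  worker:
    image: concourse/concourse
    command: worker
    privileged: true
    depends_on: [web]
    environment:
      CONCOURSE_TSA_HOST: web:2222                  # placeholder; must point at the web node
      CONCOURSE_GARDEN_DNS_PROXY_ENABLE: "true"     # the DNS proxy setting mentioned above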

Link between docker container and Minikube

Is it possible to link a Docker container with a service running in minikube? I have a MySQL container which I want to access using a phpMyAdmin (PMA) pod in minikube. I have tried adding PMA_HOST in the yaml file while creating the pod, but I get an error on the PMA GUI page:
mysqli_real_connect(): (HY000/2002): Connection refused
If I understand you correctly, you want to access a service (MySQL) running outside the kube cluster (minikube) from inside that cluster.
You have two ways to achieve this:
Make sure your networking is configured in a way that allows traffic to pass both ways correctly. Then you should be able to access that MySQL service directly by its address, or by creating an external service inside the kube cluster (create a Service with no selector and manually configure external Endpoints; see the sketch just below).
Use something like telepresence.io to expose a locally developed service inside a remote Kubernetes cluster.
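A minimal sketch of the first option (a Service without a selector plus manually configured Endpoints), assuming the external MySQL server is reachable at 192.168.1.20, which is a placeholder address:

apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql          # must match the Service name
subsets:
- addresses:
  - ip: 192.168.1.20            # placeholder: address of the MySQL host outside the cluster
  ports:
  - port: 3306

PMA_HOST in the phpMyAdmin pod can then simply be set to external-mysql.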

Connecting to Docker container connection refused - but container is running

I am running 2 Spring Boot applications: a client and a REST API. The client communicates with the REST API, which communicates with a MongoDB database. All 3 tiers run inside Docker containers.
I launch the containers normally, specifying the exposed ports in the Dockerfile and mapping them to a port on the host machine, such as -p 7070:7070, where 7070 is a port exposed in the Dockerfile.
When I run the applications through the java -jar [application_name.war] command, they work fine and can all communicate.
However, when I run the applications in Docker containers I get a connection refused error, for example when the client tries to connect to the REST API at http://localhost:7070.
But the command docker ps shows that the containers are all running and listening on the exposed and mapped ports.
I have no clue why the containers aren't recognizing that the other containers are running and listening on their ports.
Does this have anything to do with iptables?
Any help is appreciated.
Thanks
EDIT 1: The applications, when run inside containers, work fine on my machine and don't throw any connection refused errors. The error only happens on that particular other machine.
I used container linking to solve this problem. Make sure you add --link <name>:<alias> at run time to the container you want linked. <name> is the name of the container you want to link to and <alias> will be the host/domain used in an entry in Spring's application.properties file.
Example: if the alias supplied at run time is 'mongodb', i.e.
--link myContainerName:mongodb
then application.properties can contain spring.data.mongodb.host=mongodb
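The same linking can also be sketched in a docker-compose file; image names and ports below are placeholders rather than the asker's actual setup:

version: "2"
services:
  mongodb:
    image: mongo                 # placeholder image
  rest-api:
    image: my-rest-api           # placeholder image name
    ports:
      - "7070:7070"
    links:
      - mongodb                  # 'mongodb' is the hostname used in spring.data.mongodb.host

On a user-defined compose network the service name already resolves as a hostname, so the links entry mainly documents the dependency.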

I can not access my Container Docker Image by HTTP

I created an image with apache2 running locally in a Docker container via a Dockerfile exposing port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two instances, the master and node1.
Then I created a Pod, specifying the name of my image on Docker Hub and configuring the ports: containerPort 6379 and hostPort 80.
I accessed node1 via SSH and ran $ sudo docker ps -l, and found that my Docker container is indeed there.
I created a service for the instance, configuring the ports as in the Pod: containerPort 6379 and hostPort 80.
I checked that the firewall allows access to port 80. Even though I didn't deem it necessary, I also created a rule to allow access through port 6379.
But when I go to http://IP_ADDRESS:PORT, it is not available.
Any idea about what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
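A minimal sketch of such a service definition, assuming the pod carries the label app: apache (a placeholder; it has to match the actual pod's labels) and listens on container port 80:

apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  type: LoadBalancer            # asks the cloud provider for an external load balancer
  selector:
    app: apache                 # placeholder: must match the pod's labels
  ports:
  - port: 80                    # port exposed by the load balancer
    targetPort: 80              # containerPort that apache listens on

kubectl get svc then shows the external IP assigned by the load balancer, which is the address to open over HTTP.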
