I have a web application built with Elixir that uses a Postgres database in a docker container (https://hub.docker.com/_/postgres/).
I need to expose both the web interface (running on port 4000) and the database in the Docker container through ngrok.
I tried adding this to my ngrok configuration file:
tunnels:
  api:
    addr: 4000
    proto: http
  db:
    addr: 5432
    proto: tcp
Then in my Elixir config/dev.exs I add this under the database configuration:
...
hostname: "TCP_URL_GIVEN_BY_NGRROK"
When I attempt to start the application, it says failure to connect to the database.
The docker command that I used is:
docker run --name phoenix-pg -e POSTRGRES_PASSWORD=postgres -d postgres
What am I doing wrong?
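One thing worth double-checking (a sketch, not a confirmed fix, and it assumes ngrok runs on the Docker host rather than inside a container): the docker run command above never publishes port 5432, so there is nothing on the host for the db tunnel to forward to. Publishing the port would look like this (note the official image expects the variable to be spelled POSTGRES_PASSWORD):
# assumption: ngrok runs on the Docker host, so Postgres must be reachable there
docker run --name phoenix-pg -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres
Also, the ngrok TCP endpoint includes its own port (e.g. tcp://0.tcp.ngrok.io:12345), so in the Ecto config the host part goes into hostname: and the port into a separate port: option, rather than pasting the full URL into hostname:.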
Related
My Docker configuration needs to map ports for external access, but when trying to install the Data Hub Central WAR file, mlDeploy and mlRedeploy fail because the ports are unavailable:
Task :mlDeployApp
Creating custom rewriters for staging and job app servers
Loading REST options for staging server
Initializing ExecutorService
Loading default query options from file default.xml
Shutting down ExecutorService
Loading REST options for jobs server
Initializing ExecutorService
Loading traces query options from file traces.xml
Shutting down ExecutorService
Writing traces query options to MarkLogic; port: 8013
Error occurred while loading modules; host: localhost; port: 8013;
cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
...
What went wrong:
Execution failed for task ':mlDeployApp'.
Error occurred while loading REST modules: Error occurred while loading modules; host: localhost; port: 8013; cause: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:8013
Dockerfile contents
FROM store/marklogicdb/marklogic-server:10.0-7-dev-centos
WORKDIR /tmp
EXPOSE 7997-8040
EXPOSE 8080
EXPOSE 9000
CMD /etc/init.d/MarkLogic start && tail -f /dev/null
Original docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8040:7997-8040 -p 8080:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Revised docker run command:
docker run -d --name=marklogic10.0-7_local -p 7997-8012:7997-8012 -p 8014-8040:8014-8040 -p 8043:8013 -p 8090:8080 -p 9000:9000 marklogic-initial-install:10.0-7-dev-centos
Note: I originally had the same problem with port 8080, but mapping it to port 8090 fixed the problem. Doing the same for port 8013 did not work.
The problem was with the installation steps and not the ports.
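For completeness, when a published port really is the culprit, the actual bindings can be inspected directly. A quick sketch on a Linux host, using the container name from the commands above:
# list every port mapping Docker actually published for this container
docker port marklogic10.0-7_local
# check whether something else on the host is already listening on 8013
sudo ss -tlnp | grep 8013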
Based on this tutorial (https://www.youtube.com/watch?v=XrFeRwJjWHI), I tried running Redis in Docker.
File docker-compose.yml
version: "3.8"
services:
redis:
image: redis
volumes:
- ./data:/data
ports:
- 6379:6379
docker pull redis
docker-compose up
docker-compose up -d
docker container ls
telnet localhost 6379
In telnet, type PING and press Enter (you will not see the text as you type), then you should see the result: PONG. Type quit to exit.
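If telnet is not available, the same check can be done through the container itself. A small sketch, assuming Compose generated the default container name docker_redis_1 (as in the output below):
# run redis-cli inside the running Redis container and ping the server
docker exec -it docker_redis_1 redis-cli ping
# expected output: PONG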
Microsoft Windows [Version 10.0.19041.508]
(c) 2020 Microsoft Corporation. All rights reserved.
D:\docker>docker-compose stop redis
Stopping docker_redis_1 ... done
D:\docker>
See what is running
docker container ls
You will see that the Redis container was stopped.
docker image prune -a
docker-compose up
Docker RedisInsight
docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight:latest
Wait a few minutes (depending on your internet speed) for the image to download, unpack, install, and start.
Go to http://localhost:8001/ (the browser may open automatically). The RedisInsight health check at http://localhost:8001/healthcheck/ returns OK.
(I also took notes here: https://donhuvy.github.io/redis/docker/2020/10/10/run-redis-on-docker.html)
How do I connect RedisInsight to Redis without an error?
Update: This is my hosts file. It seems the problem may be here (the 127.0.0.1 entry for Kubernetes; I don't really know Kubernetes yet, I am still learning it), but I don't know how to fix it.
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
127.0.0.1 www.techsmith.com
127.0.0.1 activation.cloud.techsmith.com
127.0.0.1 oscount.techsmith.com
127.0.0.1 updater.techsmith.com
127.0.0.1 camtasiatudi.techsmith.com
127.0.0.1 tsccloud.cloudapp.net
127.0.0.1 assets.cloud.techsmith.com
# Added by Docker Desktop
192.168.1.44 host.docker.internal
192.168.1.44 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
Using this setup in docker-compose.yml
version: '3.7'
services:
  redis:
    image: 'redis:6.0.6'
    ports:
      - '127.0.0.1:6379:6379/tcp'
    volumes:
      - 'redis_data:/data:rw'
    healthcheck:
      test: redis-cli ping
      interval: 3s
      timeout: 5s
      retries: 5
  redisinsight:
    image: 'redislabs/redisinsight:latest'
    ports:
      - '127.0.0.1:8001:8001'
volumes:
  # declare the named volume referenced by the redis service
  redis_data:
you can access redis from the redisinsight container via the hostname redis on port 6379 (Compose puts both services on the same network, where service names resolve as hostnames).
RedisInsight is trying to connect to the container's own localhost. Try typing 127.0.0.1 into the Host field.
If the hosts file has been changed as shown in the updated question, use 192.168.1.44.
For your containers to reach each other, you should first connect them to the same network.
docker network create redis
docker network connect redis elastic_diffie
docker network connect redis docker_redis_1
After that, open the RedisInsight UI, enter docker_redis_1 in the Host field, and keep the port the same. You should be able to connect to your Redis container.
Since you haven't specified a network for the containers, they are connected to the default bridge network.
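To verify that both containers actually joined that network, something along these lines can be used (container names as above):
# list the containers attached to the user-defined "redis" network
docker network inspect -f "{{range .Containers}}{{.Name}} {{end}}" redis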
To get container IP address
Type in your terminal
# Check container network IP address
docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" <container name>
Use that IP address as the host in the RedisInsight UI at http://localhost:8001/.
or
(Not recommended) Entering the IP address of your HOST machine will also solve the problem.
Use ipconfig or ifconfig, depending on your OS, to get your IP.
Put the IP reported by ipconfig instead of localhost.
I'm currently running a Spring Boot application in a Docker container on an EC2 instance. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
    networks:
      default:
The docker container deploys and comes up as healthy with the healthcheck command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
If I do a docker ps -a, I can see the following for the ports:
0.0.0.0:8893->8893
My ALB health check, however, is returning a 502, so I've temporarily allowed connections from my IP directly to the EC2 instance in the security group. The rules are:
Allow Ingress on 8893 from my Alb security group
Allow Ingress on 8893 from my IP
Allow Egress to anywhere (0.0.0.0)
When I try to hit the health check endpoint of my app on port 8893, using the public DNS of the EC2 instance from Postman, I get Error: connect ECONNREFUSED.
If I take my docker container down and then simulate a webserver using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body which indicates it's not a problem with the security groups.
The actuator endpoint for Spring Boot is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 with status UP.
Any suggestions for what I might be missing here or how I could debug this further? It seems like docker isn't picking up connections to the port for some reason.
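A few host-side checks that might narrow this down (a sketch; note that the compose file above publishes host port 8893 to container port 8839, while the health check targets 8893 inside the container, so the mapping itself is worth confirming):
# on the EC2 host: what did Docker actually publish for the app container?
docker port $(docker ps -qf "name=my-app")
# does the app answer locally on the host, bypassing the ALB and security groups?
curl -v http://localhost:8893/my-app/v1/actuator/health
# is anything on the host listening on 8893 at all?
sudo ss -tlnp | grep 8893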
Following the tutorial on https://docs.docker.com/get-started/part2/.
I start my docker container with docker run -p 4000:80 friendlyhello
and see
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8088/ (Press CTRL+C to quit)
But it's inaccessible from the expected path of localhost:4000.
$ curl http://localhost:4000/
curl: (7) Failed to connect to localhost port 4000: Connection refused
$ curl http://127.0.0.1:4000/
curl: (7) Failed to connect to 127.0.0.1 port 4000: Connection refused
Okay, so maybe it's not on my localhost. Getting the container ID, I retrieve the IP with
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 7e5bace5f69c
and it returns 172.17.0.2 but no luck! curl continues to give the same responses. I can confirm something is running on 4000....
lsof -i :4000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 94812 travis 18u IPv6 0x7516cbae76f408b5 0t0 TCP *:terabase (LISTEN)
I'm pulling my hair out on this. I've read through the troubleshooting guide and can confirm:
* I'm not on a proxy
* I don't use a custom DNS
* I'm having issues connecting to Docker, not Docker connecting to my pip server.
Running app.py with python app.py, the server starts and I'm able to hit it. What am I missing?
Did you accidentally put port=8088 at the bottom of your app.py file? When you run this, the last line of your output says that your Python app is exposed on port 8088, not 80.
To confirm, you can either modify app.py and rebuild the image, or alternatively run docker run -p 4000:8088 friendlyhello, which would map your local port 4000 to 8088 in the container.
Try to run it using:
docker run -p 4000:8088 friendlyhello
As you can see from the logs, your app starts on port 8088, but you are mapping port 4000 to port 80, where nothing is actually listening.
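Either way, the result can be confirmed from the host afterwards; a quick sketch:
# show the port mappings Docker actually created for the container
docker port $(docker ps -qf "ancestor=friendlyhello")
# hit the app through the published port
curl http://localhost:4000/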
I have one eureka server.
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
I have one eureka client.
spring:
  application:
    name: mysearch
server:
  port: 8020
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
  instance:
    preferIpAddress: true
My eureka client is running in a docker container.
FROM java:8
COPY ./mysearch.jar /var/tmp/app.jar
EXPOSE 8180
CMD ["java","-jar","/var/tmp/app.jar"]
I am starting the Eureka server with java -jar eureka-server.jar.
After that I am starting the docker instance of the eureka client using
sudo docker build -t web . and sudo docker run -p 8180:8020 -it web.
I am able to access both the Eureka client and server from the browser, but the client is not connecting to the Eureka server: I am not able to see the client in the Eureka server dashboard. I am getting the errors and warnings below.
WARN 1 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
ERROR 1 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - was unable to send heartbeat!
INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020: registering service...
ERROR 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error
WARN 1 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
WARN 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - registration failed Cannot execute request on any known server
WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
I am doing this on an AWS EC2 Ubuntu instance.
Can anyone please tell me what I am doing wrong here?
server:
  ports:
    - "8761:8761"
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
With the above changes, port 8761 will be exposed on the host and the client can connect to the server,
since you are connecting using localhost ("http://localhost:8761/eureka"), which looks for port 8761 on the host.
In the Eureka client config, use the host IP instead of localhost, because if localhost is used, the client searches for port 8761 inside its own container:
http://hostip:8761/eureka
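A sketch of what that can look like when starting the client container. It assumes an Ubuntu host where hostname -I is available, and relies on Spring Boot's relaxed binding to override eureka.client.serviceUrl.defaultZone through an environment variable:
# grab the EC2 host's private IP (assumption: first address reported by hostname -I)
HOST_IP=$(hostname -I | awk '{print $1}')
# point the containerized client at the Eureka server running on the host
sudo docker run -p 8180:8020 -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE="http://${HOST_IP}:8761/eureka" -it web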
Make sure you are running in Swarm mode (a single node can also run Swarm).
$ docker swarm init
Create an overlay network so the services can reach each other.
$ docker network create -d overlay mybridge
Set application.properties for the Eureka client as below:
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Now create the first service, discovery (the Eureka discovery server):
$ docker service create -d --name discovery --network mybridge \
--replicas 1 -p 8761:8761 server-discovery
Open your browser and hit any node on port 8761.
Now create client service:
$ docker service create -d --name goodbyeapp --network mybridge \
--replicas 1 -p 2222:2222 goodbye-service
This will register to the discovery service.
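To check that both services came up and that the client registered, commands along these lines can help:
# confirm both services are running with the expected replica counts
docker service ls
# tail the discovery server's logs and watch the client register
docker service logs -f discovery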
In the container world, the Eureka server's IP address can change each time the Eureka server is restarted. Hence, specifying the host IP address in the Eureka server URL doesn't always work.
In docker-compose.yml, I had to link the Eureka client service to the Eureka server container. Until I linked the services, the Eureka client couldn't connect to the server.
This is already answered in another post recently: Applications not registering to eureka when using docker-compose
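As a rough sketch of that docker-compose.yml approach (the service and image names here are illustrative, not taken from the original post): both services share the Compose network, and the client reaches the server by its service name instead of localhost or a host IP.
version: '3.8'
services:
  eureka-server:
    image: eureka-server:latest   # illustrative image name
    ports:
      - "8761:8761"
  mysearch:
    image: web:latest             # the client image built above
    ports:
      - "8180:8020"
    environment:
      # override localhost with the server's service name on the Compose network
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eureka-server:8761/eureka
    depends_on:
      - eureka-server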