Docker container can't reach another container using container name - docker

I have two Docker containers running in the same network, and I want one of them to call the other via Spring WebClient.
I'm sure they are all in the same network: docker network inspect <network_ID> confirms this.
AFAIK, I can ping one container from another to check whether they can talk to each other: docker exec -ti attachment-loader-prim ping attachment-loader-sec
When I run this, I see responses from attachment-loader-sec like 64 bytes from 172.21.0.5: seq=0 ttl=64 time=0.220 ms, which means they can communicate.
When I send a Postman request to attachment-loader-prim on its published port (localhost:8085), I expect that after some business logic it will call attachment-loader-sec via WebClient, but at that step I get a 500 error with this message:
"finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80; nested exception is
io.netty.channel.AbstractChannel$AnnotatedConnectException:
finishConnect(..) failed: Connection refused:
attachment-loader-sec/172.21.0.5:80"
Both attachment-loader-prim and attachment-loader-sec can be reached individually via Postman, and both respond without problems.
This is my docker-compose:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader-prim
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8085
    networks:
      - loader_network
    expose:
      - 8085
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SERVER_PORT: 8086
    networks:
      - loader_network
    expose:
      - 8086
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader_network:
    driver: bridge
And this is the WebClient code that makes the call:
import org.springframework.http.MediaType
import org.springframework.web.reactive.function.client.ClientResponse
import org.springframework.web.reactive.function.client.WebClient
import org.springframework.web.reactive.function.client.awaitExchange
import org.springframework.web.reactive.function.client.createExceptionAndAwait

class RemoteServiceCaller(private val fetcherWebClientBuilder: WebClient.Builder) {

    suspend fun getAttachmentsFromRemote(id: String, params: List<Param>, username: String): Result? {
        val client = fetcherWebClientBuilder.build()
        val awaitExchange = client.post()
            .uri("/{id}/attachment", id)
            .contentType(MediaType.APPLICATION_JSON)
            .bodyValue(params)
            .header(usernameHeader, username)
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .awaitExchange {
                if (it.statusCode().is2xxSuccessful) {
                    handleSuccessCode(it)
                } else it.createExceptionAndAwait().run {
                    LOG.error(this.responseBodyAsString, this)
                    throw ProcessingException(this)
                }
            }
        return awaitExchange
    }

    private suspend fun handleSuccessCode(response: ClientResponse) {
        // some not important logic
    }
}
P.S. The base URI for the WebClient is defined as a config bean, e.g. http://attachment-loader-sec/list.
All my investigation pointed to the usual causes:
Calling a container using localhost instead of the container name.
Containers not being in the same network.
Neither seems relevant in my case.
Any ideas would be really appreciated.

The problem was that I was calling the service without its port. Without an explicit port, WebClient defaults to port 80 for http, which is exactly what the error shows (attachment-loader-sec/172.21.0.5:80). The URL is now http://attachment-loader-sec:8086/list, which is correct. I now get a 404, which means my URL path is not quite right, but that is outside the scope of this question.

Related

How can a Docker HTTP client container send an HTTP request on start

I have a docker-compose file which defines an HTTP server and client as follows
version: '3'
services:
  serverA:
    container_name: serverA
    image: xxx/apiserver
    restart: unless-stopped
    ports:
      - '9090:9090'
    networks:
      - apinet
  clientA:
    container_name: clientA
    image: xxx/apiclient
    restart: unless-stopped
    ports:
      - '20005:20005'
    depends_on:
      - serverA
    networks:
      - apinet
networks:
  apinet:
    driver: bridge
The server image contains a simple Go HTTP server ready to handle requests:
func main() {
    http.HandleFunc("/itemhandler", itemhandler)
    http.ListenAndServe(":9090", nil)
}

func itemhandler(w http.ResponseWriter, req *http.Request) {
    // handle http request
}
The client image contains a function SendhttpPOSTRequest, which uses an HTTP client to send a request to the server and retries until it succeeds. When run as a container, the client image contains a configuration file, config.toml, holding the IP address and port of the server.
func main() {
    SendhttpPOSTRequest() // sends http post request with retry
}
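SendhttpPOSTRequest itself is not shown in the question; here is a minimal sketch of what such a retry client might look like (the service name serverA, port 9090, the /itemhandler path, the payload and the retry interval are all assumptions based on the rest of the question):

package main

import (
    "bytes"
    "fmt"
    "net/http"
    "time"
)

// SendhttpPOSTRequest posts to the server and retries until the request
// is delivered. All concrete values here are assumptions, not the
// asker's actual code.
func SendhttpPOSTRequest() {
    url := "http://serverA:9090/itemhandler" // service name + container port
    payload := []byte(`{"item": "example"}`) // placeholder body

    for {
        resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
        if err == nil {
            resp.Body.Close()
            fmt.Println("request delivered:", resp.Status)
            return
        }
        fmt.Println("server not reachable yet, retrying:", err)
        time.Sleep(2 * time.Second)
    }
}

func main() {
    SendhttpPOSTRequest()
}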
The problem is that when I start the docker-compose file, both containers get created and run, but the client keeps sending the HTTP request and cannot reach the server, even though I have changed the server information in config.toml as shown below and restarted the client container. I have also used the server IP address obtained from the apinet network and restarted the client container, but it still could not deliver the HTTP request.
config.toml:
serverIP = "serverA" or serverIP = "172.x.0.x"
serverPort = "9090"
A ping between serverA and clientA using their names works, e.g. ping serverA from clientA and vice versa. An HTTP request from the host machine to the server also works, but the HTTP request from the client does not reach the server. An HTTP request to the server made from inside the client container works, but that is not what I want. What I want is for the client to send its HTTP request (execute SendhttpPOSTRequest()) to the server when its container starts, retrying until the server is up and running.
I have searched Stack Overflow but could not find a similar problem. Can anyone help me? I am new to Docker.

rsyslog not connecting to elasticsearch in docker

I am trying to capture syslog messages sent over the network with rsyslog, and then have rsyslog transform these messages and send them to Elasticsearch.
I found a nice article on the configuration: https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup: it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in Docker on a single machine. I use the docker-compose file below to start the stack.
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- 9200:9200
networks:
- logging-network
kibana:
image: docker.elastic.co/kibana/kibana:7.6.1
depends_on:
- logstash
ports:
- 5601:5601
networks:
- logging-network
rsyslog:
image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
environment:
- TZ=UTC
- xpack.security.enabled=false
ports:
- 514:514/tcp
- 514:514/udp
volumes:
- ./rsyslog.conf:/etc/rsyslog.conf:ro
- rsyslog-work:/work
- rsyslog-logs:/logs
volumes:
rsyslog-work:
rsyslog-logs:
networks:
logging-network:
driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitly
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
constant(value="{")
constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"program\":\"") property(name="programname")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",") property(name="$!all-json" position.from="2")
# closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not: the rsyslog service never joins logging-network. Second, after running the containers on the same network, log in to the rsyslog container and check whether Elasticsearch is reachable on port 9200, using the service name elasticsearch rather than localhost (inside a container, localhost is the container itself, which is why omelasticsearch's default of localhost:9200 fails).
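As a sketch, the fix could look like this in the compose file (attaching rsyslog to the existing logging-network):

  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network   # same network as elasticsearch
    # ...rest of the service unchanged

and in rsyslog.conf, pointing omelasticsearch at the elasticsearch service name rather than the default localhost (server and serverport are standard omelasticsearch parameters):

action(type="omelasticsearch" server="elasticsearch" serverport="9200"
       template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")

With that, rsyslog resolves elasticsearch through Docker's internal DNS instead of trying localhost:9200 inside its own container.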

How to fix dial tcp i/o timeout while calling public address in docker container?

I'm setting up a new container 'A' which calls some endpoints of container 'B'. Why do these calls always return dial tcp 116.2.153.48:8082: i/o timeout?
The call from container 'A' uses the public IP. All containers are deployed on CentOS 7. Each container has its own network, with its own database inside that network. The call that returns the error works fine from any REST API client, such as Postman.
The nameservers in the resolv.conf file have been changed to Google's 8.8.8.8 and 8.8.4.4.
Error: error="Post http://116.2.153.48:8082/new_user?email=eto#email.com: dial tcp 116.203.153.48:8082: i/o timeout"
Call from the program:
req, err := http.NewRequest(http.MethodPost, fmt.Sprintf("http://116.2.153.48:8082/new_user?email=%s", user.Email), nil)
if err != nil {
    return err
}

httpClient := &http.Client{}
resp, err := httpClient.Do(req)
if err != nil {
    return err
}
UPD:
Docker-compose of the first container:
payment-ms:
  container_name: payment-ms
  build:
    context: .
    dockerfile: Dockerfile
  environment:
    - DB_HOST=payment-ms-db
  ports:
    - 8082:8082
Docker-compose file of the second container:
user-ms:
  container_name: user-ms
  build:
    context: .
    dockerfile: Dockerfile
  environment:
    - DB_HOST=user-ms-db
  ports:
    - 8080:8080
  depends_on:
    user-ms-db:
      condition: service_healthy
Also, on my local machine running macOS everything works fine; the problem reproduces only on the VPS with CentOS 7.
The problem was caused by two issues.
First, the containers must be in the same network. Second, once they are in the same network, calls between them must use the container name as the host. For example:
$ docker ps -a
CONTAINER ID   IMAGE        COMMAND              CREATED         STATUS         PORTS                              NAMES
9c6c31b8ec21   user-ms      "./user-ms run"      3 minutes ago   Up 3 minutes   8080/tcp, 0.0.0.0:9980->9980/tcp   user-ms
13863218f942   finance-ms   "./finance-ms run"   3 minutes ago   Up 3 minutes   0.0.0.0:9982->9982/tcp             finance-ms
That means curl and all other calls from the user-ms container to finance-ms must use the address finance-ms:9982.
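A sketch of how both services could share one user-defined network in the compose file (the network name backend is an assumption; with this in place, user-ms can reach payment-ms at http://payment-ms:8082 instead of the public IP):

services:
  payment-ms:
    # ...as defined above
    networks:
      - backend
  user-ms:
    # ...as defined above
    networks:
      - backend

networks:
  backend:
    driver: bridge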

docker-compose with rabbitmq

I'm trying to set up a docker-compose script to start a dummy website, API, gateway and RabbitMQ (a microservice approach).
The request pipeline:
Web >> Gateway >> API >> RabbitMQ
My docker-compose looks like this:
version: "3.4"
services:
web:
image: webclient
build:
context: ./WebClient
dockerfile: dockerfile
ports:
- "4000:4000"
depends_on:
- gateway
gateway:
image: gatewayapi
build:
context: ./GateWayApi
dockerfile: dockerfile
ports:
- "5000:5000"
depends_on:
- ordersapi
ordersapi:
image: ordersapi
build:
context: ./ExampleOrders
dockerfile: dockerfile
ports:
- "6002:6002"
depends_on:
- rabbitmq
rabbitmq:
image: rabbitmq:3.7-management
container_name: rabbitmq
hostname: rabbitmq
volumes:
- rabbitmqdata:/var/lib/rabbitmq
ports:
- "7000:15672"
- "7001:5672"
environment:
- RABBITMQ_DEFAULT_USER=rabbitmquser
- RABBITMQ_DEFAULT_PASS=some_password
This part of the pipeline works:
Web >> Gateway >> API
I get a response from the API on the website. But when I try to push a message to RabbitMQ from the API, I get the following error:
System.AggregateException: One or more errors occurred. (Connection failed) ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: Connection refused 127.0.0.1:7001
The RabbitMQ management GUI still works on the defined port 7000; requests to port 7001 do not.
However, if I start the API and RabbitMQ manually, it works like a charm. I start the API with a debugger (.NET Core + IIS, default settings, hitting F5 in VS), and this is the command I use to start the Docker image manually:
docker run -p 7001:5672 -p 7000:15672 --hostname localhost -e RABBITMQ_DEFAULT_USER=rabbitmquser -e RABBITMQ_DEFAULT_PASS=some_password rabbitmq:3.7-management
Update
This is how I inject the config in the .Net core pipe.
startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // set up RabbitMQ
    var configSection = Configuration.GetSection("RabbitMQ");
    string host = configSection["Host"];
    int.TryParse(configSection["Port"], out int port);
    string userName = configSection["UserName"];
    string password = configSection["Password"];

    services.AddTransient<IConnectionFactory>(_ => new ConnectionFactory()
    {
        HostName = host,
        Port = port,
        UserName = userName,
        Password = password
    });

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}
controller.cs
private readonly IConnectionFactory _rabbitFactory;

public ValuesController(IConnectionFactory rabbitFactory)
{
    _rabbitFactory = rabbitFactory;
}

public void PublishMessage()
{
    try
    {
        using (var connection = _rabbitFactory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            string exchangeName = "ExampleApiController";
            string routingKey = "MyCustomRoutingKey";

            channel.ExchangeDeclare(exchange: exchangeName, type: "direct", durable: true);
            SendMessage("Payload to queue 1", channel, exchangeName, routingKey);
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.InnerException);
    }
}

private static void SendMessage(string message, IModel channel, string exchangeName, string routingKey)
{
    byte[] body = Encoding.UTF8.GetBytes(message);

    channel.BasicPublish(exchange: exchangeName,
                         routingKey: routingKey,
                         basicProperties: null,
                         body: body);

    Console.WriteLine($" Sending --> Exchange: { exchangeName } Queue: { routingKey } Message: {message}");
}
I imagine that in your caller you set the RabbitMQ URL to localhost:7001. However, the caller runs in a container, which does not have anything running on port 7001; RabbitMQ is listening on port 7001 on your host.
You need to change the URL to rabbitmq:5672 to use the internal network, or use host.docker.internal:7001 if you are using Windows or Mac with Docker 18.03+.
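A minimal sketch of the matching configuration (the file name appsettings.json and its surrounding structure are assumptions; the key names mirror the ConfigureServices code above):

{
  "RabbitMQ": {
    "Host": "rabbitmq",
    "Port": "5672",
    "UserName": "rabbitmquser",
    "Password": "some_password"
  }
}

With this, the injected ConnectionFactory connects to the rabbitmq service over the compose network instead of trying localhost inside the API container.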

Setting up more than one MQTT broker with Docker

Using Docker, I was able to use eclipse-mosquitto to set up an MQTT broker for my app, which subscribes to messages. I'm learning Docker right now, so I wanted to try adding two brokers to the docker-compose file with different ports mapped, like this:
version: '3'
services:
  myapp:
    ...
    links:
      - mqtt
      - mqtt2
    depends_on:
      - mqtt
      - mqtt2
  mqtt:
    image: eclipse-mosquitto:latest
    container_name: mqtt-iot
    ports:
      - 1883:1883
  mqtt2:
    image: eclipse-mosquitto:latest
    container_name: mqtt2-iot
    ports:
      - 1884:1883
From outside the myapp container (i.e. from my OS X terminal), both mqtt and mqtt2 work; I can publish and subscribe to messages as expected.
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
However, from inside the myapp container, I can only connect to mqtt. The mqtt2 connection fires the offline event right away and never connects. What do I need to do for myapp to use both of those brokers properly?
There are two issues here. First:
links:
  - mqtt
  - mqtt2
links is deprecated now and is not even required in your compose file. Second, take the code below:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1884}) // Success
This works from outside because it targets the ports mapped on the host. When you connect from the app container, you should do it like this:
const mqtt = require('mqtt')
mqtt.connect('mqtt://mqtt', {port: 1883}) // Success
mqtt.connect('mqtt://mqtt2', {port: 1883}) // Success
The container cannot see ports mapped on the host; it only sees what is inside the Docker network, and on that network both brokers listen on 1883.
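For reference, a sketch of the myapp service with the deprecated links block dropped (services on the same compose network already resolve each other by name; mqtt and mqtt2 stay unchanged):

services:
  myapp:
    ...
    depends_on:
      - mqtt
      - mqtt2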
