FIWARE Orion + Cosmos: subscription fails with "Timeout was reached" when trying to notify a PC on the same network - Docker

I have a scenario on a Citrix XenServer host with 3 CentOS 7 virtual machines (VMs) on the same network (10.0.1.0/24). Each VM is responsible for producing a prediction with Apache Spark in Scala (logistic regression). I use the Orion Context Broker (CB) on Docker to create subscriptions that are triggered to request the predictions. The CB runs only on the 10.0.1.4 VM, and I exposed some ports so it can be reached from the other machines.
My docker-compose.yml:
mongo:
  image: mongo:4.4
  command: --nojournal
orion:
  image: fiware/orion
  links:
    - mongo
  ports:
    - "1026:1026"
    - "1027:1027"
    - "1028:1028"
    - "9090:9090"
  command: -dbhost mongo -corsOrigin __ALL
For example, to access the CB from the 10.0.1.2 VM I use 10.0.1.4:1028/..... and so on.
This is the subscription I'm having problems with (the one for the 10.0.1.3 VM probably has the same problem):
curl -v localhost:1026/v2/subscriptions -s -S -H 'Content-Type: application/json' -d @- <<EOF
{
  "description": "Suscripcion de anemia para monitorear al Paciente",
  "subject": {
    "entities": [
      {
        "id": "Paciente1",
        "type": "Paciente"
      }
    ],
    "condition": {
      "attrs": ["calculateAnaemia"]
    },
    "expression": {
      "q": "calculateAnaemia:1"
    }
  },
  "notification": {
    "http": {
      "url": "http://10.0.1.2:9002/notify"
    },
    "attrs": ["gender","age","hemoglobin","mch","mchc","mcv"]
  },
  "expires": "2040-01-01T14:00:00.00Z",
  "throttling": 10
}
EOF
On the 10.0.1.2 VM I have code that listens on port 9002, via FIWARE Cosmos, for changes related to this subscription:
(For the eventStream variable, the 10.0.1.4 VM uses port 9004 and the 10.0.1.3 VM uses port 9003.
For the conf variable, spark.driver.host is set to 10.0.1.4 on the 10.0.1.4 VM and to 10.0.1.3 on the 10.0.1.3 VM.)
import esqMensajeria.ActorSysMensajeria.ActoresEsquema
import org.apache.spark._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.fiware.cosmos.orion.spark.connector._

object Main extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("AnaemiaPrediction").set("spark.driver.host", "10.0.1.2")
  val ssc = new StreamingContext(conf, Seconds(10))
  // Create Orion Source. Receive notifications on port 9002
  val eventStream = ssc.receiverStream(new OrionReceiver(9002))
  // Messaging scheme
  val actorSysMsj = new ActoresEsquema()
  println("Esperando cambios para obtener información...")
  // Process event stream
  val processedDataStream = eventStream
    .flatMap(event => event.entities)
    .map(entity => {
      val gender: Int = entity.attrs("gender").value.asInstanceOf[Number].intValue()
      val age: Int = entity.attrs("age").value.asInstanceOf[Number].intValue()
      val hemoglobin: Double = entity.attrs("hemoglobin").value.asInstanceOf[Double].doubleValue()
      val mch: Double = entity.attrs("mch").value.asInstanceOf[Double].doubleValue()
      val mchc: Double = entity.attrs("mchc").value.asInstanceOf[Double].doubleValue()
      val mcv: Double = entity.attrs("mcv").value.asInstanceOf[Double].doubleValue()
      actorSysMsj.start((entity.id, gender, age, hemoglobin, mch, mchc, mcv), conf)
      (entity.id, gender, age, hemoglobin, mch, mchc, mcv)
    })
  processedDataStream.print
  ssc.start()
  ssc.awaitTermination()
}
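Before involving Orion at all, it can help to confirm that the connector is actually listening on the 10.0.1.2 VM. This is only a sketch: the OrionReceiver will most likely reject an empty payload, but any immediate HTTP response (instead of a hang) shows the listener is up and the port is open locally.
# On the 10.0.1.2 VM, while the Spark job is running
ss -tlnp | grep 9002                      # is anything bound on port 9002?
curl -v -X POST http://localhost:9002/notify \
  -H 'Content-Type: application/json' -d '{}'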
But when it is triggered, the subscription fails and shows the following (not only on the 10.0.1.2 VM but on the 10.0.1.3 VM too):
{"id":"61cb8569a1e87a254e16066d",
"description":"Suscripcion de anemia para monitorear al Paciente",
"expires":"2040-01-01T14:00:00.000Z",
"status":"failed",
"subject":{"entities":[{"id":"Paciente1","type":"Paciente"}],
"condition":{"attrs":["calculateAnaemia"]}},
"notification":
{"timesSent":3,
"lastNotification":"2021-12-29T00:03:49.000Z",
"attrs":"gender","age","hemoglobin","mch","mchc","mcv"],"
onlyChangedAttrs":false,
"attrsFormat":"normalized",
http":{"url":"http://10.0.1.2:9002/notify"},
"lastFailure":"2021-12-29T00:03:54.000Z",
"lastFailureReason":"Timeout was reached"},
"throttling":10}]
The curious thing is that the subscription for the 10.0.1.4 VM, the one hosting the CB container, stays active and I get the expected result.
This is the subscription:
curl -v localhost:1026/v2/subscriptions -s -S -H 'Content-Type: application/json' -d @- <<EOF
{
  "description": "Suscripcion de deceso para monitorear al Paciente",
  "subject": {
    "entities": [
      {
        "id": "Paciente1",
        "type": "Paciente"
      }
    ],
    "condition": {
      "attrs": ["calculateDeceased"]
    },
    "expression": {
      "q": "calculateDeceased:1"
    }
  },
  "notification": {
    "http": {
      "url": "http://10.0.1.4:9004/notify"
    },
    "attrs": [
      "gender","age","hasAnaemia","creatinePP","hasDiabetes","ejecFrac","highBloodP","platelets","serumCreatinine","serumSodium","smoking","time"
    ]
  },
  "expires": "2040-01-01T14:00:00.00Z"
}
EOF
This is the response when it is triggered, and it processes perfectly:
{"id":"61caab07a1e87a254e160665",
"description":"Suscripcion de deceso para monitorear al Paciente",
"expires":"2040-01-01T14:00:00.000Z",
"status":"active",
"subject":{"entities":[{"id":"Paciente1","type":"Paciente"}],
"condition":{"attrs":["calculateDeceased"]}},
"notification":{"timesSent":1,"lastNotification":"2021-12-28T06:15:41.000Z",
"attrs":["gender","age","hasAnaemia","creatinePP","hasDiabetes","ejecFrac","highBloodP","platelets","serumCreatinine","serumSodium","smoking","time"],
"onlyChangedAttrs":false,
"attrsFormat":"normalized",
"http":{"url":"http://10.0.1.4:9004/notify"},
"lastSuccess":"2021-12-28T06:15:43.000Z",
"lastSuccessCode":200}}
I have to say I'm new to Spark, Scala and even FIWARE, but projects are projects and maybe I'm missing something I did not see in everything I read while setting up this project. I also stopped all the firewalls (firewalld) because I was getting a "Couldn't connect to server" error on the subscriptions for the 10.0.1.2 and 10.0.1.3 VMs. I ran sudo yum update too, and all the VMs ping each other with a good response. One thing I don't know whether it matters: all my VMs have internet access, but I can't ping, for example, www.google.com or 8.8.8.8. So, any suggestions are welcome! I apologize for my English. Thanks in advance.
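A quick way to reproduce the failing notification path is to issue the same kind of HTTP call Orion makes, both from the 10.0.1.4 host and from inside the Orion container (which is where the notification actually originates). A minimal sketch, assuming the container is named fiware_orion_1 (check with docker ps) and that curl is available in the image:
# From the 10.0.1.4 host: can we reach the receiver on 10.0.1.2?
curl -v -X POST http://10.0.1.2:9002/notify \
  -H 'Content-Type: application/json' -d '{}'

# Same test from inside the Orion container (curl may not be present in fiware/orion)
docker exec -it fiware_orion_1 \
  curl -v -X POST http://10.0.1.2:9002/notify \
  -H 'Content-Type: application/json' -d '{}'
A fast HTTP response (even an error about the payload) means the network path is open; a long hang ending in a timeout points at a firewall or routing problem, which matches the lastFailureReason above.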

Well, after three days of searching and trying, I discovered that I need to turn off the firewalls on the 10.0.1.2 and 10.0.1.3 VMs and keep the one on 10.0.1.4 enabled.
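For reference, this is roughly what that looks like with firewalld on CentOS 7. Disabling the firewall is what solved it here; a narrower alternative (a sketch, with the ports taken from the subscriptions above) would be to keep firewalld and open only the notification port on each VM:
# Option 1: what worked - stop firewalld on 10.0.1.2 and 10.0.1.3
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Option 2: keep firewalld but open the notification port
# (9002 on 10.0.1.2, 9003 on 10.0.1.3)
sudo firewall-cmd --permanent --add-port=9002/tcp
sudo firewall-cmd --reload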

Related

Dockerized FIWARE can't notify a service

I just started to use FIWARE. I downloaded the latest version (v2) from the website, using docker-compose on a Pop!_OS distro.
I'm using Postman to make requests (create the entities and subscriptions) and a Laravel application to listen for the notifications from the FIWARE subscriptions. But for some reason, today, when I started the docker service and began sending requests, the FIWARE notifications suddenly stopped working.
When I access the subscriptions endpoint FIWARE returns:
"notification": {
"timesSent": 1,
"lastNotification": "2021-09-02T01:19:39.000Z",
"attrs": [],
"onlyChangedAttrs": false,
"attrsFormat": "keyValues",
"http": {
"url": "http://localhost:8000/api/notifications"
},
"lastFailure": "2021-09-02T01:19:39.000Z",
"lastFailureReason": "Couldn't connect to server"
}
FIWARE can't communicate, but if I make a POST request with Postman to that endpoint (http://localhost:8000/api/notifications) it returns 200.
Is there some additional configuration needed between the FIWARE docker container and the local machine? Or am I doing something wrong?
This is my entity:
// http://{{orion}}/v2/entities
{
  "id": "movie",
  "type": "movie",
  "name": {
    "type": "text",
    "value": "movie name"
  },
  "gender": {
    "type": "text",
    "value": "drama"
  }
}
This is how I'm doing the subscription:
// http://{{orion}}/v2/subscriptions
{
  "description": "Notify me about any movie of gender drama",
  "subject": {
    "entities": [{"idPattern": ".*", "type": "movie"}],
    "condition": {
      "attrs": ["gender"],
      "expression": {
        "q": "gender==drama"
      }
    }
  },
  "notification": {
    "http": {
      "url": "http://127.0.0.1:8000/api/notifications"
    }
  }
}
If you are using Docker, then you need to consider what http://localhost:8000/api/notifications actually means. localhost will mean the localhost as experienced by the Orion container itself. Generally Orion listens on 1026 and there is nothing listening on 8000 within a dockerized Orion, therefore your subscription fails.
If you have another micro-service running within the same docker network and in a separate container you must use the hostname of that container (or an alias or defined IP) to describe the notification URL, not localhost.
So for example in the following tutorial where a subscription payload is displayed on screen:
curl -iX POST \
--url 'http://localhost:1026/v2/subscriptions' \
--header 'content-type: application/json' \
--data '{
"description": "Notify me of all product price changes",
"subject": {
"entities": [{"idPattern": ".*", "type": "Product"}],
"condition": {
"attrs": [ "price" ]
}
},
"notification": {
"http": {
"url": "http://tutorial:3000/subscription/price-change"
}
}
}'
refers to a container which is called tutorial within the docker network
tutorial:
  image: fiware/tutorials.context-provider
  hostname: tutorial
  container_name: fiware-tutorial
  depends_on:
    - orion
  networks:
    default:
      aliases:
        - iot-sensors
        - context-provider
  expose:
    - 3000
As it happens the tutorial container is also exposing its internal port 3000 to the localhost of the machine it is running on so it can be viewed by a user, but Orion can only access it via the hostname on the docker network.
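In this question's setup the Laravel listener runs on the host machine itself rather than in another container, so the notification URL must point at an address the Orion container can route to - the host's LAN IP, or host.docker.internal if the Orion service is given a mapping for it. A rough sketch (the extra_hosts entry requires a reasonably recent Docker; port 8000 is taken from the question):
# docker-compose.yml fragment - let the Orion container resolve the docker host
orion:
  image: fiware/orion
  extra_hosts:
    - "host.docker.internal:host-gateway"
and the subscription would then use something like:
"notification": {
  "http": {
    "url": "http://host.docker.internal:8000/api/notifications"
  }
}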

Port setting of Docker images in Cloud Foundry

I tried pushing a Docker image of Eclipse Theia to CF, but I'm unable to start it (or rather, connect to it). The image exposes port 3000 with EXPOSE 3000. The app works, and running it locally opens the default Theia home screen.
On CF, sufficient disk and memory are given.
When the default port health check is set, cf hangs at starting the app.
Creating app theia-docker...
Mapping routes...
Staging app and tracing logs...
Cell 15fcfa4a-a364-4dc2-ab6b-349f5196bd80 creating container for instance bd4b9e65-946f-485a-9de1-5c7fc8d4ad01
Cell 15fcfa4a-a364-4dc2-ab6b-349f5196bd80 successfully created container for instance bd4b9e65-946f-485a-9de1-5c7fc8d4ad01
Staging...
Staging process started ...
Staging process finished
Exit status 0
Staging Complete
Cell 15fcfa4a-a364-4dc2-ab6b-349f5196bd80 stopping instance bd4b9e65-946f-485a-9de1-5c7fc8d4ad01
Cell 15fcfa4a-a364-4dc2-ab6b-349f5196bd80 destroying container for instance bd4b9e65-946f-485a-9de1-5c7fc8d4ad01
Cell 15fcfa4a-a364-4dc2-ab6b-349f5196bd80 successfully destroyed container for instance bd4b9e65-946f-485a-9de1-5c7fc8d4ad01
It eventually ends in FAILED.
cf logs would show:
2021-06-12T14:37:25.40+0530 [APP/PROC/WEB/0] OUT root INFO Deploy plugins list took: 161.7 ms
2021-06-12T14:38:24.77+0530 [HEALTH/0] ERR Failed to make TCP connection to port 2375: connection refused; Failed to make TCP connection to port 2376: connection refused
2021-06-12T14:38:24.77+0530 [CELL/0] ERR Failed after 1m0.303s: readiness health check never passed.
Why is it picking the wrong port number?
If I try setting the port in an environment variable with cf set-env theia-docker PORT 3000, I get
FAILED
Server error, status code: 400, error code: 100001, message: The app is invalid: environment_variables cannot set PORT
I then set the health check to process. Of course, this starts successfully (whether the app actually works or not). Checking the logs shows that the app has started successfully. When I ssh into the app (cf ssh theia-docker) I can curl the application at localhost:3000 and it returns the HTML of the homepage.
~ % cf ssh theia-docker
bash-5.0$ curl localhost:3000
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="apple-mobile-web-app-capable" content="yes">
<script type="text/javascript" src="./bundle.js" charset="utf-8"></script>
</head>
<body>
<div class="theia-preload"></div>
</body>
</html>bash-5.0$
However, when I try to connect to the app via the application URL I get the error:
502 Bad Gateway: Registered endpoint failed to handle the request.
The reason I see for this is that the base image I used is based on docker:dind, and it seems that in that base image ports 2375 and 2376 are exposed.
Why does CF pick the ports exposed in the base image rather than the one exposed in the docker image that is created? Shouldn't the port in the current image take precedence?
Changing the route mappings helped. These are the steps:
Get app guid
~ % cf app theia-docker --guid
8032eea6-d146-4d27-9b17-c7331852b59b
Add the required port
cf curl /v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b -X PUT -d '{"ports": [3000]}'
{
"metadata": {
"guid": "8032eea6-d146-4d27-9b17-c7331852b59b",
"url": "/v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b",
"created_at": "2021-06-12T09:04:51Z",
"updated_at": "2021-06-12T17:58:03Z"
},
"entity": {
"name": "theia-docker",
"production": false,
.
.
.
"ports": [
3000,
2375,
2376
],
.
.
.
Get the routes attached to the app
~ % cf curl /v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b/routes
{
"total_results": 1,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "21f89763-baab-456d-8151-aad383a3c28f",
.
.
.
Use the route-guid to find route_mappings:
cf curl /v2/routes/21f89763-baab-456d-8151-aad383a3c28f/route_mappings
{
"total_results": 1,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "33bde252-ad3e-49b4-91df-78543ac452b4",
"url": "/v2/route_mappings/33bde252-ad3e-49b4-91df-78543ac452b4",
"created_at": "2021-06-12T09:04:51Z",
"updated_at": "2021-06-12T09:04:51Z"
},
"entity": {
"app_port": null,
"app_guid": "8032eea6-d146-4d27-9b17-c7331852b59b",
"route_guid": "21f89763-baab-456d-8151-aad383a3c28f",
"app_url": "/v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b",
"route_url": "/v2/routes/21f89763-baab-456d-8151-aad383a3c28f"
}
}
]
}
Create a new route mapping using the app_guid, route_guid and app_port:
~% cf curl /v2/route_mappings -X POST -d '{"app_guid":"8032eea6-d146-4d27-9b17-c7331852b59b","route_guid":"21f89763-baab-456d-8151-aad383a3c28f", "app_port":3000}'
{
"metadata": {
"guid": "a62a2ea6-859f-48cc-aa33-a8d6583081da",
"url": "/v2/route_mappings/a62a2ea6-859f-48cc-aa33-a8d6583081da",
"created_at": "2021-06-12T18:02:19Z",
"updated_at": "2021-06-12T18:02:19Z"
},
"entity": {
"app_port": 3000,
"app_guid": "8032eea6-d146-4d27-9b17-c7331852b59b",
"route_guid": "21f89763-baab-456d-8151-aad383a3c28f",
"app_url": "/v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b",
"route_url": "/v2/routes/21f89763-baab-456d-8151-aad383a3c28f"
}
}
List the route mappings again:
~ % cf curl /v2/routes/21f89763-baab-456d-8151-aad383a3c28f/route_mappings
{
"total_results": 2,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "33bde252-ad3e-49b4-91df-78543ac452b4",
"url": "/v2/route_mappings/33bde252-ad3e-49b4-91df-78543ac452b4",
"created_at": "2021-06-12T09:04:51Z",
"updated_at": "2021-06-12T09:04:51Z"
},
"entity": {
"app_port": null,
"app_guid": "8032eea6-d146-4d27-9b17-c7331852b59b",
"route_guid": "21f89763-baab-456d-8151-aad383a3c28f",
"app_url": "/v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b",
"route_url": "/v2/routes/21f89763-baab-456d-8151-aad383a3c28f"
}
},
{
"metadata": {
"guid": "a62a2ea6-859f-48cc-aa33-a8d6583081da",
"url": "/v2/route_mappings/a62a2ea6-859f-48cc-aa33-a8d6583081da",
"created_at": "2021-06-12T18:02:19Z",
"updated_at": "2021-06-12T18:02:19Z"
},
"entity": {
"app_port": 3000,
"app_guid": "8032eea6-d146-4d27-9b17-c7331852b59b",
"route_guid": "21f89763-baab-456d-8151-aad383a3c28f",
"app_url": "/v2/apps/8032eea6-d146-4d27-9b17-c7331852b59b",
"route_url": "/v2/routes/21f89763-baab-456d-8151-aad383a3c28f"
}
}
]
}
You will find the new route mapping that was created. Delete the unwanted one.
~ % cf curl /v2/route_mappings/33bde252-ad3e-49b4-91df-78543ac452b4 -X DELETE
That's about it. Any better solutions are welcome :) (not ones that involve maintaining the Dockerfile of the base image, of course).

Hashicorp Vault docker networking issue

When setting up on a brand new EC2 server as a test I run the following and it all works fine.
/vault/config/local.json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h",
  "ui": true
}
docker run -d -p 8200:8200 -v /home/ec2-user/vault:/vault --cap-add=IPC_LOCK vault server
export VAULT_ADDR='http://0.0.0.0:8200'
vault operator init
I unseal and login fine.
On one of our corporate test servers I use 0.0.0.0 and I get a "web server busy, sorry" page on the init. However, if I export 127.0.0.1 the init works fine. I cannot access the container from the server's command line with curl using either 0.0.0.0 or 127.0.0.1. I'm unsure why the behaviours are different.
I understand that 127.0.0.1 should not work, but why do I get "server busy" with 0.0.0.0 on one server and not on the other, when it is the same container?
Thanks Mark
The listener works fine inside the container with 0.0.0.0. To access the container externally you need to set VAULT_ADDR to an address the server understands, not one that is only meaningful inside the container.
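A minimal sketch of what that can look like, assuming the container was started with -p 8200:8200 as in the question (the <host-ip> placeholder stands for whatever address the host is reachable on):
# On the host itself, the published port is reachable via loopback
export VAULT_ADDR='http://127.0.0.1:8200'
vault status

# From another machine, use the host's routable address instead
export VAULT_ADDR='http://<host-ip>:8200'
vault status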

Virtual machine with Vagrant is not accessible from the client the first time

I'm trying to run a Rails project using Nginx with Docker and Vagrant. Everything is fine if I use the vagrant box ubuntu/trusty64: I provision the VM and everything works. But I wanted to create my own box from ubuntu/trusty64, and this is when all my problems began.
So I created the box using packer and this template:
{
  "variables": {
    "home": "{{env `HOME`}}"
  },
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
      "override": {
        "virtualbox-ovf": {
          "scripts": [
            "scripts/docker.sh",
            "scripts/ansible.sh",
            "scripts/cleanup.sh",
            "scripts/zerodisk.sh"
          ]
        }
      }
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "override": {
        "virtualbox": {
          "output": "ubuntu-14-04-x64-virtualbox.box"
        }
      }
    }
  ],
  "builders": [
    {
      "type": "virtualbox-ovf",
      "headless": "true",
      "boot_wait": "10s",
      "source_path": "{{user `home`}}/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64/14.04/virtualbox/box.ovf",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo 'shutdown -P now' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
      "vboxmanage": [
        [ "modifyvm", "{{.Name}}", "--memory", "512" ],
        [ "modifyvm", "{{.Name}}", "--cpus", "1" ]
      ]
    }
  ]
}
Then I added the box as pedrof/base-box to Vagrant's boxes and used this Vagrantfile to start the VM:
Vagrant.configure(2) do |config|
  config.vm.provider 'virtualbox' do |v|
    v.memory = 2048
    v.cpus = 2
  end
  config.vm.box = 'pedrof/base-box'
  config.vm.synced_folder '.', '/vagrant', type: 'nfs', mount_options: ['nolock,vers=3,udp,noatime,actimeo=1']
  config.vm.network :private_network, ip: '172.17.8.100'
  config.vm.provision 'shell', path: "docker/build.sh"
  config.vm.provision 'shell', path: "docker/init.sh", run: 'always'
end
It starts the VM and starts the docker containers using docker-compose. Everything is ok, except that I can't access http://172.17.8.100 from the browser, although ping responds fine from the host. I ran curl to hit Nginx from inside the VM and it responded with the proper index page, but nothing from outside the VM. The weird thing is that everything works fine if I reload the VM with vagrant reload.
Am I building the box incorrectly? Is something missing in the Vagrantfile?
I assume you start a docker container inside the Vagrant box, the container is a web server, and you want to access that web server with your browser. Then you will need port forwarding to your host machine.
So first your container port must be mapped onto a box port. This is done with Docker's -p parameter, for example -p 8080:8080. Then the port is available inside the box. You say that you can access it from inside the box, so I think this is already configured correctly.
Try to forward this port out of the box. Add this to your Vagrantfile
...
config.vm.network "forwarded_port", guest: 8080, host: 8080
...
Now try to access port 8080 with http://localhost:8080
If you want to make http://172.17.8.100 work, then you will have to map the container port onto port 80 of the box, which needs root access because it is a privileged port.
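For instance, a sketch of that mapping (the image and container names here are made up, and 8080 is assumed to be the Nginx container's internal port):
# publish the container's web port on the box's port 80
docker run -d --name web -p 80:8080 my-nginx-image

# or, in docker-compose.yml
web:
  image: my-nginx-image
  ports:
    - "80:8080"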

Docker's containers communication using Consul

I have read about service discovery for Docker using Consul, but I can't understand it.
Could you explain how I can run two Docker containers, discover the host of the second container from the first one using Consul, and send a message to it?
You would need to run the Consul Agent in client mode inside each Docker container. Each Docker container will need a Consul service definition file to let the Agent know to advertise its service to the Consul servers.
They look like this:
{
  "service": {
    "name": "redis",
    "tags": ["master"],
    "address": "127.0.0.1",
    "port": 8000,
    "checks": [
      {
        "script": "/usr/local/bin/check_redis.py",
        "interval": "10s"
      }
    ]
  }
}
And a Service Health Check to monitor the health of the service. Something like this:
{
  "check": {
    "id": "redis",
    "name": "Redis",
    "script": "/usr/local/bin/check_redis_ping_returns_pong.sh",
    "interval": "10s"
  }
}
In the other Docker container, your code would find the Redis service either via DNS or via the Consul servers' HTTP API:
dig @localhost -p 8600 redis.service.consul
curl $CONSUL_SERVER/v1/health/service/redis?passing
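Putting it together, a rough sketch of how the agents could be wired up (the image name, addresses and paths below are placeholders, not taken from the question):
# A single Consul server, e.g. as its own container
docker run -d --name consul-server consul agent -server -bootstrap-expect=1 -client=0.0.0.0 -ui

# Inside each application container: a client-mode agent that loads the
# service definition shown above from a config directory and joins the server
consul agent -config-dir=/etc/consul.d -data-dir=/tmp/consul -join=<consul-server-address>
Once the service is registered and passing its health check, the dig and curl queries above return the address and port the other container should use.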
