Mosquitto Dynamic Security Plugin: users and roles are not added - mosquitto

I can't add any users or roles with the Dynamic Security Plugin.
When I enter the command
mosquitto_ctrl -u myadmin dynsec createClient testUser
it doesn't give me any errors; it just asks for the new user's password, and when I open dynamic-security.json I can only find the admin user:
ubuntu@instance-20220606-2142:~$ mosquitto_ctrl -u myadmin dynsec createClient testUser
Enter new password for testUser. Press return for no password (user will be unable to login).
New password for testUser:
Reenter password for testUser:
Warning: You are running mosquitto_ctrl without encryption.
This means all of the configuration changes you are making are visible on the network, including passwords.
Password for myadmin:
ubuntu@instance-20220606-2142:~$ sudo nano /opt/mosquitto/dynamic-security.json
ubuntu@instance-20220606-2142:~$
dynamic-security.json:
{
"clients": [
{
"username": "myadmin",
"textName": "Dynsec admin user",
"password": "BkzzrsHWjAo0Kz444L+OVwfrI7kJZgSU5w+2AzJXh3CaI2Dgxy0ze3Vm2K8+PMaMXFwA8uAMZ9D5g1aQuMVMjg==",
"salt": "rhTl0xJLbYuyWq9f",
"iterations": 101,
"roles": [
{
"rolename": "admin"
}
]
}
],
"roles": [
{
"rolename": "admin",
"acls": [
{
"acltype": "publishClientSend",
"topic": "$CONTROL/dynamic-security/#",
"allow": true
},
{
"acltype": "publishClientReceive",
"topic": "$CONTROL/dynamic-security/#",
"allow": true
},
{
"acltype": "subscribePattern",
"topic": "$CONTROL/dynamic-security/#",
"allow": true
},
{
"acltype": "publishClientReceive",
"topic": "$SYS/#",
"allow": true
},
{
"acltype": "subscribePattern",
"topic": "$SYS/#",
"allow": true
},
{
"acltype": "publishClientReceive",
"topic": "#",
"allow": true
},
{
"acltype": "subscribePattern",
"topic": "#",
"allow": true
},
{
"acltype": "unsubscribePattern",
"topic": "#",
"allow": true
}
]
}
],
"defaultACLAccess": {
"publishClientSend": false,
"publishClientReceive": true,
"subscribe": false,
"unsubscribe": true
}
}
default.conf:
listener 1882
listener 1883
#password_file /etc/mosquitto/passwd
protocol websockets
mosquitto.conf:
# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example
per_listener_settings false
plugin /usr/lib/x86_64-linux-gnu/mosquitto_dynamic_security.so
plugin_opt_config_file /opt/mosquitto/dynamic-security.json
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
mosquitto version:
mosquitto version 2.0.15
Could you please support me with this? I can't find a solution; I tried to google it but couldn't find anything.

The most likely problem here is your default.conf file.
You have created a listener on port 1882 and a WebSocket listener on port 1883. It is this second listener that is the problem.
Port 1883 is the default native MQTT port; by switching it to WebSockets, the mosquitto_ctrl command is not able to connect to send the messages required to create the new users.
I suggest you change default.conf to the following:
listener 1882
protocol websockets
listener 1883
This will leave port 1883 as the default MQTT port and run WebSockets on the non-standard port 1882.
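Alternatively, if you want to keep WebSockets on port 1883, mosquitto_ctrl can be pointed at the native MQTT listener explicitly. A minimal sketch, assuming the listeners stay as in the original default.conf (so 1882 is the plain MQTT port):
# Talk to the native MQTT listener on 1882 instead of the default 1883
mosquitto_ctrl -h localhost -p 1882 -u myadmin dynsec createClient testUser

# Confirm the client was stored (path taken from the question's config)
sudo grep testUser /opt/mosquitto/dynamic-security.json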

Related

Fiware Orion + Cosmos: Subscription failed - Timeout was reached trying to notify a PC that belongs to the same network

I have a scenario on a Citrix XenServer server with 3 virtual machines (VMs) running CentOS 7 on the same network (10.0.1.0/24). Each VM is responsible for serving a prediction computed with Apache Spark in Scala (logistic regression). I use the Orion Context Broker (CB) on Docker to create subscriptions that will be triggered to ask for predictions. The CB is located only on the 10.0.1.4 VM, and I made some ports available so it can be accessed from the other machines.
My docker-compose.yml:
mongo:
image: mongo:4.4
command: --nojournal
orion:
image: fiware/orion
links:
- mongo
ports:
- "1026:1026"
- "1027:1027"
- "1028:1028"
- "9090:9090"
command: -dbhost mongo -corsOrigin __ALL
For example, to access the CB from the 10.0.1.2 VM I use 10.0.1.4:1028/..... and so on.
This is the subscription I'm facing problems with (the other one, related to the 10.0.1.3 VM, probably has the same problem too):
curl -v localhost:1026/v2/subscriptions -s -S -H 'Content-Type: application/json' -d @- <<EOF
{
"description": "Suscripcion de anemia para monitorear al Paciente",
"subject": {
"entities": [
{
"id": "Paciente1",
"type": "Paciente"
}
],
"condition": {
"attrs": ["calculateAnaemia"]
},
"expression":{
"q":"calculateAnaemia:1"
}
},
"notification": {
"http": {
"url": "http://10.0.1.2:9002/notify"
},
"attrs": ["gender","age","hemoglobin","mch","mchc","mcv"]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 10
}
EOF
I have code on the 10.0.1.2 VM that listens for changes related to this subscription on port 9002 using FIWARE Cosmos.
For the eventStream variable, the 10.0.1.4 VM uses port 9004 and the 10.0.1.3 VM uses port 9003.
For the conf variable, I set spark.driver.host to 10.0.1.4 on the 10.0.1.4 VM and to 10.0.1.3 on the 10.0.1.3 VM.
import esqMensajeria.ActorSysMensajeria.ActoresEsquema
import org.apache.spark._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.fiware.cosmos.orion.spark.connector._
object Main extends App{
val conf = new SparkConf().setMaster("local[*]").setAppName("AnaemiaPrediction").set("spark.driver.host", "10.0.1.2")
val ssc = new StreamingContext(conf, Seconds(10))
// Create Orion Source. Receive notifications on port 9002
val eventStream = ssc.receiverStream(new OrionReceiver(9002))
// Esquema de mensajeria
val actorSysMsj = new ActoresEsquema ()
println("Esperando cambios para obtener informaciĆ³n...")
// Process event stream
val processedDataStream = eventStream
.flatMap(event => event.entities)
.map(entity => {
val gender: Int = entity.attrs("gender").value.asInstanceOf[Number].intValue()
val age: Int = entity.attrs("age").value.asInstanceOf[Number].intValue()
val hemoglobin: Double = entity.attrs("hemoglobin").value.asInstanceOf[Double].doubleValue()
val mch: Double = entity.attrs("mch").value.asInstanceOf[Double].doubleValue()
val mchc: Double = entity.attrs("mchc").value.asInstanceOf[Double].doubleValue()
val mcv: Double = entity.attrs("mcv").value.asInstanceOf[Double].doubleValue()
actorSysMsj.start((entity.id, gender,age,hemoglobin,mch,mchc,mcv),conf)
(entity.id, gender,age,hemoglobin,mch,mchc,mcv)
})
processedDataStream.print
ssc.start()
ssc.awaitTermination()
}
But when I trigger it, the subscription fails, showing the following (not only on the 10.0.1.2 VM but on 10.0.1.3 too):
{"id":"61cb8569a1e87a254e16066d",
"description":"Suscripcion de anemia para monitorear al Paciente",
"expires":"2040-01-01T14:00:00.000Z",
"status":"failed",
"subject":{"entities":[{"id":"Paciente1","type":"Paciente"}],
"condition":{"attrs":["calculateAnaemia"]}},
"notification":
{"timesSent":3,
"lastNotification":"2021-12-29T00:03:49.000Z",
"attrs":"gender","age","hemoglobin","mch","mchc","mcv"],"
onlyChangedAttrs":false,
"attrsFormat":"normalized",
http":{"url":"http://10.0.1.2:9002/notify"},
"lastFailure":"2021-12-29T00:03:54.000Z",
"lastFailureReason":"Timeout was reached"},
"throttling":10}]
The curious thing is that with the subscription related to the 10.0.1.4 VM, which hosts the CB container, the subscription remains active and I get the expected result.
This is the subscription:
curl -v localhost:1026/v2/subscriptions -s -S -H 'Content-Type: application/json' -d @- <<EOF
{
"description": "Suscripcion de deceso para monitorear al Paciente",
"subject": {
"entities": [
{
"id": "Paciente1",
"type": "Paciente"
}
],
"condition": {
"attrs": [
"calculateDeceased"
]
},
"expression":{
"q":"calculateDeceased:1"
}
},
"notification": {
"http": {
"url": "http://10.0.1.4:9004/notify"
},
"attrs": [
"gender","age","hasAnaemia","creatinePP","hasDiabetes","ejecFrac","highBloodP","platelets","serumCreatinine","serumSodium","smoking","time"
]
},
"expires": "2040-01-01T14:00:00.00Z"
}
EOF
This is the response when it is triggered and everything processes perfectly:
{"id":"61caab07a1e87a254e160665",
"description":"Suscripcion de deceso para monitorear al Paciente",
"expires":"2040-01-01T14:00:00.000Z",
"status":"active",
"subject":{"entities":[{"id":"Paciente1","type":"Paciente"}],
"condition":{"attrs":["calculateDeceased"]}},
"notification":{"timesSent":1,"lastNotification":"2021-12-28T06:15:41.000Z",
"attrs":["gender","age","hasAnaemia","creatinePP","hasDiabetes","ejecFrac","highBloodP","platelets","serumCreatinine","serumSodium","smoking","time"],
"onlyChangedAttrs":false,
"attrsFormat":"normalized",
"http":{"url":"http://10.0.1.4:9004/notify"},
"lastSuccess":"2021-12-28T06:15:43.000Z",
"lastSuccessCode":200}}
I have to say I'm new to Spark, Scala and even FIWARE, but projects are projects, and maybe I'm missing something I did not see in all I read to set up this project. Also, I stopped all firewalls (firewalld) because I was facing a "Couldn't connect to server" error on subscriptions related to the 10.0.1.2 and 10.0.1.3 VMs. I did a sudo yum update too, I pinged all the VMs between each other, and I got a good response. One thing that I do not know if it is important: I have internet on all my VMs but I can't ping, for example, www.google.com or 8.8.8.8. So, any suggestions are welcome! I apologize for my English. Thanks in advance!
Well, after 3 days of looking and trying, I discovered that I need to turn off the firewalls on the 10.0.1.2 and 10.0.1.3 VMs and keep the one on 10.0.1.4.
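Rather than disabling firewalld entirely on the notification receivers, a gentler option may be to open just the notification ports; a minimal sketch, assuming the ports from the question (9002 on 10.0.1.2 and 9003 on 10.0.1.3):
# On 10.0.1.2 (the listener uses port 9002; use 9003/tcp on 10.0.1.3)
sudo firewall-cmd --permanent --add-port=9002/tcp
sudo firewall-cmd --reload

# Verify the rule is active
sudo firewall-cmd --list-ports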

Dockerized FIWARE can't notify a service

I just started to use FIWARE. I downloaded the latest version from the website (v2) using docker-compose on a Pop!_OS distro.
I'm using Postman to make requests (to create the entities and subscriptions) and a Laravel application to listen for the notifications from the FIWARE subscriptions. But for some reason, today, when I started the Docker service and began sending requests, the FIWARE notifications suddenly stopped working.
When I access the subscriptions endpoint FIWARE returns:
"notification": {
"timesSent": 1,
"lastNotification": "2021-09-02T01:19:39.000Z",
"attrs": [],
"onlyChangedAttrs": false,
"attrsFormat": "keyValues",
"http": {
"url": "http://localhost:8000/api/notifications"
},
"lastFailure": "2021-09-02T01:19:39.000Z",
"lastFailureReason": "Couldn't connect to server"
}
FIWARE can't communicate with it, but if I make a POST request to that endpoint (http://localhost:8000/api/notifications) using Postman, it returns 200.
Is there some additional configuration needed between the FIWARE Docker container and the local machine? Or am I doing something wrong?
This is my entity:
// http://{{orion}}/v2/entities
{
"id": "movie",
"type": "movie",
"name": {
"type": "text",
"value": "movie name"
},
"gender": {
"type": "text",
"value": "drama"
}
}
This is how I'm doing the subscription:
// http://{{orion}}/v2/subscriptions
{
"description": "Notify me about any movie of gender drama",
"subject": {
"entities": [{"idPattern": ".*","type": "movie"}],
"condition": {
"attrs": ["gender"],
"expression": {
"q": "gender==drama"
}
}
},
"notification": {
"http": {
"url": "http://127.0.0.1:8000/api/notifications"
}
}
}
If you are using Docker, then you need to consider what http://localhost:8000/api/notifications actually means: localhost is the localhost as experienced by the Orion container itself. Generally Orion listens on 1026, and there is nothing listening on 8000 within a dockerized Orion, so your subscription fails.
If you have another microservice running within the same Docker network and in a separate container, you must use the hostname of that container (or an alias or defined IP) to describe the notification URL, not localhost.
So for example in the following tutorial where a subscription payload is displayed on screen:
curl -iX POST \
--url 'http://localhost:1026/v2/subscriptions' \
--header 'content-type: application/json' \
--data '{
"description": "Notify me of all product price changes",
"subject": {
"entities": [{"idPattern": ".*", "type": "Product"}],
"condition": {
"attrs": [ "price" ]
}
},
"notification": {
"http": {
"url": "http://tutorial:3000/subscription/price-change"
}
}
}'
refers to a container called tutorial within the Docker network:
tutorial:
image: fiware/tutorials.context-provider
hostname: tutorial
container_name: fiware-tutorial
depends_on:
- orion
networks:
default:
aliases:
- iot-sensors
- context-provider
expose:
- 3000
As it happens, the tutorial container also exposes its internal port 3000 to the localhost of the machine it is running on so a user can view it, but Orion can only access it via the hostname on the Docker network.
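In this question's setup the Laravel app runs on the host rather than in a container, so the subscription needs an address the Orion container can route to. A hedged sketch, not from the question itself: on Docker 20.10+ you can give the Orion service an extra_hosts entry of "host.docker.internal:host-gateway" in docker-compose, then subscribe with that name instead of localhost:
# Recreate the failing subscription, pointing at the host-gateway alias
# instead of localhost (which resolves inside the Orion container)
curl -iX POST 'http://localhost:1026/v2/subscriptions' \
-H 'Content-Type: application/json' \
-d '{
"description": "Notify me about any movie of gender drama",
"subject": {
"entities": [{"idPattern": ".*", "type": "movie"}],
"condition": {"attrs": ["gender"]}
},
"notification": {
"http": {"url": "http://host.docker.internal:8000/api/notifications"}
}
}'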

POD Definition - Deploying to DC/OS

I'm new to DC/OS and I have been really struggling trying to deploy a pod. I have tried the simple examples provided in the documentation,
but the deployments remain stuck in the deploying stage. There are plenty of resources available, so that is not the issue.
I have 3 containers that I need to exist within a virtual network (queue, PDI, API). I have included my definition file, which starts with a single-container deployment; once I can deploy that successfully, I will add the 2 additional containers to the definition. I have been looking at this example but have been unsuccessful.
I have successfully deployed the containers one at a time through Jenkins. All 3 images have been published and exist in the Docker registry (JFrog). I have included an example of my marathon.json for one of those successful deployments. I would appreciate any feedback that can help. The service is stuck in the deploying stage, so I'm unable to drill down and see the logs via the command line or UI.
containers.image = pdi-queue
artifactory server = repos.pdi.com:5010/pdi-queue
1 Container POD Definition - (Error: Stuck in Deployment Stage)
{
"id":"/pdi-queue",
"containers":[
{
"name":"simple-docker",
"resources":{
"cpus":1,
"mem":128,
"disk":0,
"gpus":0
},
"image":{
"kind":"DOCKER",
"id":"repos.pdi.com:5010/pdi-queue",
"portMappings":[
{
"hostPort": 0,
"containerPort": 15672,
"protocol": "tcp",
"servicePort": 15672
}
]
},
"endpoints":[
{
"name":"web",
"containerPort":80,
"protocol":[
"http"
]
}
],
"healthCheck":{
"http":{
"endpoint":"web",
"path":"/"
}
}
}
],
"networks":[
{
"mode":"container",
"name":"dcos"
}
]
}
Marathon.json - (No Error: Successful deployment)
{
"id": "/pdi-queue",
"backoffFactor": 1.15,
"backoffSeconds": 1,
"container": {
"portMappings": [
{"containerPort": 15672, "hostPort": 0, "protocol": "tcp", "servicePort": 15672, "name": "health"},
{"containerPort": 5672, "hostPort": 0, "protocol": "tcp", "servicePort": 5672, "name": "queue"}
],
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "repos.pdi.com:5010/pdi-queue",
"forcePullImage": true,
"privileged": false,
"parameters": []
}
},
"cpus": 0.1,
"disk": 0,
"healthChecks": [
{
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"maxConsecutiveFailures": 3,
"portIndex": 0,
"timeoutSeconds": 20,
"delaySeconds": 15,
"protocol": "MESOS_HTTP",
"path": "/"
}
],
"instances": 1,
"maxLaunchDelaySeconds": 3600,
"mem": 512,
"gpus": 0,
"networks": [
{
"mode": "container/bridge"
}
],
"requirePorts": false,
"upgradeStrategy": {
"maximumOverCapacity": 1,
"minimumHealthCapacity": 1
},
"killSelection": "YOUNGEST_FIRST",
"unreachableStrategy": {
"inactiveAfterSeconds": 300,
"expungeAfterSeconds": 600
},
"fetch": [],
"constraints": [],
"labels": {
"traefik.frontend.redirect.entryPoint": "https",
"traefik.frontend.redirect.permanent": "true",
"traefik.enable": "true"
}
}
I may not know the answer to the issues you are running into, but I think I can share some pointers to help debug this.
First of all, if you are unable to view logs from the DC/OS UI, you can also go to <cluster_url>/mesos and find the simple-docker task under Completed Tasks. It would show up as TASK_FAILED. Click on the Sandbox link on the right and then check the stderr and stdout files for the task. There might be some clues there as to why it failed.
Another place to look is the agent logs: note the Agent IP from the Mesos UI where the task failed, SSH into that node, and run sudo journalctl -u dcos-mesos-slave to find the log lines corresponding to the failing task.
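If the DC/OS CLI is configured, the same sandbox logs can usually be pulled without the UI; a minimal sketch (the task name is taken from the pod definition above, and the exact flags may vary by CLI version):
# List tasks, including completed/failed ones, to find the exact task ID
dcos task --completed

# Tail the failing task's sandbox files
dcos task log --completed --lines=100 pdi-queue stderr
dcos task log --completed --lines=100 pdi-queue stdout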
One difference between running the application as a pod and the app definition you shared is that your app definition uses DOCKER as the containerizer for the task, while pods use the MESOS containerizer.
I noticed that you are using a private Docker registry for your images. One possibility is that your private registry's certificate is not trusted by Mesos even though Docker is already configured to trust it. In that case:
<copy the certificate(s) to /var/lib/dcos/pki/tls/certs>
cd /var/lib/dcos/pki/tls/certs
for file in *.crt; do ln -s "$file" "$(openssl x509 -hash -noout -in "$file")".0; done
This would need to be done on each agent node.
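To check whether the certificate is the problem in the first place, you can inspect what the registry actually serves from an agent node; a hedged sketch using the registry host from the question:
# Show the certificate chain the registry presents
openssl s_client -connect repos.pdi.com:5010 -showcerts </dev/null

# With a trusted certificate, the Docker Registry v2 API endpoint
# responds without TLS errors
curl -v https://repos.pdi.com:5010/v2/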
If it's not a certificate issue, it could be a Docker registry credential issue. If the registry you are using requires authentication, you can specify Docker credentials at install time (assuming the advanced install method); see: https://docs.mesosphere.com/1.11/installing/production/advanced-configuration/configuration-reference/#cluster-docker-credentials

Virtual machine with Vagrant is not accessible from the client the first time

I'm trying to run a Rails project using Nginx with Docker and Vagrant. Everything is OK if I use the Vagrant box ubuntu/trusty64: I provision the VM and everything works. But I wanted to create my own box from ubuntu/trusty64, and this is when all my problems began.
So I created the box using packer and this template:
{
"variables": {
"home": "{{env `HOME`}}"
},
"provisioners": [
{
"type": "shell",
"execute_command": "echo 'vagrant'|sudo -S sh '{{.Path}}'",
"override": {
"virtualbox-ovf": {
"scripts": [
"scripts/docker.sh",
"scripts/ansible.sh",
"scripts/cleanup.sh",
"scripts/zerodisk.sh"
]
}
}
}
],
"post-processors": [
{
"type": "vagrant",
"override": {
"virtualbox": {
"output": "ubuntu-14-04-x64-virtualbox.box"
}
}
}
],
"builders": [
{
"type": "virtualbox-ovf",
"headless": "true",
"boot_wait": "10s",
"source_path": "{{user `home`}}/.vagrant.d/boxes/ubuntu-VAGRANTSLASH-trusty64/14.04/virtualbox/box.ovf",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_port": 22,
"ssh_wait_timeout": "10000s",
"shutdown_command": "echo 'shutdown -P now' > /tmp/shutdown.sh; echo 'vagrant'|sudo -S sh '/tmp/shutdown.sh'",
"vboxmanage": [
[ "modifyvm", "{{.Name}}", "--memory", "512" ],
[ "modifyvm", "{{.Name}}", "--cpus", "1" ]
]
}
]
}
Then I added the box as pedrof/base-box to my Vagrant boxes and used this Vagrantfile to start the VM:
Vagrant.configure(2) do |config|
config.vm.provider 'virtualbox' do |v|
v.memory = 2048
v.cpus = 2
end
config.vm.box = 'pedrof/base-box'
config.vm.synced_folder '.', '/vagrant', type: 'nfs', mount_options: ['nolock,vers=3,udp,noatime,actimeo=1']
config.vm.network :private_network, ip: '172.17.8.100'
config.vm.provision 'shell', path: "docker/build.sh"
config.vm.provision 'shell', path: "docker/init.sh", run: 'always'
end
It starts the VM and brings up the Docker containers using docker-compose. Everything is OK, except that I can't access http://172.17.8.100 from the browser, although ping responds fine from the host. I ran curl to hit Nginx from inside the VM and it responded with the proper index page, but nothing from outside the VM. The weird thing is that everything works fine if I reload the VM using vagrant reload.
Am I building the box incorrectly? Is something missing in the Vagrantfile?
I assume you start a Docker container inside the Vagrant box, the container is a web server, and you want to access the web server with your browser. Then you will need port forwarding to your host machine.
First, your container port must be mapped onto a box port. This is done with Docker's -p parameter, for example -p 8080:8080. Then the port is available inside the box. You say that you can access it from inside the box, so I think this is already configured correctly.
Now forward this port out of the box by adding this to your Vagrantfile:
...
config.vm.network "forwarded_port", guest: 8080, host: 8080
...
Now try to access port 8080 via http://localhost:8080.
If you want to make http://172.17.8.100 work, you will have to map the container port onto port 80 of the box, which needs root access.
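A minimal sketch of that last option, assuming the container serves on port 8080 (the image name myapp is a placeholder):
# Inside the VM: bind the container's port 8080 onto the box's port 80
# (the Docker daemon runs as root, so it can bind the privileged port)
docker run -d -p 80:8080 myapp

# From the host, the private-network IP should now answer on port 80
curl -i http://172.17.8.100/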

Docker's containers communication using Consul

I have read about service discovery for Docker using Consul, but I can't understand it.
Could you explain to me how I can run two Docker containers, discover the host of the second container from the first one using Consul, and send a message to it?
You would need to run a Consul agent in client mode inside each Docker container. Each container will need a Consul service definition file to let the agent know to advertise its service to the Consul servers.
They look like this:
{
"service": {
"name": "redis",
"tags": ["master"],
"address": "127.0.0.1",
"port": 8000,
"checks": [
{
"script": "/usr/local/bin/check_redis.py",
"interval": "10s"
}
]
}
}
You also need a service health check to monitor the health of the service, something like this:
{
"check": {
"id": "redis",
"name": "Redis",
"script": "/usr/local/bin/check_redis_ping_returns_pong.sh",
"interval": "10s"
}
}
In the other Docker container, your code would find the Redis service either via DNS or via the Consul servers' HTTP API (note that Consul's DNS interface listens on port 8600 by default, while the HTTP API is on 8500):
dig @localhost -p 8600 redis.service.consul
curl $CONSUL_SERVER/v1/health/service/redis?passing
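For completeness, a hedged sketch of the agent side (the join address and directories are placeholders, not from the question): each container runs an agent in client mode that joins the Consul servers and loads the service and check definitions from a config directory:
# Inside each container: run a Consul agent in client mode, join the
# cluster, and load the JSON definitions above from /etc/consul.d
consul agent -data-dir=/tmp/consul \
-join=10.0.0.10 \
-config-dir=/etc/consul.d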
