Parse-Dashboard: Server Not Reachable: unable to connect server Error - docker

I have parse-server and parse-dashboard installed with pm2 in a Docker container on my Synology NAS, as shown below:
+------+                                        +--------------+
|      + NIC1 (192.168.1.100) <--> Router <--> (192.168.1.2) NIC1 +
|  PC  |                                        |     DSM      |
|      + NIC2 (10.10.0.100) <--Peer-2-Peer---> (10.10.0.2) NIC2 +
+------+                                        |   [DOCKER]   |
                                                | (172.17.0.2) |
                                                +--------------+
For reference, for the pm2 setup I'm following this tutorial.
Here is my pm2 ecosystem for parse-server and parse-dashboard (defined in pm2's ecosystem.json):
{
  "apps" : [{
    "name" : "parse-wrapper",
    "script" : "/usr/bin/parse-server",
    "watch" : true,
    "merge_logs" : true,
    "cwd" : "/home/parse",
    "args" : "--serverURL http://localhost:1337/parse",
    "env": {
      "PARSE_SERVER_CLOUD_CODE_MAIN": "/home/parse/cloud/main.js",
      "PARSE_SERVER_DATABASE_URI": "mongodb://localhost/test",
      "PARSE_SERVER_APPLICATION_ID": "aeb932b93a9125c19df2fcaf3fd69fcb",
      "PARSE_SERVER_MASTER_KEY": "358898112f354f8db593ea004ee88fed"
    }
  }, {
    "name" : "parse-dashboard-wrapper",
    "script" : "/usr/bin/parse-dashboard",
    "watch" : true,
    "merge_logs" : true,
    "cwd" : "/home/parse/parse-dashboard",
    "args" : "--config /home/parse/parse-dashboard/config.json --allowInsecureHTTP=1"
  }]
}
Here is my parse-dashboard config, /home/parse/parse-dashboard/config.json:
{
  "apps": [{
    "serverURL": "http://172.17.0.2:1337/parse",
    "appId": "aeb932b93a9125c19df2fcaf3fd69fcb",
    "masterKey": "358898112f354f8db593ea004ee88fed",
    "appName": "Parse Server",
    "iconName": "",
    "primaryBackgroundColor": "",
    "secondaryBackgroundColor": ""
  }],
  "users": [{
    "user": "user",
    "pass": "1234"
  }],
  "iconsFolder": "icons"
}
Once I run pm2 start ecosystem.json, here is the parse-server log (pm2 logs 0):
masterKey: ***REDACTED***
serverURL: http://localhost:1337/parse
masterKeyIps: []
logsFolder: ./logs
databaseURI: mongodb://localhost/test
userSensitiveFields: ["email"]
enableAnonymousUsers: true
allowClientClassCreation: true
maxUploadSize: 20mb
customPages: {}
sessionLength: 31536000
expireInactiveSessions: true
revokeSessionOnPasswordReset: true
schemaCacheTTL: 5000
cacheTTL: 5000
cacheMaxSize: 10000
objectIdSize: 10
port: 1337
host: 0.0.0.0
mountPath: /parse
scheduledPush: false
collectionPrefix:
verifyUserEmails: false
preventLoginWithUnverifiedEmail: false
enableSingleSchemaCache: false
jsonLogs: false
verbose: false
level: undefined
[1269] parse-server running on http://localhost:1337/parse
Here is the parse-dashboard log (pm2 logs 1):
The dashboard is now available at http://0.0.0.0:4040/
Once everything is running, I can access 192.168.1.2:1337/parse (which returns {"error":"unauthorized"}), and I can access 192.168.1.2:4040, but the dashboard shows:
Server not reachable: unable to connect to server
I've seen that this issue can often be solved by changing the parse-dashboard config from "serverURL": "http://localhost:1337/parse" to "serverURL": "http://172.17.0.2:1337/parse", but for me it's still no luck...
Any idea what I've been missing here?

So apparently, in my /home/parse/parse-dashboard/config.json:
"serverURL": "http://172.17.0.2:1337/parse"
should instead point to my DSM machine:
"serverURL": "http://192.168.1.2:1337/parse"
This makes sense in hindsight: the dashboard UI runs in the browser on the PC, so serverURL must be an address the browser can reach, and the Docker-internal 172.17.0.2 is only reachable from the DSM host itself.

Related

Creating Managed Policy in CDK errors with MalformedPolicy

When I try to deploy a seemingly simple CDK stack, it fails with a strange error. I don't get this same behavior when I create a different iam.ManagedPolicy in a different file, and that one has a much more complicated policy with several actions, etc. What am I doing wrong?
import aws_cdk.core as core
from aws_cdk import aws_iam as iam
from constructs import Construct
from master_payer import (env, myenv)

class FromStack(core.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # myenv['pma'] = an account ID (12 digits)
        # env = 'dev'
        rolename = f"arn:aws:iam:{myenv['pma']}:role/CrossAccount{env.capitalize()}MpaAdminRole"
        mpname = f"{env.capitalize()}MpaAdminPolicy"
        pol = iam.ManagedPolicy(self, mpname, managed_policy_name=mpname,
            document=iam.PolicyDocument(statements=[
                iam.PolicyStatement(actions=["sts:AssumeRole"], effect=iam.Effect.ALLOW, resources=[rolename])
            ]))
        grp = iam.Group(self, f"{env.capitalize()}MpaAdminGroup", managed_policies=[pol])
The cdk deploy output:
FromStack: deploying...
FromStack: creating CloudFormation changeset...
2:19:52 AM | CREATE_FAILED | AWS::IAM::ManagedPolicy | DevMpaAdminPolicyREDACTED
The policy failed legacy parsing (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: REDACTED-GUID; Proxy: null)
new ManagedPolicy (/tmp/jsii-kernel-EfRyKw/node_modules/@aws-cdk/aws-iam/lib/managed-policy.js:39:26)
\_ /tmp/tmpxl5zxf8k/lib/program.js:8432:58
\_ Kernel._wrapSandboxCode (/tmp/tmpxl5zxf8k/lib/program.js:8860:24)
\_ Kernel._create (/tmp/tmpxl5zxf8k/lib/program.js:8432:34)
\_ Kernel.create (/tmp/tmpxl5zxf8k/lib/program.js:8173:29)
\_ KernelHost.processRequest (/tmp/tmpxl5zxf8k/lib/program.js:9757:36)
\_ KernelHost.run (/tmp/tmpxl5zxf8k/lib/program.js:9720:22)
\_ Immediate._onImmediate (/tmp/tmpxl5zxf8k/lib/program.js:9721:46)
\_ processImmediate (node:internal/timers:464:21)
❌ FromStack failed: Error: The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
at Object.waitForStackDeploy (/usr/local/lib/node_modules/aws-cdk/lib/api/util/cloudformation.ts:307:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at prepareAndExecuteChangeSet (/usr/local/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:351:26)
at CdkToolkit.deploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:194:24)
at initCommandLine (/usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:267:9)
The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
And the cdk synth output, which cfn-lint is happy with (no warnings, errors, or informational violations):
{
  "Resources": {
    "DevMpaAdminPolicyREDACTED": {
      "Type": "AWS::IAM::ManagedPolicy",
      "Properties": {
        "PolicyDocument": {
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Effect": "Allow",
              "Resource": "arn:aws:iam:REDACTED-ACCOUNT-ID:role/CrossAccountDevMpaAdminRole"
            }
          ],
          "Version": "2012-10-17"
        },
        "Description": "",
        "ManagedPolicyName": "DevMpaAdminPolicy",
        "Path": "/"
      },
      "Metadata": {
        "aws:cdk:path": "FromStack/DevMpaAdminPolicy/Resource"
      }
    },
    "DevMpaAdminGroupREDACTED": {
      "Type": "AWS::IAM::Group",
      "Properties": {
        "ManagedPolicyArns": [
          {
            "Ref": "DevMpaAdminPolicyREDACTED"
          }
        ]
      },
      "Metadata": {
        "aws:cdk:path": "FromStack/DevMpaAdminGroup/Resource"
      }
    },
    "CDKMetadata": {
      "Type": "AWS::CDK::Metadata",
      "Properties": {
        "Analytics": "v2:deflate64:REDACTED-B64"
      },
      "Metadata": {
        "aws:cdk:path": "FromStack/CDKMetadata/Default"
      }
    }
  }
}
Environment Specs
$ cdk --version
2.2.0 (build 4f5c27c)
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
$ python --version
Python 3.6.8
$ node --version
v16.8.0
The role ARN in rolename was incorrect; I was missing a colon after iam. So it's iam:: not iam: (the region field of an IAM ARN is empty). I think I copied the single colon from a (wrong) example somewhere on the Internet. Gah...
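For clarity, here is the corrected line (the rest of the stack is unchanged):
# IAM ARNs have an empty region field, hence the double colon after "iam"
rolename = f"arn:aws:iam::{myenv['pma']}:role/CrossAccount{env.capitalize()}MpaAdminRole"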

Cannot provision an actuator in IoT Agent Fiware

I am using the following Python code to create a service group:
import json
import requests

url = 'http://localhost:4041/iot/services'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}
data = {
    "services": [
        {
            "apikey": "456dgffdg56465dfg",
            "cbroker": "http://orion:1026",
            "entity_type": "Door",
            # resource attribute is left blank since HTTP communication is not being used
            "resource": ""
        }
    ]
}
res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("A resource cannot be created because it already exists")
else:
    print(res.raise_for_status())
But when trying to provision an actuator, I get a 400 Bad Request error with the code below:
import json
import requests

url = 'http://localhost:4041/iot/devices'
headers = {'Content-Type': "application/json", 'fiware-service': "openiot", 'fiware-servicepath': "/mtp"}
data = {
    "devices": [
        {
            "device_id": "door003",
            "entity_name": "urn:ngsi-ld:Door:door003",
            "entity_type": "Door",
            "protocol": "PDI-IoTA-UltraLight",
            "transport": "MQTT",
            "commands": [
                {"name": "unlock", "type": "command"},
                {"name": "open", "type": "command"},
                {"name": "close", "type": "command"},
                {"name": "lock", "type": "command"}
            ],
            "attributes": [
                {"object_id": "s", "name": "state", "type": "Text"}
            ]
        }
    ]
}
res = requests.post(url, json=data, headers=headers)
# print(res.status_code)
if res.status_code == 201:
    print("Created")
elif res.status_code == 409:
    print("Entity cannot be created because it already exists")
else:
    print(res.raise_for_status())
Here is the error message I get in the console:
iot-agent | time=2021-02-17T11:39:44.132Z | lvl=DEBUG | corr=16f27639-49c2-4419-a926-2433805dbdb3 | trans=16f27639-49c2-4419-a926-2433805dbdb3 | op=IoTAgentNGSI.GenericMiddlewares | from=n/a | srv=smartdoor | subsrv=/mtp | msg=Error [BAD_REQUEST] handling request: Request error connecting to the Context Broker: 501 | comp=IoTAgent
iot-agent | time=2021-02-17T11:39:44.133Z | lvl=DEBUG | corr=390f5530-f537-4efa-980a-890a44153811 | trans=390f5530-f537-4efa-980a-890a44153811 | op=IoTAgentNGSI.DomainControl | from=n/a | srv=smartdoor | subsrv=/mtp | msg=response-time: 29 | comp=IoTAgent
What is strange is that if I remove the commands from the payload, the device provisioning works fine. Am I doing something wrong while trying to provision an actuator (not a sensor)?
IoT Agent version:
{"libVersion":"2.14.0-next","port":"4041","baseRoot":"/","version":"1.15.0-next"}
Orion version:
{
  "orion" : {
    "version" : "2.2.0",
    "uptime" : "0 d, 0 h, 59 m, 18 s",
    "git_hash" : "5a46a70de9e0b809cce1a1b7295027eea0aa757f",
    "compile_time" : "Thu Feb 21 10:28:42 UTC 2019",
    "compiled_by" : "root",
    "compiled_in" : "442fc4d225cf",
    "release_date" : "Thu Feb 21 10:28:42 UTC 2019",
    "doc" : "https://fiware-orion.rtfd.io/en/2.2.0/"
  }
}
My docker-compose file looks as follows:
iot-agent:
  image: fiware/iotagent-ul:latest
  hostname: iot-agent
  container_name: iot-agent
  restart: unless-stopped
  depends_on:
    - mongo-db
  networks:
    - default
  expose:
    - "4041"
  ports:
    - "4041:4041"
  environment:
    - IOTA_CB_HOST=orion
    - IOTA_CB_PORT=1026
    - IOTA_NORTH_PORT=4041
    - IOTA_REGISTRY_TYPE=mongodb
    - IOTA_LOG_LEVEL=DEBUG
    - IOTA_TIMESTAMP=true
    - IOTA_CB_NGSI_VERSION=v2
    - IOTA_AUTOCAST=true
    - IOTA_MONGO_HOST=mongo-db
    - IOTA_MONGO_PORT=27017
    - IOTA_MONGO_DB=iotagentul
    - IOTA_PROVIDER_URL=http://iot-agent:4041
    - IOTA_MQTT_HOST=mosquitto
    - IOTA_MQTT_PORT=1883
Thanks in advance.
Regards,

Terraform: Passing JSON file as environment variable value with a systemd unit file inside docker container

I am trying to pass JSON in an environment variable of a systemd unit file with Terraform. I am using an external provider named CT to generate Ignition from the YAML configuration.
CT Config:
data "ct_config" "ignition" {
# Reference: https://github.com/poseidon/terraform-provider-ct/
content = data.template_file.bastion_user_data.rendered
strict = true
pretty_print = false
}
Error:
Error: error converting to Ignition: error at line 61, column 17
invalid unit content: unexpected newline encountered while parsing option name
on ../../modules/example/launch-template.tf line 91, in data "ct_config" "ignition":
91: data "ct_config" "ignition" {
Unit File Content:
- name: cw-agent.service
  enabled: true
  contents: |
    [Unit]
    Description=Cloudwatch Agent Service
    Documentation=https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html
    Requires=docker.socket
    After=docker.socket
    [Service]
    TimeoutStartSec=0
    Restart=always
    RestartSec=10s
    Environment=CONFIG=${cw-agent-config}
    ExecStartPre=-/usr/bin/docker kill cloudwatch-agent
    ExecStartPre=-/usr/bin/docker rm cloudwatch-agent
    ExecStartPre=/usr/bin/docker pull amazon/cloudwatch-agent
    ExecStart=/usr/bin/docker run --name cloudwatch-agent \
      --net host \
      --env CW_CONFIG_CONTENT=$CONFIG \
      amazon/cloudwatch-agent
    ExecStop=/usr/bin/docker stop cloudwatch-agent
    [Install]
    WantedBy=multi-user.target
Rendering:
data "template_file" "cw_agent_config" {
template = file("${path.module}/../cw-agent-config.json")
}
cw-agent-config = indent(10, data.template_file.cw_agent_config.rendered)
File Content:
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "metrics": {
    "append_dimensions": {
      "AutoScalingGroupName": "$${aws:AutoScalingGroupName}",
      "ImageId": "$${aws:ImageId}",
      "InstanceId": "$${aws:InstanceId}",
      "InstanceType": "$${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "drop_device": true,
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 120,
        "resources": [
          "/"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 120
      }
    }
  }
}
I need this JSON file to be available as the value of an environment variable named CW_CONFIG_CONTENT inside a Docker container.
This was solved by using the Terraform jsonencode function.
cw-agent-config = jsonencode(data.template_file.cw_agent_config.rendered)
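Why this works, as far as I can tell: indent() preserves the newlines of the rendered JSON, so the Environment= option spans several lines and Ignition rejects it ("unexpected newline encountered while parsing option name"), whereas jsonencode() serializes the rendered document into a single-line, quote-escaped string. A quick way to inspect the encoded value, reusing the question's own data source (illustrative output block, not part of the fix itself):
# Illustrative only: prints the single-line, escaped string that ends up
# in Environment=CONFIG=..., so the unit file stays valid.
output "cw_agent_config_encoded" {
  value = jsonencode(data.template_file.cw_agent_config.rendered)
}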

Not able to run test cases in nightwatch framework on ec2 amazon linux instance through jenkins

While running the test cases through Jenkins on an EC2 instance, I am getting the error message below.
Here's my nightwatch configuration:
{
  "src_folders": ["test"],
  "globals_path": "globals.js",
  "output_folder": "reports",
  "custom_commands_path": "./commands",
  "custom_assertions_path": "./assertions",
  "page_objects_path": "./pages",
  "test_workers": {
    "enabled": false,
    "workers": "auto"
  },
  "selenium": {
    "start_process": true,
    "server_path": "./bin/selenium-server-standalone-4.0.0.jar",
    "log_path": "",
    "port": 4444,
    "cli_args": {
      "webdriver.chrome.driver": "./bin/chromedriver_linux"
    }
  },
  "test_settings": {
    "default": {
      "request_timeout_options": {
        "timeout": 100000
      },
      "videos": {
        "enabled": false,
        "delete_on_pass": false,
        "path": "reports/videos",
        "format": "mp4",
        "resolution": "1280x720",
        "fps": 15,
        "display": ":",
        "pixel_format": "yuv420p",
        "inputFormat": "mjpeg"
      },
      "launch_url": "http://localhost",
      "selenium_port": 4444,
      "selenium_host": "localhost",
      "screenshots": {
        "enabled": false,
        "on_failure": true,
        "on_error": true,
        "path": "./screenshots"
      },
      "end_session_on_fail": true,
      "skip_testcases_on_fail": false,
      "use_xpath": true,
      "globals": {
        "url": "http://ec30-3-100-2-16.us-north-10.compute.amazonws.com:1000/login"
      },
      "desiredCapabilities": {
        "browserName": "chrome",
        "chromeOptions": {
          "w3c": false,
          "args": ["headless", "no-sandbox"]
        },
        "javascriptEnabled": true,
        "acceptSslCerts": true
      }
    }
  }
}
I am getting the below error message in the console:
Login Test Test Suite
==========================
- Connecting to localhost on port 4444...
Connected to localhost on port 4444 (31794ms).
Using: chrome (81.0.4044.129) on Linux platform.
Running: Verify user is able to login
POST /wd/hub/session/2a3ca3b508f6dda4d0933225c41824a4/url - ECONNRESET
Error: socket hang up
at connResetException (internal/errors.js:604:14)
at Socket.socketCloseListener (_http_client.js:400:25)
Error while running .navigateTo() protocol action: An unknown error has occurred.
POST /wd/hub/session/2a3ca3b508f6dda4d0933225c41824a4/elements - ECONNRESET
Error: socket hang up
at connResetException (internal/errors.js:604:14)
at Socket.socketCloseListener (_http_client.js:400:25)
Error while running .locateMultipleElements() protocol action: An unknown error has occurred.
I have installed the Chrome browser (81.0.4044.129) on the EC2 instance, along with the corresponding Linux ChromeDriver.
Selenium server: selenium-server-standalone-4.0.0.jar
Note:
I configured Jenkins on my local machine (macOS) and it works fine there.
Please let me know if you need more information.
I believe the security group attached to the EC2 instance doesn't have ICMP IPv4 inbound rules allowing access from the server running this Nightwatch script. Try adding your Nightwatch server's IP address to the ICMP IPv4 inbound rules of the EC2 server referenced in the URL, or you can even make it publicly accessible. I hope this resolves your issue.
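If it helps, this is roughly what that rule looks like via the AWS CLI; the group ID and source address below are placeholders for your own values:
# allow ICMP IPv4 from the machine running the Nightwatch tests (placeholder values)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol icmp \
  --port -1 \
  --cidr 203.0.113.10/32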

Deploying docker container of Kibana 4 with port-mapping on Mesos/Marathon

I'm using Mesos and Marathon to deploy a container of Kibana 4. The JSON to deploy is:
{
  "id": "/org/products/kibana/webapp",
  "instances": 1,
  "cpus": 1,
  "mem": 768,
  "uris": [],
  "constraints": [
    ["hostname", "UNIQUE"]
  ],
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/",
      "portIndex": 0,
      "initialDelaySeconds": 600,
      "gracePeriodSeconds": 10,
      "intervalSeconds": 30,
      "timeoutSeconds": 120,
      "maxConsecutiveFailures": 10
    }
  ],
  "env": {
    "ES_HOST": "172.23.10.23",
    "ES_PORT": "9200"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myregistry.local.com:5000/org/kibana:4.0.0",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 5601,
          "hostPort": 0,
          "servicePort": 50061,
          "protocol": "tcp"
        }
      ]
    },
    "volumes": [
      {
        "containerPath": "/etc/localtime",
        "hostPath": "/etc/localtime",
        "mode": "RO"
      }
    ]
  }
}
But when I POST it, the Kibana app never wakes up, and the stderr log shows:
I0227 12:22:44.666357 1149 exec.cpp:132] Version: 0.21.1
I0227 12:22:44.669059 1178 exec.cpp:206] Executor registered on slave 20150225-040104-1124079532-5050-952-S0
/kibana/src/index.js:58
throw error;
^
Error: listen EADDRNOTAVAIL
at errnoException (net.js:905:11)
at Server._listen2 (net.js:1024:19)
at listen (net.js:1065:10)
at net.js:1147:9
at asyncCallback (dns.js:68:16)
at Object.onanswer [as oncomplete] (dns.js:121:9)
After that I tried removing the port mapping, because I found some references indicating that it's a port or network configuration problem. With the port mapping gone, my Kibana 4 web app woke up fine, but I need a port mapping in order to access it via HTTP. I have no idea why Marathon has a problem with the network and portMappings config. Some help would be appreciated.
This is a nasty problem, and I encountered it as well (running Kibana 4 on Mesos + Marathon).
The short answer:
If you use current master of the Kibana repository, this won't happen - the relevant code has changed in the 4.1.0 snapshot which is master at the time of writing.
The long answer:
4.0.0 has this chunk of code in src/server/index.js:
var port = parseInt(process.env.PORT, 10) || config.port || 3000;
var host = process.env.HOST || config.host || '127.0.0.1';
Marathon supplies HOST and PORT environment variables by default, and the HOST variable is set to the Mesos slave's hostname. The above code makes Kibana try to bind to the IP address of the Mesos slave (which Marathon has placed in HOST), which will fail, as it's running inside a Docker container.
If you want to run 4.0.0, you can just patch these lines to:
var port = config.port || 3000;
var host = config.host || '127.0.0.1';
The code looks like this in master at the moment.
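An alternative workaround that avoids patching, sketched from the code above (untested; it assumes your user-supplied env entries take precedence over Marathon's injected HOST/PORT, which is worth verifying on your Marathon version): pin HOST and PORT in the app definition so Kibana binds to all interfaces on the expected container port:
"env": {
  "ES_HOST": "172.23.10.23",
  "ES_PORT": "9200",
  "HOST": "0.0.0.0",
  "PORT": "5601"
}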
