I'm working through the Azure IoT Edge Hands-on Lab and running into an issue with the iotedgectl command that I can't seem to crack.
When I run iotedgectl status (or start, or stop), I get the following error message:
File "c:\python27\lib\runpy.py", line 174, in _run_module_as_main
"main", fname, loader, pkg_name)
File "c:\python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\Scripts\iotedgectl.exe__main__.py", line 9, in
File "c:\python27\lib\site-packages\edgectl__init__.py", line 25, in >coremain
return cli.execute_user_command()
File "c:\python27\lib\site-packages\edgectl\edgecli.py", line 54, in >execute_user_command
(is_valid, execute_deployment_cmd) = self._process_cli_args()
File "c:\python27\lib\site-packages\edgectl\edgecli.py", line 358, in >_process_cli_args
return args.func(args)
File "c:\python27\lib\site-packages\edgectl\edgecli.py", line 379, in >_parse_edge_command
if EdgeDefault.is_deployment_supported(self._deployment):
File "c:\python27\lib\site-packages\edgectl\default.py", line 99, in >is_deployment_supported
client = EdgeDockerClient()
File "c:\python27\lib\site-packages\edgectl\dockerclient.py", line 13, in >init
self._client = docker.DockerClient.from_env()
File "c:\python27\lib\site-packages\docker\client.py", line 81, in from_env
**kwargs_from_env(**kwargs))
File "c:\python27\lib\site-packages\docker\client.py", line 38, in init
self.api = APIClient(*args, **kwargs)
File "c:\python27\lib\site-packages\docker\api\client.py", line 131, in >init
'Install pypiwin32 package to enable npipe:// support'
docker.errors.DockerException: Install pypiwin32 package to enable npipe:// >support
I'm running Docker for Windows version 17.12.0 (the recent stable build) with Linux containers.
Here's my config file:
{
  "deployment": {
    "docker": {
      "edgeRuntimeImage": "microsoft/azureiotedge-agent:1.0-preview",
      "loggingOptions": {
        "log-driver": "json-file",
        "log-opts": {
          "max-size": "10m"
        }
      },
      "registries": [],
      "uri": "unix:///var/run/docker.sock"
    },
    "type": "docker"
  },
  "deviceConnectionString": "",
  "homeDir": "C:\\ProgramData\\azure-iot-edge\\data",
  "hostName": "mygateway.local",
  "logLevel": "info",
  "schemaVersion": "1",
  "security": {
    "certificates": {
      "option": "preInstalled",
      "preInstalled": {
        "agentCAPassphraseFilePath": "",
        "deviceCACertificateFilePath": "c:\\edge\\myGateway-public.pem",
        "deviceCAChainCertificateFilePath": "c:\\edge\\myGateway-all.pem",
        "deviceCAPassphraseFilePath": "",
        "deviceCAPrivateKeyFilePath": "c:\\edge\\myGateway-private.pem",
        "forceNoPasswords": false,
        "ownerCACertificateFilePath": "c:\\edge\\RootCA.pem"
      },
      "subject": {
        "commonName": "Edge Device CA",
        "countryCode": "US",
        "locality": "Redmond",
        "organization": "Default Edge Organization",
        "organizationUnit": "Edge Unit",
        "state": "Washington"
      }
    }
  }
}
Edit to my original answer:
Problem Solution:
The "npipe:// broken" problem reported here applies to Windows machines, and it occurs whether Linux or Windows containers are used in Docker.
azure-iot-edge-runtime-ctl 1.0.0rc19 was released recently (Jan 26, 2018) and addresses this issue by ensuring the correct pypiwin32 package is installed.
To get the latest bits execute:
$> pip install -U azure-iot-edge-runtime-ctl
Check installed version:
$> iotedgectl --version
iotedgectl 1.0.0rc19
Summary of the issue:
iotedgectl uses the docker-py library to communicate with Docker when kick-starting and controlling the Edge runtime.
docker-py uses pypiwin32 to communicate with Docker over named pipes (npipe). With the release of v222 of pypiwin32, docker-py is broken on Windows hosts.
As a quick workaround until the official fixes can be incorporated, a new iotedgectl was released which ensures that correctly working dependencies get pulled in.
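If you want to check whether that dependency chain is healthy on your machine, here is a minimal sketch (assuming Docker for Windows is running; npipe:////./pipe/docker_engine is docker-py's standard Windows engine address) that exercises the same npipe:// path iotedgectl uses:

import docker

# Connect to the default Docker engine named pipe on Windows.
# This raises the same DockerException if npipe support is broken.
client = docker.DockerClient(base_url='npipe:////./pipe/docker_engine')
print(client.version()['ApiVersion'])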
Original Answer
To operate on Windows, pypiwin32 is required even if you are using Linux-based containers, because the Docker client expects it to be present on Windows hosts.
Version 222 of pypiwin32, released today (01/25/2018), appears to be causing the problem you are reporting. See https://pypi.python.org/pypi/pypiwin32.
On my Windows 10 host, pypiwin32 version 219 works just fine. I would suggest installing the versions below and retrying.
For Python 2.7.14
pip install pypiwin32==219
For Python 3.6.4
pip install pypiwin32==220
Related
I am trying to run a simple Python script within a docker run command scheduled with Airflow.
I have followed the instructions here: Airflow init.
My .env file:
AIRFLOW_UID=1000
AIRFLOW_GID=0
My docker-compose.yaml is based on the default docker-compose.yaml. I had to add - /var/run/docker.sock:/var/run/docker.sock as an additional volume to run Docker inside of Docker.
My DAG is configured as follows:
""" this is an example dag """
from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago
from docker.types import Mount
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['info#foo.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 10,
'retry_delay': timedelta(minutes=5),
}
with DAG(
'msg_europe_etl',
default_args=default_args,
description='Process MSG_EUROPE ETL',
schedule_interval=timedelta(minutes=15),
start_date=days_ago(0),
tags=['satellite_data'],
) as dag:
download_and_store = DockerOperator(
task_id='download_and_store',
image='satellite_image:latest',
auto_remove=True,
api_version='1.41',
mounts=[Mount(source='/home/archive_1/archive/satellite_data',
target='/app/data'),
Mount(source='/home/dlassahn/projects/forecast-system/meteoIntelligence-satellite',
target='/app')],
command="python3 src/scripts.py download_satellite_images "
"{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
"'msg_europe' ",
)
download_and_store
The Airflow log:
[2021-08-03 17:23:58,691] {docker.py:231} INFO - Starting docker container from image satellite_image:latest
[2021-08-03 17:23:58,702] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 258, in _run_image
tty=self.tty,
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp037k87u6")
Trying to set mount_tmp_dir=False leads to a DAG ImportError because of the unknown keyword argument mount_tmp_dir. (This might be an issue for the documentation.)
Regardless, I do not know how to configure the tmp directory correctly.
My Airflow Version: 2.1.2
There was a bug in Docker Provider 2.0.0 which prevented the DockerOperator from running with a docker-in-docker solution.
You need to upgrade to the latest Docker Provider, 2.1.0:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html#id1
You can do it by extending the image as described in https://airflow.apache.org/docs/docker-stack/build.html#extending-the-image with, for example, this Dockerfile:
FROM apache/airflow
RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
The operator will work out of the box in this case in "fallback" mode (with a warning message), but you can also disable the mount that causes the problem. More explanation from https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html:
By default, a temporary directory is created on the host and mounted into the container to allow storing files that together exceed the default disk size of 10GB in a container. In this case, the path to the mounted directory can be accessed via the environment variable AIRFLOW_TMP_DIR.
If the volume cannot be mounted, a warning is printed and an attempt is made to execute the docker command without the temporary folder mounted. This makes it work by default with a remote Docker engine, or when you run a docker-in-docker solution and the temporary directory is not shared with the Docker engine. The warning is printed in the logs in this case.
If you know you run DockerOperator with a remote engine or via docker-in-docker, you should set the mount_tmp_dir parameter to False. In this case, you can still use the mounts parameter to mount already-existing named volumes in your Docker engine, achieving a similar capability of storing files that exceed the default disk size of the container.
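For illustration, a minimal sketch of the relevant operator arguments under Docker Provider 2.1.0+, placed inside your existing with DAG(...) block (the named volume satellite_data is hypothetical; you would create it beforehand with docker volume create):

from airflow.providers.docker.operators.docker import DockerOperator
from docker.types import Mount

download_and_store = DockerOperator(
    task_id='download_and_store',
    image='satellite_image:latest',
    auto_remove=True,
    mount_tmp_dir=False,  # skip the host tmp-dir mount; needed for docker-in-docker
    # 'satellite_data' is a hypothetical named volume (docker volume create satellite_data)
    mounts=[Mount(source='satellite_data', target='/app/data', type='volume')],
    command="python3 src/scripts.py download_satellite_images",
)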
I had the same issue, and all the "recommended" ways of solving it here, including setting up the mount_tmp_dir parameters as described, just led to other errors. The one solution that helped me was wrapping the code invoked by Docker in a VPN connection (this hack was actually taken from another Docker-powered DAG that used a VPN and worked well).
So the final solution looks like:
#!/bin/bash
# Bring the VPN up in the background, give it time to connect,
# run the actual job, then give things time to settle and tear the VPN down.
connect_to_vpn.sh &
sleep 10
python3 my_func.py
sleep 10
stop_vpn.sh
wait -n
exit $?
To connect to the VPN I used openconnect. The tool can be installed with apt install and supports the AnyConnect protocol (which was my crucial requirement).
I followed the Windows host instructions described at https://coral.ai/docs/dev-board/get-started/
When I run the command mdt devices, I always get the following exception:
$ mdt devices
Traceback (most recent call last):
  File "c:\users\ram\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\ram\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\Scripts\mdt.exe\__main__.py", line 7, in <module>
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\mdt\main.py", line 162, in main
    exit(command.run(sys.argv[1:]))
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\mdt\devices.py", line 43, in run
    self.discoverer.discover()
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\mdt\discoverer.py", line 40, in discover
    self.zeroconf = Zeroconf()
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\zeroconf\__init__.py", line 2508, in __init__
    self._listen_socket, self._respond_sockets = create_sockets(
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\zeroconf\__init__.py", line 2343, in create_sockets
    respond_socket = new_respond_socket(i, apple_p2p=apple_p2p)
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\zeroconf\__init__.py", line 2302, in new_respond_socket
    respond_socket = new_socket(
  File "C:\Users\Ram\AppData\Roaming\Python\Python38\site-packages\zeroconf\__init__.py", line 2250, in new_socket
    s.bind((bind_addr[0], port, *bind_addr[1:]))
OSError: [WinError 10049] The requested address is not valid in its context
Versions of my setup:
$ python --version
Python 3.8.5
$ mdt version
MDT version 1.5.2
Can you please help me to resolve this issue?
The same thing happened to me. Skipping some IP addresses solved the issue. Try this code:
import logging

import ifaddr
import zeroconf

for adapter in ifaddr.get_adapters():
    print("IPs of network adapter", adapter.nice_name)
    for ip in adapter.ips:
        print("  %s/%s" % (ip.ip, ip.network_prefix))

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("zeroconf").setLevel(level=logging.DEBUG)
z = zeroconf.Zeroconf()
From the log, find
INFO:zeroconf:Address not available when adding XXX.XXX.XX.XX to multicast group, it is expected to happen on some systems
Along with the zeroconf log, there should be a list of IP addresses and names. Find the name of the hardware that is causing the problem.
Now, open zeroconf/__init__.py, go to get_all_addresses(), and replace the code with the below:
adapters = [x for x in ifaddr.get_adapters()
            if not "Microsoft Wi-Fi Direct Virtual Adapter" in x.nice_name
            and not "Bluetooth Device" in x.nice_name]
return list(set(addr.ip for iface in adapters for addr in iface.ips if addr.is_IPv4))
Please replace "Microsoft Wi-Fi Direct Virtual Adapter" and "Bluetooth Device" with the appropriate names.
This is not a neat way to do it, so I'd be open to better answers.
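A possibly cleaner alternative that avoids patching the library (a sketch, assuming your python-zeroconf version accepts an explicit list of addresses via the interfaces argument): collect only the IPv4 addresses of the adapters you want and bind zeroconf to those, so the problematic interfaces are never touched.

import ifaddr
import zeroconf

# Adapter names to skip -- substitute the names you found in the debug log.
SKIP = ("Microsoft Wi-Fi Direct Virtual Adapter", "Bluetooth Device")

ips = [addr.ip
       for adapter in ifaddr.get_adapters()
       if not any(name in adapter.nice_name for name in SKIP)
       for addr in adapter.ips
       if addr.is_IPv4]

# Bind only to the remaining addresses instead of all interfaces.
z = zeroconf.Zeroconf(interfaces=ips)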
When using 'highcharts-export-server' as a local service module in Node.js to render and export images, I got these error messages after 7 hours:
Tue Mar 19 2019 13:31:33 GMT+0800 (China Standard Time) [error]
phantom worker 631 error - worker.js resource error - {
  "errorCode": 3,
  "errorString": "Host cdnjs.cloudflare.com not found",
  "id": 2,
  "status": null,
  "statusText": null,
  "url": "https://cdnjs.cloudflare.com/ajax/libs/moment-timezone/0.5.13/moment-timezone-with-data-2012-2022.min.js"
}
Tue Mar 19 2019 13:31:33 GMT+0800 (China Standard Time) [error]
phantom worker 631 error - worker.js resource error - {
  "errorCode": 3,
  "errorString": "Host cdnjs.cloudflare.com not found",
  "id": 1,
  "status": null,
  "statusText": null,
  "url": "https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.18.1/moment.min.js"
}
I think the phantom worker was trying to download JS files from an external server and failing, so I want to save these JS files to local disk so the phantom worker doesn't have to download them each time.
How can I solve this problem?
Thanks a lot!
Since version 2.0.9 of highcharts-export-server, you can enable moment.js by running npm install interactively, or by setting the environment variable HIGHCHARTS_MOMENT to 1 (for example, HIGHCHARTS_MOMENT=1 npm install highcharts-export-server).
Related issue: https://github.com/highcharts/node-export-server/issues/119
Never mind, please: I found these scripts, changed the URLs to local files, and the problem was solved.
Sorry for the disturbance.
I'm new to Google Cloud and I'm trying to do my first deploy to it. My first deploy is a Ruby on Rails project.
I'm basically following this guide in the google cloud documentation. The only difference being that I'm using my own project instead of the 'hello world' project they supply.
This is my app.yaml file
runtime: custom
vm: true
entrypoint: bundle exec rackup -p 8080 -E production config.ru
resources:
cpu: 0.5
memory_gb: 1.3
disk_size_gb: 10
When I go to my project directory and run gcloud preview app deploy it starts the deploy but appears to eventually time out. It gives the error (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED.
Doing some research, I found that running gcloud preview app deploy with --verbosity debug gives extra debug info, but it doesn't help me find what's causing the timeout.
Here is the last chunk of the console log.
Bundle complete! 35 Gemfile dependencies, 102 gems now installed.
Bundled gems are installed into ./vendor/bundle.
Post-install message from rdoc:
Depending on your version of ruby, you may need to install ruby rdoc/ri data:
<= 1.8.6 : unsupported
= 1.8.7 : gem install rdoc-data; rdoc-data --install
= 1.9.1 : gem install rdoc-data; rdoc-data --install
>= 1.9.2 : nothing to do! Yay!
Post-install message from compass:
Compass is charityware. If you love it, please donate on our behalf at http://umdf.org/compass Thanks!
DEBUG: Operation [operations/build/guidir-1286/MmFkZjNmOGYtZDhhZi00NTJmLTk0YWEtMmQzMjBmM2JkOTg2OlVT] complete. Result: {
  "metadata": {
    "@type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
    "build": {
      "finishTime": "2016-04-20T01:55:44.961635Z",
      "status": "TIMEOUT",
      "timeout": "600.000s",
      "projectId": "guidir-1286",
      "id": "2adf3f8f-d8af-452f-94aa-2d320f3bd986",
      "source": {
        "storageSource": {
          "object": "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest",
          "bucket": "staging.guidir-1286.appspot.com"
        }
      },
      "steps": [
        {
          "args": [
            "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest"
          ],
          "name": "gcr.io/cloud-builders/dockerizer"
        }
      ],
      "startTime": "2016-04-20T01:45:43.216420Z",
      "logsBucket": "staging.guidir-1286.appspot.com",
      "images": [
        "us.gcr.io/guidir-1286/appengine/default.20160420t110030:latest"
      ],
      "createTime": "2016-04-20T01:45:41.861657Z"
    }
  },
  "done": true,
  "name": "operations/build/guidir-1286/MmFkZjNmOGYtZDhhZi00NTJmLTk0YWEtMmQzMjBmM2JkOTg2OlVT",
  "error": {
    "message": "DEADLINE_EXCEEDED",
    "code": 4
  }
}
DEBUG: (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED
Traceback (most recent call last):
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 654, in Execute
result = args.cmd_func(cli=self, args=args)
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 1401, in Run
resources = command_instance.Run(args)
File "/Users/Robert/google-cloud-sdk/lib/surface/preview/app/deploy.py", line 507, in Run
config_cleanup)
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 195, in BuildAndPushDockerImages
storage_client)
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/deploy_command_util.py", line 245, in _BuildImagesWithCloudBuild
image.tag, cloudbuild_client)
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/cloud_build.py", line 181, in ExecuteCloudBuild
retry_callback=log_tailer.Poll)
File "/Users/Robert/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/api/operations.py", line 69, in WaitForOperation
encoding.MessageToPyValue(completed_operation.error)))
OperationError: Error Response: [4] DEADLINE_EXCEEDED
ERROR: (gcloud.preview.app.deploy) Error Response: [4] DEADLINE_EXCEEDED
This is the furthest it's gone, but sometimes it's midway through installing the gems before it times out, and other times it doesn't even get up to installing the gems.
How can I stop this from occurring?
There's a 10 minute default timeout for Docker builds (the mechanism by which runtime: custom App Engine builds work). You can increase this by running gcloud config set app/cloud_build_timeout [NUMBER OF SECONDS].
You could also work around it by performing the build yourself:
docker build . -t gcr.io/myapp/myimage
gcloud docker push gcr.io/myapp/myimage
gcloud preview app deploy app.yaml --image-url=gcr.io/myapp/myimage
However, in general, your Docker builds shouldn't be taking this long. It's usually better to have a base image with all of your dependencies already built in, and just have the final build derive from that image and install your app. This way, your builds will be much quicker.
This error is raised when the overall request times out.
The execution limit is 10 minutes for task queue requests.
You can increase this timeout limit in the Google Cloud Shell by typing gcloud config set app/cloud_build_timeout [TIMEOUT_SECONDS].
Example
gcloud config set app/cloud_build_timeout 1000
I am trying to deploy my Cordova app to my iPhone using Visual Studio 2015 RC via Remote Agent > Local Device. I have successfully installed and run the remote agent and connected it to my Visual Studio. According to this link, when I run Local Device, iTunes should pop up on my Windows PC and install the app to my iPhone:
https://msdn.microsoft.com/en-us/library/dn757056.aspx
New build request submitted:
/build/tasks?command=build&vcordova=4.3.0&cfg=debug&options=--device
{ 'accept-language': 'en-US',
host: '192.168.0.9:3000',
connection: 'keep-alive',
'transfer-encoding': 'chunked' }
New build request submitted for cordovaVersion: 4.3.0; buildCommand: build; configuration: debug
Build will be executed under: /Users/JP/remote-builds/builds/3563
Saving build request payload to : /Users/JP/remote-builds/builds/3563
Saved upload to /Users/JP/remote-builds/builds/3563/upload_3563.tgz
Extracting /Users/JP/remote-builds/builds/3563/upload_3563.tgz to /Users/JP/remote-builds/builds/3563/cordovaApp...
POST /build/tasks?command=build&vcordova=4.3.0&cfg=debug&options=--device 202 5123ms - 487b
GET /build/tasks/3563 200 0ms - 487b
Extracted app contents from uploaded build request to /Users/JP/remote-builds/builds/3563/cordovaApp. Requesting build.
Taking 3563 as current build
Building cordova app CordovaApp2 at appDir /Users/JP/remote-builds/builds/3563/cordovaApp
Opened build log file /Users/JP/remote-builds/builds/3563/build.log
Done building 3563 : error CordovaModuleLoadError [ '4.3.0' ]
Done with currentBuild. Checking for next build in queue.
GET /build/tasks/3563 200 4ms - 142.31kb
GET /build/tasks/3563/log 200 1ms
Additionally, from Visual Studio I get the error:
EACCES, open '/Users/JP/.npm/_locks/cordova-46ce3f50013cb5f4.lock' CordovaApp2 C:\Users\J\Documents\Visual Studio 2015\Projects\CordovaApp2\CordovaApp2\MDAVSCLI 1
Remote build error from the build server undefined: {1} CordovaApp2 C:\Users\J\Documents\Visual Studio 2015\Projects\CordovaApp2\CordovaApp2\MDAVSCLI 1
When running the remote test:
$ vs-mda-remote test --device
Initializing self test for https
downloading cert for pin 360583
Downloading client cert for selftest from https://Jamess-Mac-mini.local:3000/certs/360583 to /Users/JP/remote-builds/selftest/selftest-client.pfx
pfxPath: /Users/JP/remote-builds/selftest/selftest-client.pfx
serverUrl: https://Jamess-Mac-mini.local:3000
buildUrl: https://Jamess-Mac-mini.local:3000/build/tasks?vcordova=4.3.0&cfg=release&command=build&options=--device
Response statusCode: 202
{ 'x-powered-by': 'Express',
'content-type': 'application/json',
'content-location': 'https://jamess-mac-mini.local:3000/build/tasks/3570',
'content-length': '489',
date: 'Tue, 09 Jun 2015 22:05:58 GMT',
connection: 'close' }
Response: {
"buildNumber": 3570,
"status": "uploaded",
"cordovaVersion": "4.3.0",
"buildCommand": "build",
"configuration": "release",
"options": "--device",
"buildDir": "/Users/JP/remote-builds/builds/3570",
"serverDir": "/Users/JP/remote-builds",
"submissionTime": "2015-06-09T22:05:58.691Z",
"changeList": null,
"tgzFilePath": "/Users/JP/remote-builds/builds/3570/upload_3570.tgz",
"statusTime": "2015-06-09T22:05:58.763Z",
"message": "Uploaded build request payload."
}
buildingUrl: https://Jamess-Mac-mini.local:3000/build/tasks?vcordova=4.3.0&cfg=release&command=build&options=--device
[1] Response: {
"buildNumber": 3570,
"status": "error",
"cordovaVersion": "4.3.0",
"buildCommand": "build",
"configuration": "release",
"options": "--device",
"buildDir": "/Users/JP/remote-builds/builds/3570",
"serverDir": "/Users/JP/remote-builds",
"submissionTime": "2015-06-09T22:05:58.691Z",
"changeList": null,
"tgzFilePath": "/Users/JP/remote-builds/builds/3570/upload_3570.tgz",
"messageId": "CordovaModuleLoadError",
"statusTime": "2015-06-09T22:05:59.752Z",
"appDir": "/Users/JP/remote-builds/builds/3570/cordovaApp",
"appName": "HelloCordova",
"messageArgs": [
"4.3.0"
]
}
You are likely encountering the following known issue. Basically, there's a file somewhere in your npm cache that was added while running as an administrator (sudo). As a result, vs-mda-remote cannot access it. The commands below resolve that issue (and in fact this is what recent versions of npm do by default).
iOS Build Related Known Issues
After installing the latest version of vs-mda-remote package, you may need to run the following commands before you start up the remote agent. These commands ensure your user has permissions to the contents of the npm package cache in your home directory when using older versions of Node.js and npm. Newer versions of Node.js and npm will do this for you automatically.
sudo npm cache clear
sudo chown -R `whoami` ~/.npm