My environment is as follows:
MacBook Pro (13-inch, 2019, Four Thunderbolt 3 ports)
2.8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Intel Iris Plus Graphics 655 1536 MB
Docker: 19.03.12
Druid: 0.19.0
Although I followed the official instructions, I could not build or run Druid locally.
About this: https://github.com/apache/druid/tree/master/distribution/docker
I ran the following commands:
git clone https://github.com/apache/druid.git
docker build -t apache/druid:tag -f distribution/docker/Dockerfile .
However, the build never proceeds past this step:
Sending build context to Docker daemon 78.19MB
Step 1/18 : FROM maven:3-jdk-8-slim as builder
---> addee4586ff4
Step 2/18 : RUN export DEBIAN_FRONTEND=noninteractive && apt-get -qq update && apt-get -qq -y install --no-install-recommends python3 python3-yaml
---> Using cache
---> cdb74d0f6b3d
Step 3/18 : COPY . /src
---> 60d35cb6c0ce
Step 4/18 : WORKDIR /src
---> Running in 73dfa666a186
Removing intermediate container 73dfa666a186
---> 4839bf923b21
Step 5/18 : RUN mvn -B -ff -q dependency:go-offline install -Pdist,bundle-contrib-exts -Pskip-static-checks,skip-tests -Dmaven.javadoc.skip=true
---> Running in 1c9d4aa3d4e8
Additionally, I followed the same instructions and ran docker-compose -f distribution/docker/docker-compose.yml up, but it also failed, with the error below.
coordinator | 2020-08-06T08:41:24,295 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorRuleRunner - Uh... I have no servers. Not assigning anything...
About this: https://hub.docker.com/r/apache/druid/tags
I ran the following commands:
docker pull apache/druid:0.19.0
docker run apache/druid:0.19.0
The container starts and seems to work, producing this output:
2020-08-06T07:50:22+0000 startup service
Setting 172.17.0.2= in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:50:24,024 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:50:24,988 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:50:25,004 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:50:25,006 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command.
However, even if I add an argument such as version, it does not work:
❯ docker run apache/druid:0.19.0 version
2020-08-06T07:51:30+0000 startup service version
Setting druid.host=172.17.0.2 in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:51:32,517 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:51:33,503 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:51:33,524 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:51:33,526 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
ERROR!!!!
Found unexpected parameters: [version]
===
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command
So I see a few things here:
docker run apache/druid:0.19.0 is "fire and forget": if the container does not run an endless (long-lived) service, it will shut down shortly after starting.
To interact with the container, start it with the -it flags.
To let it run without interaction, start it with the -d flag for detached mode.
You can find information about this here: https://docs.docker.com/engine/reference/run/
You also have to check the start command.
Whatever you write after the image name in the run command is treated as the start command (in your case "version"), as if you had typed it into a shell running inside the container.
In addition, if you DON'T add a startup command, the image's Dockerfile may define a default one.
You can see the Dockerfile of your selected image on Docker Hub, for example here:
https://hub.docker.com/layers/apache/druid/0.19.0/images/sha256-eb2a4852b4ad1d3ca86cbf4c9dc7ed9b73c767815f187eb238d2b80ca26dfd9a?context=explore
There you can see that the start command, which in a Dockerfile is called the ENTRYPOINT, is a shell script:
ENTRYPOINT ["/druid.sh"]
So the "version" you write after the run command is appended to that entrypoint as an argument to /druid.sh, which is why the container does not behave the way you expect - we should not do that :)
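To see how Docker combines an ENTRYPOINT with the arguments after the image name, here is a minimal sketch that does not need Docker at all: it uses a stand-in script (a made-up /tmp/fake-druid.sh, not the real /druid.sh) to show that the trailing argument lands in the entrypoint, exactly as "version" was handed to /druid.sh above.

```shell
# Stand-in for the image's /druid.sh entrypoint. With ENTRYPOINT ["/druid.sh"],
# `docker run image version` effectively executes: /druid.sh version
cat > /tmp/fake-druid.sh <<'EOF'
#!/bin/sh
# The real /druid.sh dispatches on its first argument; here we only show
# that the argument reaches the entrypoint.
echo "entrypoint received: $@"
EOF
chmod +x /tmp/fake-druid.sh

# Equivalent of `docker run apache/druid:0.19.0 version`:
/tmp/fake-druid.sh version
# prints: entrypoint received: version
```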
I'm trying to run jenkins-jobs update for the first time on my system, but it fails on authentication.
Command:
jenkins-jobs --conf ./jjb.ini update jobs/
Where jobs contains a test.yml - a miniature build project just for testing. jjb.ini is:
[jenkins]
user=admin
password={{ admin_api_token }} # Inserted API token here.
url=http://127.0.0.1:8080
query_plugins_info=False
Expected result:
Success, and import of the example build project into Jenkins.
Actual result:
INFO:jenkins_jobs.cli.subcommand.update:Updating jobs in ['jobs/'] ([])
INFO:jenkins_jobs.builder:Number of jobs generated: 1
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 127.0.0.1
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/jenkins/__init__.py", line 557, in jenkins_request
self._request(req))
File "/usr/local/lib/python3.5/dist-packages/jenkins/__init__.py", line 508, in _response_handler
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 840, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Invalid password/token for user: b'admin' for url: http://127.0.0.1:8080/crumbIssuer/api/json
What catches my eye here is that the authentication fails for b'admin', not for admin. This is also reflected on the "People" page of the Jenkins web interface, which showed only admin before the attempted login, but after the attempted login also shows a b'admin' user.
From what I've been able to figure out, there may be a problem with encoding in the login request from JJB, but I'm looking for help when it comes to how to go about trying to fix this.
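As a hedged illustration of the suspected encoding problem (this is a sketch, not JJB's actual code): if the username is passed as bytes where a str is expected, Python's repr-style formatting keeps the b'' prefix, which matches the b'admin' in the 401 message exactly.

```python
# Sketch of the suspected str-vs-bytes mismatch (not JJB's actual code):
# a username encoded to bytes keeps the b'' prefix under %r formatting.
user = "admin"
user_as_bytes = user.encode("utf-8")

message = "Invalid password/token for user: %r" % user_as_bytes
print(message)  # Invalid password/token for user: b'admin'

# Decoding back to str removes the prefix:
print("Invalid password/token for user: %r" % user_as_bytes.decode("utf-8"))
# Invalid password/token for user: 'admin'
```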
Current setup:
Ubuntu 16.04.4 LTS
Jenkins 2.125 (working as expected, at :8080)
jenkins-job-builder 2.0.9
Python 3.5.2
pip 10.0.1 from /usr/local/lib/python3.5/dist-packages/pip-10.0.1-py3.5.egg/pip (python 3.5)
java -version: openjdk version "1.8.0_171"
It works with requests==2.19.1
sudo pip uninstall requests
sudo pip install requests
$ pip freeze | grep requests
requests==2.19.1
The following steps resolved the issue in my case:
pip3.7 freeze | grep requests
pip3.7 uninstall requests
pip3.7 install requests
pip3.7 freeze | grep requests
I'm trying to build a VM for model training in Azure. I found this Data Science Virtual Machine for Linux (Ubuntu) VM which seems to be a suitable candidate.
Unfortunately, when I spun up the VM and installed the caffe prerequisites I wasn't able to run the tests. I'm getting the following error on make runtest (make all and make test were completed without errors):
NVIDIA: no NVIDIA devices found
Cuda number of devices: 0
Setting to use device 0
Current device id: 0
Current device name:
Note: Randomizing tests' orders with a seed of 97204 .
[==========] Running 2041 tests from 267 test cases.
[----------] Global test environment set-up.
[----------] 11 tests from AdaDeltaSolverTest/3, where TypeParam = caffe::GPUDevice<double>
[ RUN ] AdaDeltaSolverTest/3.TestAdaDeltaLeastSquaresUpdateWithHalfMomentum
NVIDIA: no NVIDIA devices found
E0715 02:24:32.097311 59355 common.cpp:114] Cannot create Cublas handle. Cublas won't be available.
NVIDIA: no NVIDIA devices found
E0715 02:24:32.103780 59355 common.cpp:121] Cannot create Curand generator. Curand won't be available.
F0715 02:24:32.103914 59355 test_gradient_based_solver.cpp:80] Check failed: error == cudaSuccess (30 vs. 0) unknown error
*** Check failure stack trace: ***
# 0x7f77a463f5cd google::LogMessage::Fail()
# 0x7f77a4641433 google::LogMessage::SendToLog()
# 0x7f77a463f15b google::LogMessage::Flush()
# 0x7f77a4641e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7115e3 caffe::GradientBasedSolverTest<>::TestLeastSquaresUpdate()
# 0x7122af caffe::AdaDeltaSolverTest_TestAdaDeltaLeastSquaresUpdateWithHalfMomentum_Test<>::TestBody()
# 0x8e6023 testing::internal::HandleExceptionsInMethodIfSupported<>()
# 0x8df63a testing::Test::Run()
# 0x8df788 testing::TestInfo::Run()
# 0x8df865 testing::TestCase::Run()
# 0x8e0b3f testing::internal::UnitTestImpl::RunAllTests()
# 0x8e0e63 testing::UnitTest::Run()
# 0x466ecd main
# 0x7f77a111c830 __libc_start_main
# 0x46e589 _start
# (nil) (unknown)
Makefile:532: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
Is it possible to spin up a virtual machine in Azure suitable for GPU enabled machine learning using caffe?
All the details about the VM are here.
The Data Science Virtual Machine (DSVM) for Ubuntu already has Caffe installed in /opt/caffe. To use it on a GPU, create a VM with a K80 GPU by choosing one of the NC sizes. (Be sure to choose HDD as the storage type, or the NC sizes will not appear.) Caffe will then be available out of the box.
Also note that PyCaffe is available. At a terminal:
source activate root
And python will then have PyCaffe available.
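If you are unsure whether the VM size you picked actually exposes a GPU (the "NVIDIA: no NVIDIA devices found" message above means it does not), a quick hedged check is to look for the NVIDIA device nodes before running the GPU tests. This is only a sketch; nvidia-smi, where installed, gives more detail.

```shell
# Look for NVIDIA device nodes; on a non-GPU VM size (like the one that
# produced the errors above) none will exist.
if ls /dev/nvidia* >/dev/null 2>&1; then
  echo "NVIDIA GPU device nodes found"
else
  echo "no NVIDIA GPU devices: pick an NC size or use a CPU-only Caffe build"
fi
```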
I'm in big trouble: I can't start Elasticsearch, and I need it to run my Rails app locally. Please tell me what's going on. I installed Elasticsearch in the normal fashion, then ran the following:
elasticsearch --config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
But it shows the following error:
[2015-11-01 20:36:50,574][INFO ][bootstrap] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
I tried several alternative ways of running it, such as:
elasticsearch -f -D
But then I get the following error, and I can't find anything useful to solve it. It seems to be related to file permissions, but I'm not sure:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader#33909752
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:51)
at org.elasticsearch.node.Node.<init>(Node.java:135)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[2015-11-01 20:40:57,602][INFO ][node ] [Centurius] version[2.0.0], pid[22063], build[de54438/2015-10-22T08:09:48Z]
[2015-11-01 20:40:57,605][INFO ][node ] [Centurius] initializing ...
Exception in thread "main" java.lang.IllegalStateException: failed to load bundle [] due to jar hell
Likely root cause: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/local/Cellar/elasticsearch/2.0.0/libexec/antlr-runtime-3.5.jar" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.util.zip.ZipFile.<init>(ZipFile.java:210)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:173)
at org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:340)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:113)
at org.elasticsearch.node.Node.<init>(Node.java:144)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Thanks for your help.
There are some changes to libexec in the Elasticsearch Homebrew installation, which is why it fails to start. There is a PR #45644 currently being worked on. Until the PR is accepted, you can use the formula from the PR to fix the Elasticsearch installation.
First uninstall the earlier/older version. Then edit the formula of Elasticsearch:
$ brew edit elasticsearch
And use the formula from the PR.
Then run brew install elasticsearch; it should work fine.
To start Elasticsearch, just do:
$ elasticsearch
The config option is no longer valid. For a custom config location, use path.conf:
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config
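Since es.config is gone, elasticsearch.yml has to live inside the directory passed via --path.conf. A minimal sketch of such a file follows; the cluster and node names are placeholders, not values from the original setup.

```yaml
# /usr/local/opt/elasticsearch/config/elasticsearch.yml
# Placeholder values -- adjust for your setup.
cluster.name: my-local-cluster
node.name: dev-node-1
network.host: 127.0.0.1
http.port: 9200
```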
I just built Node.js and installed 0.10.6, then uninstalled yo + grunt-cli + bower + generator-webapp and reinstalled the latest versions using npm -g, then ran:
yo webapp
But now live reload doesn't work, and I can't see any errors in Chrome DevTools either.
yo -v: 1.0beta5
grunt-cli v0.1.8 and grunt v0.4.1
bower -v: 0.9.2
node -v: 0.10.6
npm -v: 1.2.18
grunt server shows the watch task: the time plus the name of the file that changed.
Tried: changing the port number in the Gruntfile to LIVERELOAD_PORT = 34729; no go :(
My older webapp projects still work fine.
Lost..
Thanks
--------------------- UPDATE
1. moved lrSnippet to 1st position in Gruntfile.js
2. in index.js moved
end.call(res, res.data, encoding);
outside the if Block
Now it works partially. Summary:
1. changes to index.html > reloads ok
2. changes to main.scss > reloads ok
3. changes to my.sass > Not OK
after 3rd step
1. changes to index.html > Not OK
2. changes to main.scss > Not OK
4. changes to hello.coffee > Not OK
After step 4
1. changes to index.html > ok
2. changes to main.scss > ok
//------------------------------------- index.html
Changes to index.html
reload ok
grunt server window logs change and issues reload command
grunt server window grab =
Running "watch" task
Waiting...OK
>> File "app/index.html" changed.
Running "watch" task
... Reload app/index.html ...
Completed in 0.002s at Sat May 18 2013 12:47:58 GMT+0530 (IST) - Waiting...
//------------------------------------- main.scss
Changes to main.scss
reload ok
grunt server window grab =
>> File "app/styles/main.scss" changed.
Running "compass:server" (compass) task
overwrite .tmp/styles/main.css
unchanged app/styles/my.sass
Running "watch" task
Completed in 1.906s at Sat May 18 2013 12:48:24 GMT+0530 (IST) - Waiting...
OK
>> File ".tmp/styles/main.css" changed.
Running "watch" task
... Reload .tmp/styles/main.css ...
Completed in 0.002s at Sat May 18 2013 12:48:24 GMT+0530 (IST) - Waiting...
//------------------------------------- my.sass
changes to my.sass
reload not ok (not reloading)
grunt server window grab =
Running "watch" task
Waiting...OK
>> File "app/styles/my.sass" changed.
Running "compass:server" (compass) task
unchanged app/styles/main.scss
unchanged .tmp/images/generated/design-s65ab268e46.png
overwrite .tmp/styles/my.css
Running "watch" task
Completed in 0.602s at Sat May 18 2013 13:00:19 GMT+0530 (IST) - Waiting...
//-------------------------------------
After the my.sass is changed
changes made to index.html or main.scss are not shown in the grunt server window
the watch task doesn't log anything.
changes are not reloaded
//-------------------------------------
Restarted Grunt Server
//------------------------------------- hello.coffee
grunt server window grab =
OK
>> File "app/scripts/hello.coffee" changed.
Running "coffee:dist" (coffee) task
File .tmp/scripts/hello.js created.
Running "watch" task
Completed in 0.011s at Sat May 18 2013 13:34:56 GMT+0530 (IST) - Waiting...
//-------------------------------------
There is a bug with the current livereload/yo setup. Here are the details alongside a fix for the problematic dependency (connect-livereload).
https://github.com/yeoman/generator-webapp/issues/63
Three things you can check to diagnose the issue further:
Make sure you're on the latest version: npm update -g yo
You shouldn't be using the LiveReload plugin explicitly; it conflicts with the Yeoman watch command while running.
Your port (in Gruntfile.js) might be in use by some other process; try exiting the terminal, changing the port, and seeing whether it works.
The problem is that Yeoman doesn't change the port automatically in the current version; that's why it stops working if you close it forcefully in the current terminal session.
The same issue has been discussed here - https://github.com/yeoman/yeoman/issues/938
If you still see the issue, try running the following command and paste the output here.
yo --version && echo $PATH $NODE_PATH && node -e 'console.log(process.platform, process.versions)' && cat Gruntfile.js
A commit was made to solve this problem.
https://github.com/yeoman/generator-webapp/pull/67
We'll see an update soon I guess.
In the meantime, what you can do is modify your Gruntfile.js. Look near line 61 for the connect option for livereload and replace the configuration with this snippet.
livereload: {
    options: {
        middleware: function (connect) {
            return [
                lrSnippet,
                mountFolder(connect, '.tmp'),
                mountFolder(connect, yeomanConfig.app)
            ];
        }
    }
},
Also, it is a good idea to update the connect-livereload module to 0.1.4.
Just run npm install connect-livereload in your project directory.
I added live reload as an option under watch in the Gruntfile.js:
watch: {
    options: {
        nospawn: true,
        livereload: true
    },
Setting the livereload:port instead of livereload:options:port worked for me:
livereload: {
    port: 35728
}
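For reference, grunt-contrib-watch documents the livereload option as either true (use the default port 35729) or a number (use that port). Here is a hedged sketch of that normalization; the helper name livereloadPort is made up for illustration and is not part of any grunt API.

```javascript
// Hypothetical helper mirroring how grunt-contrib-watch interprets the
// `livereload` option: true -> default port 35729, a number -> that port.
function livereloadPort(option) {
  const DEFAULT_PORT = 35729;
  if (option === true) return DEFAULT_PORT;
  if (typeof option === "number") return option;
  return null; // false/undefined: livereload disabled
}

console.log(livereloadPort(true));   // 35729
console.log(livereloadPort(35728));  // 35728
console.log(livereloadPort(false));  // null
```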