I'm getting this error while trying to run Cube.js with the default command from the getting started docs. I started it in a folder and am running it in Docker.
Warning. There is no cube.js file. Continue with environment variables
Cube Store (0.28.31) is assigned to 3030 port.
Warning. Option apiSecret is required in dev mode. Cube.js has generated it as e3b8c5a35fe378f4d481ada777e5f3c4
Authentication checks are disabled in developer mode. Please use NODE_ENV=production to enable it.
Dev environment available at http://localhost:4000
Cube.js server (0.28.31) is listening on 4000
2021-09-03 15:06:01,512 INFO [cubestore::http::status] <pid:17> Serving status probes at 0.0.0.0:3031
2021-09-03 15:06:01,515 INFO [cubestore::metastore] <pid:17> Using existing metastore in /cube/conf/.cubestore/data/metastore
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While fsync: a directory: Invalid argument" }', /project/cubestore/src/metastore/mod.rs:1542:40
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Cube Store Start Error: undefined
I guess it's a corrupted metastore because it was shut down incorrectly on your machine. Could you please try dropping the .cubestore directory?
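For example, a minimal sketch, assuming the project folder is mounted into the container at /cube/conf as the log above suggests (the container name is a placeholder):

# stop the Cube.js container first, then, from the host project folder:
rm -rf .cubestore
# or, if the data only lives inside the container:
docker exec <cubejs-container> rm -rf /cube/conf/.cubestore

Cube Store should recreate the metastore on the next start.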
I have installed Apache Helix version 1.0.0. I am able to set up a cluster and add resources.
But when I try to run run-helix-controller.sh, it gives the error below.
Here is the command: ./run-helix-controller.sh --zkSvr localhost:2181 --cluster jbpm-cluster
ERROR
[2020-05-20 06:22:29,773] [INFO ] [main] [org.apache.helix.controller.HelixControllerMain:208] - Cluster manager started, zkServer: lpwaidqu02:2181, clusterName:jbpm-cluster, controllerName:null, mode:STANDALONE
Exception in thread "main" java.lang.NoSuchFieldError: Rebalancer
at org.apache.helix.InstanceType.(InstanceType.java:39)
at org.apache.helix.controller.HelixControllerMain.startHelixController(HelixControllerMain.java:156)
at org.apache.helix.controller.HelixControllerMain.main(HelixControllerMain.java:212)
Have you tried the steps in this guide?
http://helix.apache.org/0.9.7-docs/Quickstart.html
It seems like there are a few steps (like "add cluster") that need to happen before the ./run-helix-controller.sh command, for example:
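A minimal sketch based on that quickstart (the node and resource names are placeholders, your jbpm-cluster and ZooKeeper address are reused from the question; adjust everything to your setup):

./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster localhost_12913
./helix-admin.sh --zkSvr localhost:2181 --addResource jbpm-cluster myResource 6 MasterSlave
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster myResource 3
./run-helix-controller.sh --zkSvr localhost:2181 --cluster jbpm-cluster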
I see the documentation at https://docs.emqx.io/broker/v3/en/guide.html#emq-x-bridge-cache-configuration, and it says you can enable caching messages to a file when the network fails, since emqx is not doing this for me right now.
When I set that parameter on emqx 3.0.0.0, for example, it fails on start and says in the log file that the setting is not declared:
You've tried to set bridge.xxx.queue.replayq_seg_bytes, but there is no setting with that name.
2020-03-03T19:43:22.777171+03:00 [error] Did you mean one of these?
2020-03-03T19:43:22.962094+03:00 [error] bridge.$name.mqueue_type
2020-03-03T19:43:22.962572+03:00 [error] bridge.$name.clean_start
2020-03-03T19:43:22.962760+03:00 [error] bridge.$name.start_type
2020-03-03T19:43:23.102793+03:00 [error] Error generating configuration in phase transform_datatypes
2020-03-03T19:43:23.103040+03:00 [error] Conf file attempted to set unknown variable: bridge.aps.queue.replayq_seg_bytes
Do you know if it's a problem with my version of emqx, or possibly a problem with the syntax?
Thanks in advance.
Greetings
It's a syntax error.
bridge.xxx.queue.replayq_seg_bytes
means "set the queue.replayq_seg_bytes config for the bridge named xxx". Does a matching
bridge.mqtt.xxx.address = 127.0.0.1:1883
entry exist? By the way, EMQ X v4.0.6 is recommended.
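For reference, a minimal sketch of how the related entries look in the EMQ X 4.x emqx_bridge_mqtt plugin config (the bridge name "aps" is taken from your log, the paths and sizes are only illustrative; the key names differ on 3.0, which is likely why your setting was rejected):

bridge.mqtt.aps.address = 127.0.0.1:1883
bridge.mqtt.aps.queue.replayq_dir = /var/lib/emqx/replayq/aps/
bridge.mqtt.aps.queue.replayq_seg_bytes = 10MB
bridge.mqtt.aps.queue.max_total_size = 5GB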
This is my first time running Logstash in a container. I'm running Logstash in the same container as Elasticsearch + Kibana. It's running on Ubuntu.
I run my conf file using:
/usr/share/logstash/bin/logstash -f conf.d/logstash.conf
Here is my logstash.conf:
input{
beats{
port=>5044
}
}
filter
{
grok {
match =>{
"message" => "%{TIMESTAMP_ISO8601:logtimestamp}\s%{DATA:S_IP}\s%{WORD:s_method}\s%{DATA:cs_uri_stem}\s%{DATA:cs_uri_query}\s%{DATA:s_port}\s%{GREEDYDATA:log_message}"
}
}
date{
match =>["logtimestamp","yyyy-MM-dd HH:mm:ss"]
target=>"#timestamp"
}
}
output{
stdout{codec=>rubydebug}
elasticsearch{
hosts=>"elastic#localhost:9200"
index=>"log_iis"
user =>"*****"
password=>"*****"
}
}
and it returns this error:
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.8.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-02-10 03:32:59.625 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-02-10 03:32:59.632 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.5.2"}
[INFO ] 2020-02-10 03:33:00.995 [Converge PipelineAction::Create<main>] Reflections - Reflections took 24 ms to scan 1 urls, producing 20 keys and 40 values
[ERROR] 2020-02-10 03:33:01.375 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: Illegal character in scheme name at index 7: elastic#localhost:9200", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:119)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1156)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1143)", "org.jruby.ir.targets.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:39)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:183)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:129)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:830)"]}
warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`
create at org/logstash/execution/ConvergeResultExt.java:109
add at org/logstash/execution/ConvergeResultExt.java:37
converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
[ERROR] 2020-02-10 03:33:01.379 [Agent thread] agent - An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
[FATAL] 2020-02-10 03:33:01.403 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
[ERROR] 2020-02-10 03:33:01.432 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit)
Any response or explanation will be appreciated so much. Thank you.
The error log says that you get
"Illegal character in scheme name at index 7: elastic#localhost:9200", which is the value of the hosts option.
I guess the problem is the hash sign (#). Is that needed? Anyway, if you check the documentation of the Elasticsearch output plugin [1], it says:
"Any special characters present in the URLs here MUST be URL escaped! This means # should be put in as %23 for instance."
[1] https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts
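For example, a minimal sketch of the output section with a plain host URL (assuming Elasticsearch listens locally on 9200; credentials stay in the user/password options rather than in the URL):

output {
  stdout { codec => rubydebug }
  elasticsearch {
    # no "elastic#" prefix in the host; any "#" inside a URL would have to be escaped as %23
    hosts    => ["http://localhost:9200"]
    index    => "log_iis"
    user     => "*****"
    password => "*****"
  }
}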
I am trying to work with Grails through the command prompt.
I am using the below versions:
| Grails Version: 3.2.3
| Groovy Version: 2.4.7
| JVM Version: 1.8.0_112
The following error is thrown:
| Error Error occurred running Grails CLI: This is usually a temporary
error during hostname resolution and means that the local server did
not receive a response from an authoritative server (repo.grails.org)
(Use --stacktrace to see the full trace)
Upon using --stacktrace, the following is displayed:
| Error Error occurred running Grails CLI: This is usually a temporary
error during hostname resolution and means that the local server did
not receive a response from an authoritative server (repo.grails.org)
(NOTE: Stack trace has been filtered. Use --verbose to see entire
trace.)
java.net.UnknownHostException: This is usually a temporary error
during hostname resolution and means that the local server did not
receive a response from an authoritative server (repo.grails.org)
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:278)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:162)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.DecompressingHttpClient.execute(DecompressingHttpClient.java:137)
at org.eclipse.aether.transport.http.HttpTransporter.execute(HttpTransporter.java:287)
at org.eclipse.aether.transport.http.HttpTransporter.implGet(HttpTransporter.java:243)
at org.eclipse.aether.spi.connector.transport.AbstractTransporter.get(AbstractTransporter.java:59)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$GetTaskRunner.runTask(BasicRepositoryConnector.java:447)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:350)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:320)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:341)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:302)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:284)
at org.springframework.boot.cli.compiler.grape.AetherGrapeEngine.resolve(AetherGrapeEngine.java:276)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:53)
at org.grails.cli.boot.GrailsDependencyVersions.<init>(GrailsDependencyVersions.groovy:49)
at org.grails.cli.profile.repository.MavenProfileRepository.<init>(MavenProfileRepository.groovy:53)
at org.grails.cli.GrailsCli.createMavenProfileRepository(GrailsCli.groovy:333)
at org.grails.cli.GrailsCli.execute(GrailsCli.groovy:234)
at org.grails.cli.GrailsCli.main(GrailsCli.groovy:159)
| Error Error occurred running Grails CLI: This is usually a temporary
error during hostname resolution and means that the local server did
not receive a response from an authoritative server (repo.grails.org)
Kindly help me resolve the above issue.
The most likely reason is that your internet connection is not working or you are behind a proxy. See http://docs.grails.org/latest/guide/conf.html#proxyConfig for how to configure your proxy settings. For example:
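A minimal sketch using the standard JVM proxy system properties (the host and port are placeholders for your own proxy):

export GRAILS_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080"
grails create-app myapp

If the proxy requires authentication, check the linked guide for the matching user/password properties.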