Jenkins Job Builder failing authentication with Jenkins due to encoding issue? - jenkins

I'm trying to run jenkins-jobs update for the first time on my system, but it fails on authentication.
Command:
jenkins-jobs --conf ./jjb.ini update jobs/
Where jobs contains a test.yml - a miniature build project just for testing. jjb.ini is:
[jenkins]
user=admin
password={{ admin_api_token }} # Inserted API token here.
url=http://127.0.0.1:8080
query_plugins_info=False
Expected result:
Success, and import of the example build project into Jenkins.
Actual result:
INFO:jenkins_jobs.cli.subcommand.update:Updating jobs in ['jobs/'] ([])
INFO:jenkins_jobs.builder:Number of jobs generated: 1
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 127.0.0.1
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/jenkins/__init__.py", line 557, in jenkins_request
self._request(req))
File "/usr/local/lib/python3.5/dist-packages/jenkins/__init__.py", line 508, in _response_handler
response.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 840, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Invalid password/token for user: b'admin' for url: http://127.0.0.1:8080/crumbIssuer/api/json
What catches my eye here is that the authentication fails for b'admin', not for admin. This is also reflected on the "People" page of the Jenkins web interface, which before the attempted login showed only admin, but after the attempted login also lists a b'admin' user.
From what I've been able to figure out, there may be an encoding problem in the login request from JJB, but I'm looking for help on how to go about fixing it.
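One way to narrow it down (a sketch, reusing the credentials from jjb.ini; <api_token> stands for the real token) is to hit the failing endpoint directly with curl, bypassing JJB and python-jenkins entirely. If this returns the crumb JSON, the token itself is fine and the problem sits in the client stack:
curl -u admin:<api_token> http://127.0.0.1:8080/crumbIssuer/api/json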
Current setup:
Ubuntu 16.04.4 LTS
Jenkins 2.125 (working as expected, at :8080)
jenkins-job-builder 2.0.9
Python 3.5.2
pip 10.0.1 from /usr/local/lib/python3.5/dist-packages/pip-10.0.1-py3.5.egg/pip (python 3.5)
java -version: openjdk version "1.8.0_171"

It works with requests==2.19.1
sudo pip uninstall requests
sudo pip install requests
$ pip freeze | grep requests
requests==2.19.1

The following steps resolved the issue in my case:
pip3.7 freeze | grep requests
pip3.7 uninstall requests
pip3.7 install requests
pip3.7 freeze | grep requests
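Put differently (a sketch, assuming the requests==2.19.1 release noted in the answer above is also what resolves it for you), you can pin the known-good version explicitly instead of relying on whatever pip resolves by default:
pip3.7 install 'requests==2.19.1'
pip3.7 freeze | grep requests   # should now show requests==2.19.1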

Related

Stackstorm: Action "packs.install" cannot be found

I am currently having issues installing any pack at all in StackStorm using the st2 client.
Error:
ERROR: 400 Client Error: Bad Request
MESSAGE: Action "packs.install" cannot be found. for url: http://sts-st2api:9101/packs/install
Environment: Running Kubernetes on local kind cluster
To reproduce:
Exec into the st2 client pod
Then log in to st2 using: st2 login -w st2admin
Install any pack
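For reference, a sketch of those reproduction steps as concrete commands (the pod name and pack name are placeholders, not taken from the original report):
kubectl exec -it <st2-client-pod> -- bash
st2 login -w st2admin
st2 pack install <pack>   # any pack name produces the same 400 error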

jib gradle plugin + static docker client: cannot build image due to permission error: layer.tar: A required privilege is not held by the client

I want to build a Docker image with the jib Gradle plugin on Windows, use a Windows Docker client to load it into dockerd running in my WSL 2 instance, and use WSL 2 as the server. Resource-wise I think this is the lightest solution.
On the WSL 2 side, I run the dockerd service in Ubuntu 20, and it's listening on [::]:2375. TLS is disabled (--tls=false), only HTTP.
On the Windows side, I only downloaded the Docker client (static build, from https://download.docker.com/win/static/stable/x86_64/), and added the dynamic WSL 2 IP to insecure-registries in daemon.json, which sits in the same directory as the docker.exe client.
On the IntelliJ IDEA side, I use the Gradle 5.2.1 wrapper and jib plugin 3.2.1. I configure jib as follows:
jib {
    dockerClient.executable = 'E:\\coding\\environment\\docker\\docker.exe'
    dockerClient.environment = [DOCKER_HOST: '172.21.169.180:2375',
                                DOCKER_INSECURE_REGISTRIES: "172.21.169.180:5000"]
    from.image = 'docker://mini/java@sha256:d3ded1fd0df592c33185d930d976304994bbc539c7bf70a6091cb3da0f7e11fa'
    to.image = 'spring-plugins-demo'
    container.mainClass = 'dev.westerngun.oldway.ApplicationV1'
}
I know it can connect to dockerd in WSL 2, because before I added the dynamic Ubuntu IP the error was that it could not connect to the daemon. Now it can load the base image and start building.
Then, when I run jibDockerBuild --stacktrace, I see this error:
Execution failed for task ':jibDockerBuild'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: C:\Users\WESTER~1\AppData\Local\Temp\16164656866093264693\33e3f3775358985441c3bea658f06f5307326c83f9c0bcbf8aa4acb327abffde\layer.tar: 客户端没有所需的特权。
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':jibDockerBuild'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:121)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$2.accept(ExecuteActionsTaskExecuter.java:117)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:184)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:110)
at org.gradle.api.internal.tasks.execution.ResolveIncrementalChangesTaskExecuter.execute(ResolveIncrementalChangesTaskExecuter.java:84)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:91)
at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:109)
at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
at org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:93)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:45)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:94)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:46)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:46)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: org.gradle.internal.UncheckedException: com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: C:\Users\WESTER~1\AppData\Local\Temp\16164656866093264693\33e3f3775358985441c3bea658f06f5307326c83f9c0bcbf8aa4acb327abffde\layer.tar: 客户端没有所需的特权。
at org.gradle.internal.UncheckedException.throwAsUncheckedException(UncheckedException.java:67)
at org.gradle.internal.UncheckedException.throwAsUncheckedException(UncheckedException.java:41)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:106)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:48)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:41)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:28)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:705)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:672)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$4.run(ExecuteActionsTaskExecuter.java:338)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:327)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:312)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.access$200(ExecuteActionsTaskExecuter.java:75)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$TaskExecution.execute(ExecuteActionsTaskExecuter.java:158)
at org.gradle.internal.execution.impl.steps.ExecuteStep.execute(ExecuteStep.java:46)
at org.gradle.internal.execution.impl.steps.CancelExecutionStep.execute(CancelExecutionStep.java:34)
at org.gradle.internal.execution.impl.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:69)
at org.gradle.internal.execution.impl.steps.TimeoutStep.execute(TimeoutStep.java:49)
at org.gradle.internal.execution.impl.steps.CatchExceptionStep.execute(CatchExceptionStep.java:34)
at org.gradle.internal.execution.impl.steps.CreateOutputsStep.execute(CreateOutputsStep.java:49)
at org.gradle.internal.execution.impl.steps.SnapshotOutputStep.execute(SnapshotOutputStep.java:42)
at org.gradle.internal.execution.impl.steps.SnapshotOutputStep.execute(SnapshotOutputStep.java:28)
at org.gradle.internal.execution.impl.steps.CacheStep.executeWithoutCache(CacheStep.java:133)
at org.gradle.internal.execution.impl.steps.CacheStep.lambda$execute$5(CacheStep.java:83)
at org.gradle.internal.execution.impl.steps.CacheStep.execute(CacheStep.java:82)
at org.gradle.internal.execution.impl.steps.CacheStep.execute(CacheStep.java:37)
at org.gradle.internal.execution.impl.steps.PrepareCachingStep.execute(PrepareCachingStep.java:33)
at org.gradle.internal.execution.impl.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:38)
at org.gradle.internal.execution.impl.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:23)
at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:95)
at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.lambda$execute$0(SkipUpToDateStep.java:88)
at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:52)
at org.gradle.internal.execution.impl.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:36)
at org.gradle.internal.execution.impl.DefaultWorkExecutor.execute(DefaultWorkExecutor.java:34)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:109)
... 37 more
Caused by: com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: C:\Users\WESTER~1\AppData\Local\Temp\16164656866093264693\33e3f3775358985441c3bea658f06f5307326c83f9c0bcbf8aa4acb327abffde\layer.tar: 客户端没有所需的特权。
at com.google.cloud.tools.jib.plugins.common.JibBuildRunner.runBuild(JibBuildRunner.java:285)
at com.google.cloud.tools.jib.gradle.BuildDockerTask.buildDocker(BuildDockerTask.java:126)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103)
... 75 more
Caused by: java.nio.file.FileSystemException: C:\Users\WESTER~1\AppData\Local\Temp\16164656866093264693\33e3f3775358985441c3bea658f06f5307326c83f9c0bcbf8aa4acb327abffde\layer.tar: 客户端没有所需的特权。
at com.google.cloud.tools.jib.tar.TarExtractor.extract(TarExtractor.java:93)
at com.google.cloud.tools.jib.tar.TarExtractor.extract(TarExtractor.java:49)
at com.google.cloud.tools.jib.builder.steps.LocalBaseImageSteps.cacheDockerImageTar(LocalBaseImageSteps.java:217)
at com.google.cloud.tools.jib.builder.steps.LocalBaseImageSteps.lambda$retrieveDockerDaemonLayersStep$0(LocalBaseImageSteps.java:133)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
The error message in Chinese is
C:\Users\WESTER~1\AppData\Local\Temp\16164656866093264693\33e3f3775358985441c3bea658f06f5307326c83f9c0bcbf8aa4acb327abffde\layer.tar: 客户端没有所需的特权。
And I think it can be translated as "A required privilege is not held by the client".
I suspect this is because my user is not in the docker-users group, as stated here. But I uninstalled Docker Toolbox, since it sets DOCKER_HOST and interferes with my setup, and I don't see this group anymore. Secondly, I don't have Local Users and Groups available; it seems Windows 10 Home edition does not ship it.
Should I try to install gpedit on my Home edition, add the group, and try? But without Docker Toolbox, I doubt it would work. The Docker documentation explains here that it creates the group and configures it to ensure separation of permissions between root/admin and non-root/non-admin users; I think merely creating that group will not work. https://docs.docker.com/desktop/windows/permission-requirements/
However, when I use docker.exe to connect to WSL 2 and save a tar file to C:\Users\WESTER~1\AppData\Local\Temp, it works: the tar file is created and not corrupted. So I don't think it's a permission error; anyone can access that directory.
The bsdtar tar.exe bundled with Windows has nothing to do with it; after renaming tar.exe in System32 and building again, the error is the same.
It is solved when I run cmd as administrator, cd to the project directory, and run gradlew jibDockerBuild. The image is built and loaded into the WSL 2 daemon successfully, so it is indeed a file-system permission error.
It is still very strange (I granted permissions to everyone on that folder), but at least this is one workaround.
Another, even better workaround:
As per https://github.com/microsoft/WSL/issues/4983, I changed the jib config to set the Docker host to http://[::1]:2375, and suddenly it works. It seems dockerd is only bound on IPv6.
Now not only is the host reachable, the permission error disappears as well; no insecure-registries setting is needed either.
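For reference, a sketch of the adjusted jib block under that second workaround (same executable, images, and main class as above; the DOCKER_HOST value is the one reported to work, and the insecure-registries entry is dropped):
jib {
    dockerClient.executable = 'E:\\coding\\environment\\docker\\docker.exe'
    // only the host changes: dockerd in WSL 2 appears to listen on the IPv6 loopback
    dockerClient.environment = [DOCKER_HOST: 'http://[::1]:2375']
    from.image = 'docker://mini/java@sha256:d3ded1fd0df592c33185d930d976304994bbc539c7bf70a6091cb3da0f7e11fa'
    to.image = 'spring-plugins-demo'
    container.mainClass = 'dev.westerngun.oldway.ApplicationV1'
}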

How can I build and run Druid locally

My environment is below.
MacBook Pro (13-inch, 2019, Four Thunderbolt 3 ports)
2.8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Intel Iris Plus Graphics 655 1536 MB
Docker: 19.03.12
Druid: 0.19.0
Although I followed the official instructions, I failed to build or run Druid locally.
About this: https://github.com/apache/druid/tree/master/distribution/docker
I typed the following commands.
git clone https://github.com/apache/druid.git
docker build -t apache/druid:tag -f distribution/docker/Dockerfile .
However, the build never proceeds past this point.
Sending build context to Docker daemon 78.19MB
Step 1/18 : FROM maven:3-jdk-8-slim as builder
---> addee4586ff4
Step 2/18 : RUN export DEBIAN_FRONTEND=noninteractive && apt-get -qq update && apt-get -qq -y install --no-install-recommends python3 python3-yaml
---> Using cache
---> cdb74d0f6b3d
Step 3/18 : COPY . /src
---> 60d35cb6c0ce
Step 4/18 : WORKDIR /src
---> Running in 73dfa666a186
Removing intermediate container 73dfa666a186
---> 4839bf923b21
Step 5/18 : RUN mvn -B -ff -q dependency:go-offline install -Pdist,bundle-contrib-exts -Pskip-static-checks,skip-tests -Dmaven.javadoc.skip=true
---> Running in 1c9d4aa3d4e8
Moreover, I followed this instruction and ran docker-compose -f distribution/docker/docker-compose.yml up, but it failed with the error below.
coordinator | 2020-08-06T08:41:24,295 WARN [Coordinator-Exec--0] org.apache.druid.server.coordinator.helper.DruidCoordinatorRuleRunner - Uh... I have no servers. Not assigning anything...
About this: https://hub.docker.com/r/apache/druid/tags
I typed the following commands.
docker pull apache/druid:0.19.0
docker run apache/druid:0.19.0
This seems to run, producing output like this:
2020-08-06T07:50:22+0000 startup service
Setting 172.17.0.2= in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:50:24,024 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:50:24,988 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:50:25,004 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:50:25,006 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command.
However, even if I add an argument like version, it does not work:
❯ docker run apache/druid:0.19.0 version
2020-08-06T07:51:30+0000 startup service version
Setting druid.host=172.17.0.2 in /runtime.properties
cat: can't open '/jvm.config': No such file or directory
2020-08-06T07:51:32,517 INFO [main] org.hibernate.validator.internal.util.Version - HV000001: Hibernate Validator 5.2.5.Final
2020-08-06T07:51:33,503 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-hdfs-storage], jars: jackson-annotations-2.10.2.jar, hadoop-mapreduce-client-common-2.8.5.jar, httpclient-4.5.10.jar, htrace-core4-4.0.1-incubating.jar, apacheds-kerberos-codec-2.0.0-M15.jar, jackson-mapper-asl-1.9.13.jar, commons-digester-1.8.jar, jetty-sslengine-6.1.26.jar, jackson-databind-2.10.2.jar, api-asn1-api-1.0.0-M20.jar, ion-java-1.0.2.jar, hadoop-mapreduce-client-shuffle-2.8.5.jar, asm-7.1.jar, jsp-api-2.1.jar, druid-hdfs-storage-0.19.0.jar, api-util-1.0.3.jar, json-smart-2.3.jar, jackson-core-2.10.2.jar, hadoop-client-2.8.5.jar, httpcore-4.4.11.jar, commons-collections-3.2.2.jar, hadoop-hdfs-client-2.8.5.jar, hadoop-annotations-2.8.5.jar, hadoop-auth-2.8.5.jar, xmlenc-0.52.jar, aws-java-sdk-s3-1.11.199.jar, commons-net-3.6.jar, nimbus-jose-jwt-4.41.1.jar, hadoop-common-2.8.5.jar, jackson-dataformat-cbor-2.10.2.jar, hadoop-yarn-server-common-2.8.5.jar, accessors-smart-1.2.jar, gson-2.2.4.jar, commons-configuration-1.6.jar, joda-time-2.10.5.jar, hadoop-aws-2.8.5.jar, aws-java-sdk-core-1.11.199.jar, commons-codec-1.13.jar, hadoop-mapreduce-client-app-2.8.5.jar, hadoop-yarn-api-2.8.5.jar, aws-java-sdk-kms-1.11.199.jar, jackson-core-asl-1.9.13.jar, curator-recipes-4.3.0.jar, hadoop-mapreduce-client-jobclient-2.8.5.jar, jcip-annotations-1.0-1.jar, jmespath-java-1.11.199.jar, hadoop-mapreduce-client-core-2.8.5.jar, commons-logging-1.1.1.jar, leveldbjni-all-1.8.jar, curator-framework-4.3.0.jar, hadoop-yarn-client-2.8.5.jar, apacheds-i18n-2.0.0-M15.jar
2020-08-06T07:51:33,524 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-kafka-indexing-service], jars: lz4-java-1.7.1.jar, kafka-clients-2.5.0.jar, druid-kafka-indexing-service-0.19.0.jar, zstd-jni-1.3.3-1.jar, snappy-java-1.1.7.3.jar
2020-08-06T07:51:33,526 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-datasketches], jars: druid-datasketches-0.19.0.jar, commons-math3-3.6.1.jar
ERROR!!!!
Found unexpected parameters: [version]
===
usage: druid <command> [<args>]
The most commonly used druid commands are:
help Display help information
index Run indexing for druid
internal Processes that Druid runs "internally", you should rarely use these directly
server Run one of the Druid server types.
tools Various tools for working with Druid
version Returns Druid version information
See 'druid help <command>' for more information on a specific command
So I see a few things here:
docker run apache/druid:0.19.0 means "fire and forget": if you don't have a long-running service in the container, it will be shut down shortly after start.
To interact with the Docker container, start it with the "-it" flags.
To let it run without interaction, start it with the "-d" flag for detached mode.
You can find information about this here: https://docs.docker.com/engine/reference/run/
You also have to check the start command.
Whatever you write after the image name is the start command (in your case "version"); it is run as if you typed it into the running shell afterwards (just "version").
In addition, if you DON'T add a startup command, there may be a default one in the Druid Dockerfile.
You can see the Dockerfile of your selected image on Docker Hub, like here:
https://hub.docker.com/layers/apache/druid/0.19.0/images/sha256-eb2a4852b4ad1d3ca86cbf4c9dc7ed9b73c767815f187eb238d2b80ca26dfd9a?context=explore
There you can see that the start command, which within a Dockerfile is called the ENTRYPOINT, is a shell script:
ENTRYPOINT ["/druid.sh"]
So writing "version" after your run commands stops the shell command from running - we should not do that :)

SilverStripe running tests using Behat throwing "No behat.yml found for module silverstripe/framework" error

I am working on a SilverStripe project and trying to write behavioural tests using Behat. But I get an error when I run the tests. The following is what I have done so far.
First I installed the module using Composer:
composer require --dev silverstripe/behat-extension
I have the behat.yml file right under the project root folder with the following definition:
default:
  suites: []
  extensions:
    SilverStripe\BehatExtension\MinkExtension:
      default_session: facebook_web_driver
      javascript_session: facebook_web_driver
      facebook_web_driver:
        browser: chrome
        wd_host: "http://127.0.0.1:9515"
        browser_name: chrome
    SilverStripe\BehatExtension\Extension:
      bootstrap_file: vendor/silverstripe/cms/tests/behat/serve-bootstrap.php
      screenshot_path: %paths.base%/artifacts/screenshots
      retry_seconds: 4 # default is 2
Then I tried to run the tests by executing the following command:
vendor/bin/behat @framework
Then I get the following error.
In ModuleSuiteLocator.php line 166:
No behat.yml found for module silverstripe/framework
behat [-s|--suite SUITE] [-f|--format FORMAT] [-o|--out OUT] [--format-settings FORMAT-SETTINGS] [--init] [--namespace NAMESPACE] [--lang LANG] [--name NAME] [--tags TAGS] [--role ROLE] [--story-syntax] [-d|--definitions DEFINITIONS] [--snippets-for [SNIPPETS-FOR]] [--snippets-type SNIPPETS-TYPE] [--append-snippets] [--no-snippets] [--strict] [--order ORDER] [--rerun] [--stop-on-failure] [--dry-run] [--] [<module> [<paths>]]
There is no behat.yml in the vendor/silverstripe/framework folder. It is supposed to come with the framework, but it is not there. How can I solve this error?
For all SilverStripe modules, the tests and associated configuration are not shipped with the distributable packages (e.g. tags).
If you want to use them, you'll need to install the "dev" version, e.g. 4.5.x-dev instead of ~4.5.0.
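A sketch of how that could look with Composer (the 4.5 constraint is an example; match it to the release line you are on, and if your project pins the framework via a recipe the constraint may need to go there instead):
composer require silverstripe/framework:4.5.x-dev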

Homebrew: Can't start Elasticsearch

I'm in big trouble: I can't start Elasticsearch, and I need it to run my Rails app locally. Please tell me what's going on. I installed Elasticsearch in the normal fashion, then ran the following:
elasticsearch --config=/usr/local/opt/elasticsearch/config/elasticsearch.yml
But it shows the following error:
[2015-11-01 20:36:50,574][INFO ][bootstrap] es.config is no longer supported. elasticsearch.yml must be placed in the config directory and cannot be renamed.
I tried several alternative ways of running it, like:
elasticsearch -f -D
But then I get the following error, and I can't find anything useful to solve it. It seems to be related to file permissions, but I'm not sure:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader@33909752
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:51)
at org.elasticsearch.node.Node.<init>(Node.java:135)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
[2015-11-01 20:40:57,602][INFO ][node ] [Centurius] version[2.0.0], pid[22063], build[de54438/2015-10-22T08:09:48Z]
[2015-11-01 20:40:57,605][INFO ][node ] [Centurius] initializing ...
Exception in thread "main" java.lang.IllegalStateException: failed to load bundle [] due to jar hell
Likely root cause: java.security.AccessControlException: access denied ("java.io.FilePermission" "/usr/local/Cellar/elasticsearch/2.0.0/libexec/antlr-runtime-3.5.jar" "read")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkRead(SecurityManager.java:888)
at java.util.zip.ZipFile.<init>(ZipFile.java:210)
at java.util.zip.ZipFile.<init>(ZipFile.java:149)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:103)
at org.elasticsearch.bootstrap.JarHell.checkJarHell(JarHell.java:173)
at org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:340)
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:113)
at org.elasticsearch.node.Node.<init>(Node.java:144)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Thanks for your help.
There are some changes around libexec in the Elasticsearch Homebrew installation, and that is why it is failing to start. There is a PR #45644 currently being worked on. Until the PR gets accepted, you can use the same formula to fix the Elasticsearch installation.
First uninstall the earlier/older version. Then edit the formula of Elasticsearch:
$ brew edit elasticsearch
And use the formula from the PR.
Then run brew install elasticsearch; it should work fine.
To start Elasticsearch, just do:
$ elasticsearch
The --config option is no longer valid. For a custom config location, use --path.conf:
$ elasticsearch --path.conf=/usr/local/opt/elasticsearch/config
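Once it is up, a quick sanity check (assuming the default HTTP port 9200 has not been changed in your elasticsearch.yml):
$ curl http://localhost:9200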
