How to deploy NeoLoad in Docker

Does anyone know how to deploy NeoLoad on Docker? I have looked at the NeoLoad package on Docker Hub, but the documentation doesn't make much sense to me. I want to use it for performance testing. The link is https://hub.docker.com/r/neotys/neoload-controller/

As explained in the documentation, there are two ways to deploy your NeoLoad controller on Docker:
Managed: this mode only works with NeoLoad Web.
Standalone: when you run your NeoLoad container, you pass it parameters such as the NeoLoad project, the number of virtual users, etc. The test is launched when the container starts.
From the Docker Hub documentation:
docker run -d --rm \
-e PROJECT_NAME={project-name} \
-e SCENARIO={scenario} \
-e NTS_URL={nts-url} \
-e NTS_LOGIN={login:password} \
-e COLLAB_URL={collab-url} \
-e LICENSE_ID={license-id} \
-e VU_MAX={vu-max} \
-e DURATION_MAX={duration-max} \
-e NEOLOADWEB_URL={nlweb-onpremise-apiurl:port} \
-e NEOLOADWEB_TOKEN={nlweb-token} \
-e PUBLISH_RESULT={publish-result} \
neotys/neoload-controller
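For instance, a standalone run that leases its license from an NTS server could be filled in like this (all values are illustrative placeholders, not real project or server names):

```shell
# All values below are placeholders; substitute your own project,
# scenario, NTS server, and license details.
docker run -d --rm \
  -e PROJECT_NAME=MyProject \
  -e SCENARIO=MyScenario \
  -e NTS_URL=https://nts.example.com \
  -e NTS_LOGIN=myuser:mypassword \
  -e LICENSE_ID=my-license-id \
  -e VU_MAX=50 \
  -e DURATION_MAX=3600 \
  neotys/neoload-controller
```

The container exits when the test finishes, which is why it is typically run with --rm.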
You have to pull the license either from NeoLoad Web or from an NTS server.
I would need more information about your problem to help you further.
Regards

Related

Running Filebeat on Docker Errors Out

I was following this guide: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html#_run_the_filebeat_setup
docker run \
docker.elastic.co/beats/filebeat:8.0.0 \
setup -E setup.kibana.host=kibana:port \
-E output.elasticsearch.hosts=["https://testelk.es.us-east4.gcp.elastic-cloud.com:9243"] \
cloud -E cloud.id=cloudid \
-E cloud.auth=elastic:pass
I get the following error on my macOS when I run it on my terminal. Is there a way to fix it?
zsh: no matches found: output.elasticsearch.hosts=[https://testelk.es.us-east4.gcp.elastic-cloud.com:9243]
As written in the documentation, if you are using Elastic Cloud, you need to remove the output.elasticsearch.hosts option and specify only the cloud.id and cloud.auth.
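The zsh error itself comes from the unquoted square brackets, which zsh tries to expand as a glob pattern. For Elastic Cloud, a corrected invocation might look like this (the cloud ID and credentials are placeholders, not values from your deployment):

```shell
# Placeholder cloud.id and cloud.auth; use the values from your
# Elastic Cloud deployment. No output.elasticsearch.hosts needed.
docker run \
  docker.elastic.co/beats/filebeat:8.0.0 \
  setup -E cloud.id=my-deployment:placeholder \
  -E cloud.auth=elastic:password
```

If you do need to pass output.elasticsearch.hosts on zsh, quote the whole option so the brackets are taken literally, e.g. -E 'output.elasticsearch.hosts=["https://host:9243"]'.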

Enable fine-grained authorization on Keycloak with Docker

I have set up Keycloak using Docker. My problem is that I need to make some modifications to the clients, which requires fine-grained authorization to be enabled. I have read the documentation, and I know I should use the parameter -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled. I tried to add that to my docker run command, but with no luck:
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled \
jboss/keycloak
Any help?
It is documented in the Docker image README: https://hub.docker.com/r/jboss/keycloak
Additional server startup options (extension of JAVA_OPTS) can be configured using the JAVA_OPTS_APPEND environment variable.
So in your case:
-e JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
I guess you might need to pass the system properties to the JVM when starting the WildFly server containing the Keycloak WAR. There is a runner shell script that is executed when the container launches; you need to add your options to that call.
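Putting this together, the original command could be rewritten as follows. The bracketed values are the same placeholders as in the question; the only change is swapping the broken -e -D… line for JAVA_OPTS_APPEND:

```shell
# Bracketed values ([adminPass], [SQL_Server], ...) are placeholders
# carried over from the question.
docker run --rm \
  --name keycloak \
  -p 80:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=[adminPass] \
  -e PROXY_ADDRESS_FORWARDING=true \
  -e DB_VENDOR=MYSQL \
  -e DB_ADDR=[SQL_Server] \
  -e DB_DATABASE=keycloak \
  -e DB_USER=[DBUSER] \
  -e DB_PASSWORD=[DB_PASS] \
  -e JDBC_PARAMS=useSSL=false \
  -e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled" \
  jboss/keycloak
```

The -e flag only sets an environment variable; it cannot pass a -D system property directly, which is why the original attempt had no effect.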

Elasticsearch Metricbeat Docker install error

Hello everyone, I'm new to the ELK stack.
I'm trying to run ELK in Docker with Metricbeat, but unfortunately I have a problem with the Metricbeat setup.
docker run \
docker.elastic.co/beats/metricbeat:7.9.1 \
setup -E setup.kibana.host=ELK-IP-Address:5601 \
-E output.elasticsearch.hosts=["ELK-IP-Address:9200"] \
-E output.elasticsearch.username=elastic \
-E output.elasticsearch.password=changeme
When I run that command in my terminal, I get this error:
zsh: no matches found: output.elasticsearch.hosts=[elasticsearch:9200]
Please help me:(
Add a backslash \ before each of the square brackets, i.e. output.elasticsearch.hosts=\[elasticsearch:9200\]
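Alternatively, quoting the whole option also stops zsh from expanding the brackets as a glob pattern. The full command might then look like this (ELK-IP-Address is the placeholder from the question):

```shell
# ELK-IP-Address is a placeholder; quoting the -E value keeps zsh
# from treating [...] as a glob pattern.
docker run \
  docker.elastic.co/beats/metricbeat:7.9.1 \
  setup -E setup.kibana.host=ELK-IP-Address:5601 \
  -E 'output.elasticsearch.hosts=["ELK-IP-Address:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme
```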

Azure DevOps agent in Docker: add custom capabilities

I run Azure DevOps agents in Docker as per the guide on Docker Hub:
docker run -d -e VSTS_ACCOUNT='kagarlickij' \
-e VSTS_POOL='Self-Hosted-Containers' \
-e VSTS_TOKEN='a***q' \
mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
I'd like to automatically add custom capabilities to the agent. How is that possible?
When you create the agent, add the capability as an extra environment variable in the command. For example:
docker run -d -e VSTS_ACCOUNT={account} \
-e VSTS_POOL={pool} \
-e VSTS_AGENT={agent} \
-e VSTS_TOKEN={token} \
-e myvar=test \
-it mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
I've tested this on my side, and myvar shows up in the agent's capabilities.

How to store my docker registry in the file system

I want to set up a private registry behind an nginx server. To do that, I configured nginx with basic auth and started a Docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images. But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry, but this is not the case
It is the case; only, the /home/example/registry directory is on the Docker container's file system, not on the Docker host's file system.
If you run your container with one of your Docker host's directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
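To verify the persistence, you can push an image, recreate the container, and pull the image back. A rough sketch, where the registry hostname and test tag are illustrative:

```shell
# Tag and push a small test image through the registry
# (docker.example.com and the :test tag are placeholders).
docker pull alpine
docker tag alpine docker.example.com/alpine:test
docker push docker.example.com/alpine:test

# The registry data now lives on the host:
ls /home/example/registry

# Even after removing the container and re-running the same
# `docker run ... -v /home/example/registry:/registry ...` command,
# the image can still be pulled:
docker pull docker.example.com/alpine:test
```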
