I run Azure DevOps agents in Docker as per the guide on Docker Hub:
docker run -d -e VSTS_ACCOUNT='kagarlickij' \
-e VSTS_POOL='Self-Hosted-Containers' \
-e VSTS_TOKEN='a***q' \
mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
I'd like to automatically add custom capabilities to the agent. How can I do that?
When you create an agent, add the capability as an environment variable in the docker run command. For example:
docker run -d -e VSTS_ACCOUNT={account} \
-e VSTS_POOL={pool} \
-e VSTS_AGENT={agent} \
-e VSTS_TOKEN={token} \
-e myvar=test \
-it mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
I've tested this on my side, and I can see that myvar shows up in the agent's capabilities.
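Applied to the command in your question, that would look something like this (a sketch; CUSTOM_CAPABILITY is just an illustrative name, and any variable passed with -e should show up the same way):
docker run -d -e VSTS_ACCOUNT='kagarlickij' \
-e VSTS_POOL='Self-Hosted-Containers' \
-e VSTS_TOKEN='a***q' \
-e CUSTOM_CAPABILITY='true' \
mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
Once the agent registers, the variable should be listed on the agent's Capabilities tab in the pool, and you can then target it with a demands entry in your pipeline.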
I have set up Keycloak using Docker. My problem is that I need to make some modifications to the clients, which requires fine-grained admin permissions to be enabled. I have read the documentation and I know I should use the parameter -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled. My problem is that I tried to use that in my docker run command, but with no luck:
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled \
jboss/keycloak
Any help?
It is documented in the Docker image readme (https://hub.docker.com/r/jboss/keycloak):
Additional server startup options (extension of JAVA_OPTS) can be configured using the JAVA_OPTS_APPEND environment variable.
So in your case:
-e JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
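Put together with the rest of your original command, that would look something like this (a sketch, assuming you only want the fine-grained authorization feature rather than the whole preview profile):
docker run --rm \
--name keycloak \
-p 80:8080 \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=[adminPass] \
-e PROXY_ADDRESS_FORWARDING=true \
-e DB_VENDOR=MYSQL \
-e DB_ADDR=[SQL_Server] \
-e DB_DATABASE=keycloak \
-e DB_USER=[DBUSER] \
-e DB_PASSWORD=[DB_PASS] \
-e JDBC_PARAMS=useSSL=false \
-e JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.admin_fine_grained_authz=enabled" \
jboss/keycloak
The problem with your original command was that -e -Dkeycloak.profile... tells Docker to set an environment variable literally named "-Dkeycloak.profile...", which the JVM never sees; system properties have to reach the JVM via JAVA_OPTS_APPEND instead.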
My guess is that you need to pass the system properties to the JVM when starting the WildFly server containing the Keycloak WAR. There is a runner shell script that starts when the container launches; you would need to add your options to that call.
Does anyone know how to deploy NeoLoad on Docker? I have looked at the NeoLoad package on Docker Hub, but it doesn't seem to make much sense to me. I want to use it for performance testing. The link is https://hub.docker.com/r/neotys/neoload-controller/
As explained in the documentation, there are two ways to deploy your NeoLoad controller on Docker:
Managed: this mode only works with NeoLoad Web.
Standalone: when you run your NeoLoad container, you give it some parameters like the NeoLoad project, the number of virtual users, etc. The test is launched when the container starts.
From the docker hub documentation:
docker run -d --rm \
-e PROJECT_NAME={project-name} \
-e SCENARIO={scenario} \
-e NTS_URL={nts-url} \
-e NTS_LOGIN={login:password} \
-e COLLAB_URL={collab-url} \
-e LICENSE_ID={license-id} \
-e VU_MAX={vu-max} \
-e DURATION_MAX={duration-max} \
-e NEOLOADWEB_URL={nlweb-onpremise-apiurl:port} \
-e NEOLOADWEB_TOKEN={nlweb-token} \
-e PUBLISH_RESULT={publish-result} \
neotys/neoload-controller
You either have to pull the license from NeoLoad Web or from an NTS server.
I will need more information about your problem to help you.
Regards
I am using Docker for Windows (2.2.0.5) on my Windows 10 Pro system.
I have created and built the Docker image for my .NET Core app (SDK 3.1).
This app connects to an external MySQL server to fetch data.
The app inside the Docker container is able to connect to the database with a hardcoded connection string, but not with the arguments passed using the -e flag. Upon investigation, I figured out that the environment variables are not getting created inside the Docker container.
Below is my docker run command:
docker run -d -p 5003:80 --name price-cat pricingcatalog:latest -e DB_HOST=165.202.xx.xx -e DB_DATABASE=pricing_catalog -e DB_USER=my-username -e DB_PASS=my-password
I am printing all the environment variables visible to the container using this C# code:
Console.WriteLine("All environment variables....Process");
foreach(DictionaryEntry envVar in Environment.GetEnvironmentVariables(EnvironmentVariableTarget.Process)){
Console.WriteLine("key={0}, value={1}", envVar.Key, envVar.Value);
}
Console.WriteLine("============================");
Console.WriteLine("All environment variables....User");
foreach(DictionaryEntry envVar in Environment.GetEnvironmentVariables(EnvironmentVariableTarget.User)){
Console.WriteLine("key={0}, value={1}", envVar.Key, envVar.Value);
}
Console.WriteLine("============================");
Console.WriteLine("All environment variables....Machine");
foreach(DictionaryEntry envVar in Environment.GetEnvironmentVariables(EnvironmentVariableTarget.Machine)){
Console.WriteLine("key={0}, value={1}", envVar.Key, envVar.Value);
}
Console.WriteLine("============================");
Below is what I am getting: the output lists the standard variables, but none of the DB_* values passed with -e appear. Is there anything I am missing here?
Remember that all docker CLI arguments must come before the image name; anything after the image name is passed into the container as the command. If you're expecting those -e ... arguments to set environment variables, they need to come before the image name:
docker run -d -p 5003:80 --name price-cat \
-e DB_HOST=165.202.xx.xx \
-e DB_DATABASE=pricing_catalog \
-e DB_USER=my-username \
-e DB_PASS=my-password \
pricingcatalog:latest
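To confirm the fix, you can print the environment of the running container from the host (a quick check, assuming a Linux-based image and the container name used above):
docker exec price-cat printenv | grep DB_
All four DB_* variables should now be listed, and the same values will show up under EnvironmentVariableTarget.Process in your C# loop.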
I followed the excellent Flask Mega Tutorial by Miguel Grinberg and have successfully set up a Flask web app with a Redis task queue and RQ workers, all in Docker containers.
To improve task queue performance, I now need to use my own custom worker, rather than the default RQ worker.
Unfortunately, I'm struggling to understand how I start a custom worker within docker.
To start a default RQ worker, the Flask Mega Tutorial uses the method of overriding the Docker entrypoint with "venv/bin/rq" and then supplying the argument "worker -u redis://redis-server:6379/0 microblog-tasks".
The executable name is supplied with the --entrypoint flag, whilst the command arguments are passed at the very end, after the name of the container image.
Here is the full command - only the last two lines are relevant to this question.
$ docker run --name rq-worker -d --rm -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
--link mysql:dbserver --link redis:redis-server \
-e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
-e REDIS_URL=redis://redis-server:6379/0 \
--entrypoint venv/bin/rq \
microblog:latest worker -u redis://redis-server:6379/0 microblog-tasks
I have my own custom worker with the following code, taken directly from the RQ documentation:
#!/usr/bin/env python
import sys
from rq import Connection, Worker

# Preload libraries
import library_that_you_want_preloaded

# Provide queue names to listen to as arguments to this script,
# similar to rq worker
with Connection():
    qs = sys.argv[1:] or ['default']
    w = Worker(qs)
    w.work()
Given that my custom worker is located within the Docker container at "/home/dashboard/app/custom_worker.py", which commands do I need to supply upon starting the Docker container to create an RQ worker using my customised worker script? So far I have tried the following:
$ docker run --name rq-worker -d --rm -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
--link mysql:dbserver --link redis:redis-server \
-e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
-e REDIS_URL=redis://redis-server:6379/0 \
--entrypoint venv/bin/rq \
microblog:latest /home/dashboard/app/custom_worker.py -u redis://redis-server:6379/0 microblog-tasks
and also...
$ docker run --name rq-worker -d --rm -e SECRET_KEY=my-secret-key \
-e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
-e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
--link mysql:dbserver --link redis:redis-server \
-e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
-e REDIS_URL=redis://redis-server:6379/0 \
--entrypoint /home/dashboard/app \
microblog:latest custom_worker -u redis://redis-server:6379/0 microblog-tasks
Any help would be greatly appreciated. There are a lot of posts online about creating a custom RQ worker, but I've not found much detail on how you practically use your custom worker in deployment.
Thank you kindly,
Robin
Per the docs, it's possible with the command below. Note that -w expects an importable worker class in module.ClassName form (for example custom_worker.CustomWorker, if your module defines a Worker subclass with that name), not a .py file name:
/usr/local/bin/rq worker -w custom_worker.CustomWorker --path path/to/sourcecode
See more options with /usr/local/bin/rq worker --help
Docs:
https://python-rq.org/docs/workers/#custom-worker-classes
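Combined with the entrypoint approach from the Mega Tutorial, the full command might look like this (a sketch; CustomWorker is a hypothetical Worker subclass that your custom_worker.py would need to define, since the script in your question starts a worker itself rather than declaring a class):
$ docker run --name rq-worker -d --rm \
-e REDIS_URL=redis://redis-server:6379/0 \
--link redis:redis-server \
--entrypoint venv/bin/rq \
microblog:latest worker -u redis://redis-server:6379/0 \
-w custom_worker.CustomWorker --path /home/dashboard/app microblog-tasks
The entrypoint stays venv/bin/rq as in the tutorial; the only change is passing your class with -w and making it importable with --path, rather than executing the .py file directly.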
I want to set up a private registry behind an nginx server. To do that, I configured nginx with basic auth and started a Docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry, but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; it's just that the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container with one of your Docker host directories mounted as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
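A quick way to verify the fix (a sketch; the image name is illustrative): push an image, then recycle the container and check that the data survives on the host:
docker push docker.example.com/my-image
docker stop <container-id>
ls /home/example/registry
Because the data now lives on the host, a new container started with the same -v /home/example/registry:/registry mount will serve the images pushed previously.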