I am writing a Deployment for XWiki with MySQL on Kubernetes. In the setup instructions, the command for running XWiki is given as:
docker run --net=xwiki-nw --name xwiki -p 8080:8080 -v /my/own/xwiki:/usr/local/xwiki -e DB_USER=xwiki -e DB_PASSWORD=xwiki -e DB_DATABASE=xwiki -e DB_HOST=mysql-xwiki xwiki:mysql-tomcat
I can't seem to find anything online or in the Kubernetes documentation about how to control these argument flags that go with the docker run command.
Is there therefore no way to use this container correctly in a deployment, or am I missing something?
I don't have much experience with XWiki, but I can shed some light.
You can probably ignore --net as well as --name.
You would need to map your container port (-p) in your Deployment.
I'm not sure what the volume (-v) is used for. If it is just for reading some configuration, you can use a ConfigMap in Kubernetes; if it holds data that must survive restarts, a PersistentVolume is the better fit.
All the environment variables (-e) can be stored in a ConfigMap and included in your Deployment.
I suggest you have a look at a sample deployment: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
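For concreteness, here is roughly what that docker run command could look like as a Deployment. This is a minimal sketch, not a tested manifest; the names and labels are illustrative, and the environment variables could equally come from a ConfigMap as mentioned above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: xwiki
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xwiki
  template:
    metadata:
      labels:
        app: xwiki
    spec:
      containers:
      - name: xwiki
        image: xwiki:mysql-tomcat
        ports:
        - containerPort: 8080        # replaces -p 8080:8080; expose it to clients with a Service
        env:                          # replaces the -e flags
        - name: DB_USER
          value: xwiki
        - name: DB_PASSWORD
          value: xwiki
        - name: DB_DATABASE
          value: xwiki
        - name: DB_HOST
          value: mysql-xwiki          # assumes a Service named mysql-xwiki in front of the MySQL pod

The --net and DB_HOST wiring from docker run is replaced by a Service: MySQL would typically be its own Deployment plus a Service named mysql-xwiki, and pods in the cluster reach it by that DNS name.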
Please refer to the following docs for reference:
For defining env vars:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
For defining Command and Arguments for a Container:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
For persistent volumes:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
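For the -v /my/own/xwiki:/usr/local/xwiki part specifically, the usual Kubernetes equivalent is a PersistentVolumeClaim mounted into the pod. A minimal sketch (the claim name xwiki-data and the 5Gi size are assumptions to adapt):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xwiki-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

In the pod spec of the Deployment, the claim is then mounted at the same path the -v flag used:

# fragment of spec.template.spec in the Deployment above
      volumes:
      - name: xwiki-data
        persistentVolumeClaim:
          claimName: xwiki-data
      containers:
      - name: xwiki
        image: xwiki:mysql-tomcat
        volumeMounts:
        - name: xwiki-data
          mountPath: /usr/local/xwiki   # same container path as -v /my/own/xwiki:/usr/local/xwiki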
Related
I'm stuck trying to store the OrientDB database and configuration outside of the Docker container I'm running. This is my first time using both Docker and OrientDB, so my confusion is multi-level.
Based on https://hub.docker.com/_/orientdb/ I have successfully run the command docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -e ORIENTDB_ROOT_PASSWORD=rootpwd orientdb, but I'm stuck trying to specify where on my local disk to store data and configuration so it's not lost when the container is stopped/removed.
I tried adding the -v <databases_path>:/orientdb/databases option, but to no avail. I'm probably missing something very basic (since this is my first hands-on experience with Docker and OrientDB). Trying to set up volumes in Docker Desktop and other trial-and-error tests have also failed.
Can anyone help? Or point me to some tutorial where I can learn because I'm stuck.
Thanks to #nulldroid I finally figured it out. It was the syntax that messed me up, as usual. The following command worked for me. No need to set up volumes etc., just a correctly formatted path to the directory I had already created, using "/d/" at the beginning for the Windows "D:" drive:
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /d/docker/test1/databases:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest
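If you'd rather not depend on a host path at all, a named Docker volume also survives container removal. A sketch, with orientdb_databases as an arbitrary volume name:

docker volume create orientdb_databases
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v orientdb_databases:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest

Docker then manages where the data lives; docker volume inspect orientdb_databases shows the location on disk.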
I am trying to follow this tutorial on setting up a RabbitMQ cluster with Docker: https://levelup.gitconnected.com/setting-up-rabbitmq-cluster-c247d61385ed
I get to the point of running the following command, which I will also need to run for the other two nodes:
docker run -d --rm --net rabbit -v C:\RabbitPrototype\config\rabbit-1/:/config/ -e RABBITMQ_CONFIG_FILE=/config/rabbitmq -e RABBITMQ_ERLANG_COOKIE=ABCDYJLFQNTHDRZEPLOZ --hostname rabbit-1 --name rabbit-1 -p 8081:15672 rabbitmq:3.9-management
I can see it run in the Docker graphical container view, but after a couple of seconds it disappears, so it seems the container stops running. What do I need to do to keep it running, and are there any logs to look at to see why it stopped?
I have removed the --rm flag mentioned by #Omer.
Now I get this error:
2021-12-16 16:24:41.403174+00:00 [erro] <0.130.0> Failed to load advanced configuration file "/config/rabbitmq.config": 1: syntax error before: '.'
My config file that I am trying to load looks like this (copied from the tutorial page), so I'm currently not sure what the issue is with the . (dot) in the loopback_users.guest part on line one:
loopback_users.guest = false
listeners.tcp.default = 5672
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbit-1
cluster_formation.classic_config.nodes.2 = rabbit@rabbit-2
cluster_formation.classic_config.nodes.3 = rabbit@rabbit-3
The issue, going by the error message, seems to be that RabbitMQ thinks you are providing an advanced configuration file instead of the normal configuration file - https://www.rabbitmq.com/configure.html#advanced-config-file . Even though RabbitMQ 3.7+ supports the sysctl-style format you used, the advanced configuration file still uses the classic configuration format (https://www.rabbitmq.com/configure.html#config-file-formats), which explains the syntax error.
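For illustration only (a sketch; check the RabbitMQ docs for the exact keys), the classic/advanced format is made of Erlang terms, so a loopback_users setting would look roughly like this:

%% classic (Erlang terms) format, as used by advanced.config / *.config files
[
  {rabbit, [
    {loopback_users, []}
  ]}
].

A sysctl-style line such as loopback_users.guest = false is not valid in that format, which is why the parser reports a syntax error at the dot.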
From the docs - https://www.rabbitmq.com/configure.html#configuration-files
I'm not sure why it would pick up the value of RABBITMQ_CONFIG_FILE as the advanced config file instead of the default one.
Can you update the question with the full logs? Even after the container has stopped, you can check its logs using:
docker logs rabbit-1
I seem to have something running with this command now. I renamed rabbitmq.config to rabbitmq.conf and also mounted it into /etc/rabbitmq/, which seems to be the default location:
docker run --net rabbit -v C:\\RabbitPrototype\config\rabbit-1\:/etc/rabbitmq/ -e RABBITMQ_ERLANG_COOKIE=ABCDYJLFQNTHDRZEPLOZ --hostname rabbit-1 --name rabbit-1 -p 8081:15672 rabbitmq:3.9-management
I'm testing my Lambda function wrapped in a Docker image and provided the environment variable AWS_PROFILE=my-profile for the Lambda function. However, I got an error: "The config profile (my-profile) could not be found", while this information is there in the ~/.aws/credentials and ~/.aws/config files. Below are my commands:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 <image>:latest lambda_func.handler
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body":{"x":5, "y":6}}'
The thing is that if I just run the Lambda function as a separate Python script, then it works.
Can someone show me what went wrong here?
Thanks
When AWS shows how to use their containers, such as for local AWS Glue, they share ~/.aws/ with the container in read-only mode using the volume option:
-v ~/.aws:/root/.aws:ro
Thus, if you wish to follow the AWS example, your docker command could be:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 -v ~/.aws:/root/.aws:ro <image>:latest lambda_func.handler
The other way is to pass the AWS credentials using Docker environment variables, which you are already trying.
You need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Your home directory (~) is not copied to the Docker container, so AWS_PROFILE will not work.
See here for an example: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
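For example, a hedged variant of your command that passes static credentials instead of a profile (the values are placeholders; the region variable may or may not be needed depending on your code):

docker run -e BUCKET_NAME=my-bucket -e AWS_ACCESS_KEY_ID=<your-access-key-id> -e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> -e AWS_DEFAULT_REGION=<your-region> -p 9000:8080 <image>:latest lambda_func.handler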
I am trying to create a Docker container from a RabbitMQ image and then join that instance to an existing cluster.
However, I get the error incompatible_feature_flags.
It looks like the created image automatically enables some feature flags that are not enabled and cannot be enabled in the existing cluster.
I am running the container using the following code:
docker run -d --hostname xxx.yyy.com.co --name rabbit -p 15672:15672 -p 5672:5672 -p 4369:4369 --add-host='rabbit1:xxx.xxx.xx.xxx' --network=host -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin -e RABBITMQ_ERLANG_COOKIE='xxxxxxxx' -e ERL_EPMD_PORT=4369 rabbitmq:latest
I think that I can enable/disable feature flags as parameters when starting the container, but I have not been able to find anything in the documentation.
I would appreciate any help
It may be caused by a version mismatch between the two RabbitMQ applications.
E.g. one is 3.7.x, but the other is 3.8.x.
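Regarding enabling/disabling flags when starting the container: two things that may help, hedged because the exact behaviour depends on your RabbitMQ version. You can compare what each node supports with rabbitmqctl, and recent images let you pin the flags enabled on a node's first boot through the RABBITMQ_FEATURE_FLAGS environment variable (the flag names below are examples only):

# on each node, compare enabled vs. supported flags
rabbitmqctl list_feature_flags

# example only: pin the flags a brand-new node enables on first boot
docker run -d --name rabbit -e RABBITMQ_FEATURE_FLAGS=quorum_queue,virtual_host_metadata rabbitmq:3.8-management

If the existing cluster is older, matching its RabbitMQ version in the image tag (instead of rabbitmq:latest) is usually the simpler fix.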
I am referring to this site to link containers.
When two containers are linked, Docker will set some environment variables in the target container to enable programmatic discovery of information related to the source container.
This is the line specified in the documentation. When I look at /etc/hosts I can see entries for both containers, but when I run the env command, I don't see any of the port mappings described on that Docker site.
Works fine for me:
$ docker run -d --name redis1 redis
0b869d9f5a43e24976beec6c292839ea2c67983012e50893f0b557cd8bc0c3b4
$ docker run --link redis1:redis1 debian env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c23a30b8618f
REDIS1_PORT=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP_ADDR=172.17.0.3
REDIS1_PORT_6379_TCP_PORT=6379
REDIS1_PORT_6379_TCP_PROTO=tcp
REDIS1_NAME=/berserk_nobel/redis1
REDIS1_ENV_REDIS_VERSION=2.8.19
REDIS1_ENV_REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-2.8.19.tar.gz
REDIS1_ENV_REDIS_DOWNLOAD_SHA1=3e362f4770ac2fdbdce58a5aa951c1967e0facc8
HOME=/root
If you're still having trouble, you need to provide a way we can recreate your problem.