I want to modify the password of the container created by the elasticsearch image, and I have executed the following command:
setup-passwords auto
but it didn't work:
unexpected response code [403] from GET http://172.17.0.2:9200/_xpack/security/_authenticate?pretty
Please help me. Thank you.
When using Docker it is usually best to configure services via environment variables. To set a password for the Elasticsearch service you can run the container with the env variable ELASTIC_PASSWORD:
docker run -e ELASTIC_PASSWORD=`openssl rand -base64 12` -p 9200:9200 --rm --name elastic docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
openssl rand -base64 12 generates a random value for the password.
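Note that the command substitution above throws the generated password away, so a more practical sketch (my variation, not part of the original answer) captures it in a shell variable first and then checks it against the same _authenticate endpoint from the error message, using the built-in elastic superuser:

PASSWORD=$(openssl rand -base64 12)
echo "elastic password: $PASSWORD"

# Start Elasticsearch with the chosen bootstrap password
docker run -d -e ELASTIC_PASSWORD="$PASSWORD" -p 9200:9200 --rm --name elastic \
    docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4

# Verify: this should return the user details instead of a 403
curl -u "elastic:$PASSWORD" http://localhost:9200/_xpack/security/_authenticate?pretty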
I have quite a weird problem and I'm really not sure where it comes from. I'm trying to run Grafana inside a Docker container and want to set some grafana.ini values through environment variables in the docker run command.
docker run -d -v grafana_data:/var/lib/grafana -e "GF_SECURITY_ADMIN_PASSWORD=123456" -e "GF_USERS_DEFAULT_THEME=light" --name=grafana -p 3000:3000 grafana/grafana
The default theme gets changed as wanted, but the admin password stays the same. I've checked for typos like a million times and could not find one. I've tried with quotes around '123456' and without, all with the same result. Is there a reason why I can't change this value?
Thanks in advance!
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#admin_password
The password of the default Grafana Admin. Set once on first-run.
Please note the last sentence: the variable is used only on first run. It won't change the admin password if you already changed it before, because that new password is stored in the database.
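If the container has already been initialized, one option (my suggestion, not from the Grafana docs quoted above) is to reset the stored password with grafana-cli inside the running container:

# Reset the admin password stored in Grafana's database
# ('grafana' is the container name from the run command above)
docker exec -it grafana grafana-cli admin reset-admin-password 123456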
I would like to create a SQL Server container in Azure CLI with the following statement:
az container create --resource-group rgtest \
    --name db_test \
    --image mcr.microsoft.com/mssql/server:2017-CU16-ubuntu \
    --environment-variables ACCEPT_EULA=Y MSSQL_SA_PASSWORD= password_test \
    --dns-name-label dns_test \
    --cpu 2 \
    --memory 2 \
    --port 1433
I was expecting a JSON output that contains all the details and properties of the container, but unfortunately, nothing is returned. Am I doing anything wrong?
There are some formatting mistakes in the command; you could use it like this:
az container create --resource-group rgtest \
    --name dbtest \
    --image mcr.microsoft.com/mssql/server:2017-CU16-ubuntu \
    --environment-variables ACCEPT_EULA=Y MSSQL_SA_PASSWORD=password_test \
    --dns-name-label dbtest \
    --cpu 2 \
    --memory 2 \
    --port 1433
More reference: az container create
I found the solution. Actually, for MSSQL_SA_PASSWORD, I had provided some special characters (symbols) to make it strong. Because of these special characters, the command was not working. Once I removed those, the command worked like a champ.
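As a side note (my assumption, not part of the original answer), you can often keep the special characters by quoting the whole assignment so the shell doesn't interpret them; the password value here is just an illustration:

# Single quotes stop the shell from expanding characters like $ or !
az container create --resource-group rgtest --name dbtest \
    --image mcr.microsoft.com/mssql/server:2017-CU16-ubuntu \
    --environment-variables ACCEPT_EULA=Y 'MSSQL_SA_PASSWORD=P@ssw0rd$123' \
    --dns-name-label dbtest --cpu 2 --memory 2 --port 1433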
In Docker I created a network:
docker network create mysql-network
Then I created a MySQL container:
docker container run -d -p 3306:3306 --net=mysql-network --name mysql-hibernate -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=test -v hibernate:/var/lib/mysql mysql
When I run docker ps, everything seems OK.
This is my application.properties:
spring.jpa.hibernate.ddl-auto=create
useSSL=false
spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL57Dialect
I also tried
spring.datasource.url=jdbc:mysql://mysql-hibernate:3306/test
But I always get an error on startup:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
How is it possible that it doesn't know the database 'test'? I specified the name in Docker like this: -e MYSQL_DATABASE=test
What am I missing?
I know it is a bit late, but I'll answer anyway so people coming here can benefit ;)
Your configuration overall seems alright. When you get an error like this, you can add the createDatabaseIfNotExist flag set to true to the line in your application.properties where you set the datasource URL.
So you will come up with something like this:
spring.datasource.url=jdbc:mysql://localhost:3306/test?createDatabaseIfNotExist=true
Hope this helps!
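Another thing worth checking (my assumption, not part of the answer above): the official mysql image only creates the MYSQL_DATABASE database when it initializes an empty data directory, so if the hibernate volume already held data from an earlier run, 'test' would never be created. Recreating the volume forces a fresh initialization:

# Remove the container and its named volume, then start over;
# MySQL will re-initialize and create the 'test' database
docker container rm -f mysql-hibernate
docker volume rm hibernate
docker container run -d -p 3306:3306 --net=mysql-network --name mysql-hibernate \
    -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=test \
    -v hibernate:/var/lib/mysql mysql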
RabbitMQ in Docker lost its data after I removed the container, even though I did not remove the volumes.
My Dockerfile:
FROM rabbitmq:3-management
ENV RABBITMQ_HIPE_COMPILE 1
ENV RABBITMQ_ERLANG_COOKIE "123456"
ENV RABBITMQ_DEFAULT_VHOST "123456"
My run script:
IMAGE_NAME="service-rabbitmq"
TAG="${REGISTRY_ADDRESS}/${IMAGE_NAME}:${VERSION}"
echo $TAG
docker rm -f $IMAGE_NAME
docker run \
-itd \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
After removing the container, all data are lost.
How do I configure RabbitMQ in docker with persistent data?
RabbitMQ uses the hostname as part of the folder name in the mnesia directory. Maybe add a --hostname some-rabbit to your docker run?
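For example, a sketch that just adds the flag to the run script from the question:

docker run \
    -itd \
    -v "rabbitmq_log:/var/log/rabbitmq" \
    -v "rabbitmq_data:/var/lib/rabbitmq" \
    --hostname some-rabbit \
    --name "service-rabbitmq" \
    --dns=8.8.8.8 \
    -p 8080:15672 \
    $TAG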
I had the same issue and I found the answer here.
TL;DR
Didn't do too much digging on this, but it appears that the simplest way to do this is to change the hostname as Pedro mentions above.
MORE INFO:
Using RABBITMQ_NODENAME
If you want to edit the RABBITMQ_NODENAME variable via Docker, it looks like you need to add a hostname as well since the Docker hostnames are generated as random hashes.
If you change the RABBITMQ_NODENAME var to something static like my-rabbit, RabbitMQ will throw something like an "nxdomain not found" error because it's looking for something like my-rabbit@<docker_hostname_hash>. If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value like so, my-rabbit@<docker_hostname_hash>, I believe it would work.
UPDATE
I previously said,
If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value like so, my-rabbit@<docker_hostname_hash>, I believe it would work.
This would not work as described, precisely because the default Docker hostname is randomly generated at launch if it is not assigned explicitly. The hurdle would actually be to make sure you use the EXACT SAME <docker_hostname_hash> as your originating run so that the data directory gets picked up correctly. This would be a pain to implement dynamically/robustly. It would be easiest to use an explicit hostname as described below.
The alternative would be to set the hostname to a value you choose -- say, app-messaging -- AND ALSO set the RABBITMQ_NODENAME var to something like rabbit@app-messaging. This way you are controlling the full node name that will be used in the data directory.
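A minimal sketch of that combination, using the names above:

# Pin both the hostname and the node name so the mnesia
# directory is always rabbit@app-messaging
docker run -d \
    --hostname app-messaging \
    -e RABBITMQ_NODENAME=rabbit@app-messaging \
    -v "rabbitmq_data:/var/lib/rabbitmq" \
    -p 8080:15672 \
    rabbitmq:3-management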
Using Hostname
(Recommended)
That said, unless you have a reason NOT to change the hostname, changing the hostname alone is the simplest way to ensure that your data will be mounted to and from the same point every time.
I'm using the following Docker Compose file to successfully persist my setup between launches.
version: '3'
services:
  rabbitmq:
    hostname: 'mabbit'
    image: "${ARTIFACTORY}/rabbitmq:3-management"
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - "./data:/var/lib/rabbitmq/mnesia/"
    networks:
      - rabbitmq
networks:
  rabbitmq:
    driver: bridge
This creates a data directory next to my compose file and persists the RabbitMQ setup like so:
./data/
    rabbit@mabbit/
    rabbit@mabbit-plugins-expand/
    rabbit@mabbit.pid
    rabbit@mabbit-feature_flags
I'm trying to deploy an S3-backed private docker registry and I'm getting an error when I try to start the registry container. My docker-compose.yml looks like this:
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_S3_ACCESSKEY: myKey
    REGISTRY_STORAGE_S3_SECRETKEY: mySecret
    REGISTRY_STORAGE_S3_BUCKET: docker.registry.bucket
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: registry/data
  volumes:
    - /home/docker/certs:/certs
And when I try to run sudo docker-compose up -d I get this error message:
registry_1 | panic: multiple storage drivers specified in configuration or environment: s3, filesystem
It seems like I'm missing something in my environment to explicitly choose s3 but it's not clear from the docs how to do this.
I was trying to override the storage configuration by using env vars. This workaround did the job (in JSON format):
{
  "REGISTRY_STORAGE": "s3",
  "REGISTRY_STORAGE_S3_REGION": <REGION>,
  "REGISTRY_STORAGE_S3_BUCKET": <BUCKET_NAME>,
  "REGISTRY_STORAGE_S3_ROOTDIRECTORY": <ROOT_PATH>,
  "REGISTRY_STORAGE_S3_ACCESSKEY": <KEY>,
  "REGISTRY_STORAGE_S3_SECRETKEY": <SECRET>
}
It looks like by defining REGISTRY_STORAGE we override the one in config.yml.
The REGISTRY_STORAGE environment variable is missing. It needs to be added to the environment block.
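Applied to the compose file from the question, that would look like this (a sketch; all other keys and values are carried over unchanged):

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    # Explicitly select the s3 driver so it doesn't clash with
    # the default filesystem driver from the built-in config.yml
    REGISTRY_STORAGE: s3
    REGISTRY_STORAGE_S3_ACCESSKEY: myKey
    REGISTRY_STORAGE_S3_SECRETKEY: mySecret
    REGISTRY_STORAGE_S3_BUCKET: docker.registry.bucket
    REGISTRY_STORAGE_S3_ROOTDIRECTORY: registry/data
  volumes:
    - /home/docker/certs:/certs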
You're getting this error because the registry:2 image comes with a default config file /etc/docker/registry/config.yml which uses filesystem storage.
By adding S3 storage via environment variables, there are now multiple storage drivers configured, which I guess isn't supported.
I don't know of any way to remove configuration options with environment variables, so I think you'll probably need to create a config file and mount it as a volume (http://docs.docker.com/registry/configuration/#overriding-the-entire-configuration-file)
I was able to get this working using environment variables. Here's the snippet from my script:
-e REGISTRY_STORAGE=s3 \
-e REGISTRY_STORAGE_S3_ACCESSKEY=$AWS_KEY \
-e REGISTRY_STORAGE_S3_SECRETKEY=$AWS_SECRET \
-e REGISTRY_STORAGE_S3_BUCKET=$BUCKET \
-e REGISTRY_STORAGE_S3_REGION=$AWS_REGION \
-e REGISTRY_STORAGE_S3_ROOTDIRECTORY=$BUCKET_PATH \
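For completeness, here's how those flags might sit in a full docker run command (a sketch; the shell variables are assumed to be exported beforehand):

docker run -d -p 5000:5000 --name registry \
    -e REGISTRY_STORAGE=s3 \
    -e REGISTRY_STORAGE_S3_ACCESSKEY=$AWS_KEY \
    -e REGISTRY_STORAGE_S3_SECRETKEY=$AWS_SECRET \
    -e REGISTRY_STORAGE_S3_BUCKET=$BUCKET \
    -e REGISTRY_STORAGE_S3_REGION=$AWS_REGION \
    -e REGISTRY_STORAGE_S3_ROOTDIRECTORY=$BUCKET_PATH \
    registry:2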
In my case I had an environment variable for data in docker-compose.yml and the S3 configuration in config.yml. It took some time, but once the environment variables were commented out, registry:2 started properly.