How can I set the initial password for an Elasticsearch database using docker-compose?
bin/elasticsearch-setup-passwords auto -u "http://192.168.2.120:9200"
See this:
The initial password can be set at start up time via the ELASTIC_PASSWORD environment variable:
docker run -e ELASTIC_PASSWORD=MagicWord docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.4
Also, for newer images (docker.elastic.co/elasticsearch/elasticsearch:7.14.0), the ELASTIC_PASSWORD_FILE environment variable was added, as mentioned in Configuring Elasticsearch with Docker:
For example, to set the Elasticsearch bootstrap password from a file, you can bind mount the file and set the ELASTIC_PASSWORD_FILE environment variable to the mount location. If you mount the password file to /run/secrets/bootstrapPassword.txt, specify:
-e ELASTIC_PASSWORD_FILE=/run/secrets/bootstrapPassword.txt
So adding these environment variables in your docker-compose.yaml should work for you.
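For example, a minimal docker-compose.yaml sketch (the single-node discovery setting and the port mapping are assumptions for a one-node development setup, not part of the original answer):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.0
    environment:
      # Sets the bootstrap password for the elastic user.
      - ELASTIC_PASSWORD=MagicWord
      - discovery.type=single-node
    ports:
      - 9200:9200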
I have a Cassandra container in which I overwrite the cassandra.yaml file with the updated property:
authenticator: PasswordAuthenticator
instead of
authenticator: AllowAllAuthenticator
This allows me to create a new superuser on the new instance.
From this instance, I then create a new image named cassandra-new which has the new cassandra.yaml file. So when I start it, it allows me to create a new role for my Cassandra DB.
The problem is that I must manually go inside the instance:
docker exec -it cassandra-new /bin/bash
Then I have to manually type in:
cqlsh -u cassandra -p cassandra
And then I can write my script:
CREATE ROLE IF NOT EXISTS some WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'supersome';
LIST ROLES;
How can I do this automatically, without ENTRYPOINT? (I have already tried that for two days now and got tired of it; it's not working.)
(Please provide code instead of words, because I am a newbie.)
The Dockerfile is created by a shell script:
if [[ ! -e Dockerfile ]]; then
  cat > Dockerfile << EOF
FROM cassandra:latest
COPY cassandra.yaml /etc/cassandra/cassandra.yaml
EOF
fi
docker-entrypoint.sh was not changed, so it is the same as the default one provided by the cassandra:latest image.
Unless the Cassandra docker image developers support special environment variables or init scripts that run automatically as part of their entrypoint (MySQL, for example, has a folder where you mount .sql, .sh, or .gz files and it executes them accordingly), you either need a custom entrypoint or have to do this manually.
One way to do it would be a simple wrapper script that starts the Cassandra container and then runs those commands. Depending on your environment that can be a shell or batch script, so this is quite a custom solution; a sketch follows below.
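A minimal sketch of such a wrapper script, assuming the image and container are both named cassandra-new as in the question:

#!/bin/sh
# Start the container in the background.
docker run -d --name cassandra-new cassandra-new

# Wait until cqlsh can connect; Cassandra can take a while to come up.
until docker exec cassandra-new cqlsh -u cassandra -p cassandra -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do
  sleep 5
done

# Create the superuser role using the default credentials.
docker exec cassandra-new cqlsh -u cassandra -p cassandra \
  -e "CREATE ROLE IF NOT EXISTS some WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'supersome';"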
Since the Cassandra entrypoint is probably what starts the service and does the init process, the only way to actually manage this would be a custom entrypoint.
Have a look at this gist that shows how to create a cassandra docker image that can execute any bash or cql scripts upon startup.
Following this gist, you can simply add a CQL script inside the container, e.g. /docker-entrypoint-initdb.d/create_roles.cql, with the following content:
CREATE ROLE IF NOT EXISTS some WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'supersome';
LIST ROLES;
It will be executed automatically on startup.
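With an image built along the lines of that gist, getting the script into place is just a COPY in the Dockerfile (the base image name here is a placeholder for whatever image you build from the gist):

FROM my-cassandra-init-image
COPY create_roles.cql /docker-entrypoint-initdb.d/create_roles.cql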
I have an app that runs on several Docker containers. To simplify my problem, let's say I have three containers: one for MySQL and two for two instances of the API (sharing the same volume where the code is, but each with a different env specifying different database settings), as configured in the following docker-compose.yml:
services:
  api-1:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_1
  api-2:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_2
In a Makefile I have rules for deploying the containers and installing the database for each of my api instances.
What I am trying to achieve is a make rule that dumps a database given an env. Since I have no MySQL client installed on my API instances, I thought there should be a way to extract the env variables I need (with printenv VARNAME) from an API container and then use them in the database container.
Does anyone know how this could be achieved?
Assuming that it's an environment variable that you set using the -e option to docker run, you could do something like this:
docker exec api_container sh -c 'echo $VARNAME'
If it is an environment variable that was set inside the container, e.g. from a script, then you're mostly out of luck. You could of course inspect /proc/<pid>/environ, but that's hacky and I wouldn't recommend it.
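For the dump rule the question asks about, that one-liner composes into something like this sketch (the container names api-1 and mysql and the root password are assumptions, not from the question):

# Read DB_NAME from the api container, then dump that database
# from the MySQL container; the redirect writes the file on the host.
DB_NAME=$(docker exec api-1 sh -c 'echo $DB_NAME')
docker exec mysql mysqldump -u root -psecret "$DB_NAME" > "$DB_NAME.sql"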
It also sounds as if you would benefit from using something like docker-compose to manage your containers.
I have this Dockerfile that is working correctly:
https://github.com/shantanuo/docker/blob/master/packetbeat-docker/Dockerfile
The only problem is that when my host changes, I need to modify the packetbeat.yml file:
hosts: ["https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243"]
password: "rzmYYJUdHVaglRejr8XqjIX7"
Is there any way to simplify this change? Can I use environment variables to replace these two values?
Set environment variables in your docker container first.
You can either set them by accessing your container
docker exec -it CONTAINER_NAME /bin/bash
export HOST="https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243"
export PASS="rzmYYJUdHVaglRejr8XqjIX7"
Or in your Dockerfile
ENV HOST https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243
ENV PASS rzmYYJUdHVaglRejr8XqjIX7
And then in the packetbeat.yml:
hosts: ['${HOST}']
password: '${PASS}'
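Alternatively, in the spirit of the docker run examples above, you can pass them at container start time (the image name here is a placeholder):

docker run -d \
  -e HOST="https://944fe807b7525eaf163f502e08a412c5.us-east-1.aws.found.io:9243" \
  -e PASS="rzmYYJUdHVaglRejr8XqjIX7" \
  my-packetbeat-image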
I'm currently trying to use variable substitution in a docker-compose.yml file. This file contains the following:
jenkins:
  image: "jenkins:${JENKINS_VERSION}"
  external_links:
    - mongodb:mongo
  ports:
    - 8000:8080
When I try to start everything up, docker-compose shows a warning saying that the variable is not set. I suspect this is caused by the use of sudo to start docker-compose. My setup (a Jenkins docker container which has access to docker and docker-compose via volume mounts) currently requires sudo. Would it be better to stop docker requiring sudo, or is there another way to fix this without changing the current setup?
sudo -E preserves the user environment when running the command. It should do what you want.
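For example (the version number here is just an illustration):

export JENKINS_VERSION=2.60.3
sudo -E docker-compose up -d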
I have my app inside a container and it's reading environment variables for passwords and API keys to access services. If I run the app on my machine (not inside docker), I just export SERVICE_KEY='wefhsuidfhda98' and the app can use it.
What's the standard approach to this? I was thinking of having a secret file which would get added to the server with export commands, and then running source on that file.
I'm using docker & fig.
The solution I settled on was the following: save the environment variables in a secret file and pass those on to the container using fig.
have a secret_env file with secret info, e.g.
export GEO_BING_SERVICE_KEY='98hfaidfaf'
export JIRA_PASSWORD='asdf8jriadf9'
have secret_env in my .gitignore
have a secret_env.template file for developers, e.g.
export GEO_BING_SERVICE_KEY='' # can leave empty if you wish
export JIRA_PASSWORD='' # write your pass
in my fig.yml I send the variables through:
environment:
  - GEO_BING_SERVICE_KEY
  - JIRA_PASSWORD
call source secret_env before building
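Put together, the launch sequence is then (a sketch, using fig as in the question):

source secret_env
fig build
fig up -d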
docker run provides environment variables:
docker run -e SERVICE_KEY=wefsud your/image
Then your application would read SERVICE_KEY from the environment.
https://docs.docker.com/reference/run/
In fig, you'd use
environment:
  SERVICE_KEY: wefsud
in your app spec. http://www.fig.sh/yml.html
From a security perspective, the former solution is no worse than running it on your host if your docker binary requires root access. If you're allowing 'docker' group users to run docker, it's less secure, since any docker user could docker inspect the running container. Running on your host, you'd need to be root to inspect the environment variables of a running process.
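For illustration, the inspection that this answer warns about is a one-liner for anyone in the docker group:

docker inspect --format '{{.Config.Env}}' container_name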