I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known, namespace, or a better approach?
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way, though.
We've got a command to link services from one namespace to another - for example, to link services from staging into your preview environment via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature.
Another option is to create your own Job in charts/preview/templates/myjob.yaml; in that Job, create whatever Secrets you need however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
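For illustration, a rough sketch of such a hook Job is below. The secret name (my-shared-secret) and source namespace (jx-staging) are just placeholders, and the Job's service account would need RBAC permission to read secrets in that source namespace.
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-shared-secrets
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: copy-secrets
        image: bitnami/kubectl:latest
        command:
        - /bin/sh
        - -c
        # Copy the shared secret from the known namespace into the preview namespace
        - kubectl get secret my-shared-secret --namespace=jx-staging --export -o yaml | kubectl apply --namespace={{ .Release.Namespace }} -f -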
I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using BitBucket Pipeline.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for your help in advance.
To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, which is just a literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format
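For example, with the repository's default Pipelines SSH key configured (Repository settings > SSH keys) and its public key present on the server, the pipe can be used without passing an explicit key, roughly:
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"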
We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
...
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper, to log all the ins and outs to a file, and try and debug what is happening. However, it appears as if the helper isn't getting called at all, as there is nothing in the log file.
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml file to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) For the docker push, we added a --config docker-config flag and wrote an appropriate config to docker-config/config.json; a sketch of both pieces follows this list.
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
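For reference, the relevant pieces looked roughly like this (registry host and paths are placeholders, not our exact values). In the runner's config.toml, the extra volume:
[[runners]]
  [runners.docker]
    volumes = ["/usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login", "/cache"]
and the client config written to docker-config/config.json, used via docker --config docker-config push <image>:
{
  "credHelpers": {
    "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}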
I am running docker-ejabberd on ECS and all works fine. Now I want to replace the MySQL user/pass that exist in the ejabberd.yml file with the environment variables passed to the image when running the container. There is no clear way described, on the docker-ejabberd wiki or anywhere else, on how to do that simply. Has anyone faced a similar situation, and how did you solve it?
For example in the ejabberd.yml i have this section:
sql_server: ${MYSQL_SERVER}
sql_database: ${MYSQL_DATABASE_NAME}
sql_username: ${MYSQL_USERNAME}
sql_password: ${MYSQL_PASSWORD}
sql_port: ${MYSQL_PORT}
I want to pass those vars as env vars during docker run and have them substituted before the container starts.
Side note: We are using ECS and passing the variables through the task definition without any issue.
I went through some topics that recommend using the ENTRYPOINT command to run a script that replaces the file before running the container, but I'm not sure if that's a good idea.
Also, I had the idea of replacing the variables in this ejabberd.yml file in the CI/CD pipeline, just before building the image, while fetching the code from the git repository and creating the image on AWS ECR. Would that work?
I want to replace the MySQL user/pass that exist in the ejabberd.yml file with the environment variables passed to the image when running the container.
The ejabberd.yml file is read and parsed by the yconf library (https://github.com/processone/yconf), and I doubt it supports such a thing.
I went through some topics that recommend using the ENTRYPOINT command to run a script that replaces the file before running the container, but I'm not sure if that's a good idea.
Following that recommendation, if you don't want to mess with the whole ejabberd.yml and let a script manipulate it, you can ensure that only those specific options are parameterized:
You can have a script write those options to a small file, and then include the options from that small file into ejabberd.yml using
https://docs.ejabberd.im/admin/configuration/file-format/#include-additional-files
For example, in your ejabberd.yml, put something like this:
include_config_file:
  /etc/ejabberd/database.yml:
    allow_only: [sql_server, sql_database, sql_username, sql_password, sql_port]
Then write your script that generates that small file. For example:
$ generate-database-config.sh
$ cat /etc/ejabberd/database.yml
sql_server: "localhost"
sql_database: "ejaup"
sql_username: "ejabberd_test"
sql_password: "ejabberd_test"
sql_port: 3306
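As a rough sketch, assuming the MYSQL_* variables from the question are passed to the container, generate-database-config.sh could be as simple as:
#!/bin/sh
# Write the small include file from the container's environment variables
cat > /etc/ejabberd/database.yml <<EOF
sql_server: "${MYSQL_SERVER}"
sql_database: "${MYSQL_DATABASE_NAME}"
sql_username: "${MYSQL_USERNAME}"
sql_password: "${MYSQL_PASSWORD}"
sql_port: ${MYSQL_PORT}
EOF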
While setting up and configuring some Docker containers, I asked myself how I could automatically edit config files inside the container after the containerized service finishes installing (since the config files are only created during installation).
I have tried doing that with a shell script added as the entrypoint in the Dockerfile. However, as I said, the config file does not exist right at the beginning, and hence the sed commands in the script fail.
Mounting a config file with - ./myConfig.conf:/xy/myConfig.conf is also not an option because the config contains some installation-dependent options.
The most reasonable solution I have found so far is to manually run a script that edits the config after the installation has finished, with docker exec -i mycontainer sh < editconfig.sh
EDIT
My question is formulated in general terms. However, it arose while working with Nextcloud in a docker-compose setup similar to the official example. That container contains a config.php file, which is the general config file of Nextcloud and is generated during the installation. Certain properties of that file have to be changed (there are only a very limited number of environment variables to specify). Since I am conducting some tests with this container, I have to repeatedly reinstall it and thus re-edit the config file.
Maybe you can try another approach and have your config file/application pick its settings from environment variables. That would be consistent with the 12-factor app methodology; see here.
As I understand your case, you need to start your container by creating the config from some template.
I see a number of options for doing this:
Use a script that generates the config from a template and from command-line arguments or environment variables (Jinja2 and Python, for example, or Mustache and Node.js). In this case, your entrypoint renders the template and then starts the application; to change the config you will have to restart the service (container). A minimal sketch follows this list.
Run a service that stores the configuration and renders your configuration at run time. Personally, I like consul-template; we actively use this engine in our environment and have had no problems so far. In this case, the config is more dynamic and can be changed "on the fly". In your container you will have two processes: the application and the consul-template daemon. Obviously, you will need to run and maintain Consul. To reload the config, restarting the application process is enough.
Run a custom script to create the config. :)
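As a minimal sketch of option 1, using plain envsubst from gettext instead of Jinja2 or Mustache (the template path is hypothetical; the target path comes from the question above):
#!/bin/sh
# Render the config from a template shipped in the image, then start the service
envsubst < /templates/myConfig.conf.template > /xy/myConfig.conf
exec "$@"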
So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode, then all my environment specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass environment variables at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and Dockerfile and upload it to Beanstalk
To see the result, execute "docker inspect CONTAINER_ID" inside the EC2 instance and you will see the environment variable.
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current which is where your code should be within the EB instance
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
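For the Python case, a minimal sketch with python-dotenv (the variable name APP_ENV is just an example, not something set by the config above):
# Load the .env file written by the ebextension and read a setting from it
import os
from dotenv import load_dotenv

load_dotenv("/var/app/current/.env")
app_env = os.getenv("APP_ENV")  # e.g. "staging" or "production"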
I don't think the docs are a miss, as Rohit Banga's answer suggests, though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python), etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.