Bitbucket auto deploy to Linux server (DigitalOcean Droplet)

I have encountered a problem while attempting to deploy my code to a Droplet server (running Ubuntu) using BitBucket Pipeline.
I have set the necessary environment variables (SSH_PRIVATE_KEY, SSH_USER, SSH_HOST) and added the public key of the SSH_PRIVATE_KEY to the ~/.ssh/authorized_keys file on the server. When I manually deploy from the server, there are no issues with cloning or pulling. However, during the automatic CI deployment stage, I am encountering the error shown in the attached image.
This is my .yml configuration.
Thanks for your help in advance.

To refer to the values of the variables defined in the configuration, your script should use $VARIABLE_NAME, not VARIABLE_NAME, as the latter is just a literal string.
- pipe: atlassian/ssh-run:latest
  variables:
    SSH_KEY: $SSH_KEY
    SSH_USER: $SSH_USER
    SERVER: $SSH_HOST
    COMMAND: "/app/deploy_dev01.sh"
Also, note that some pitfalls exist when using an explicit $SSH_KEY; it is generally easier and safer to use the default key provided by Bitbucket, see Load key "/root/.ssh/pipelines_id": invalid format
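For context, a complete deployment step using that pipe might look roughly like the following sketch in bitbucket-pipelines.yml (the branch name, step name, and deployment environment are illustrative assumptions, not taken from the poster's file; SSH_KEY is the base64-encoded private key stored as a secured repository or deployment variable):

pipelines:
  branches:
    develop:                              # assumed branch that triggers the deploy
      - step:
          name: Deploy to Droplet
          deployment: staging             # assumed deployment environment
          script:
            - pipe: atlassian/ssh-run:latest
              variables:
                SSH_KEY: $SSH_KEY         # base64-encoded private key (secured variable)
                SSH_USER: $SSH_USER
                SERVER: $SSH_HOST
                COMMAND: "/app/deploy_dev01.sh"

If the default Bitbucket Pipelines key is used instead, SSH_KEY can simply be omitted and the repository's Pipelines public key added to authorized_keys on the Droplet.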

Related

ActiveSupport::EncryptedFile::MissingKeyError: Missing encryption key to decrypt file with. Docker

I'm trying to deploy a rails 7 app to Fly.io, which uses Docker to deploy apps. I keep getting the below error when I try to deploy.
ActiveSupport::EncryptedFile::MissingKeyError: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/credentials/production.key or put it in the ENV['RAILS_MASTER_KEY'].
I've tried putting the following into my Dockerfile:
RUN --mount=type=secret,id=RAILS_MASTER_KEY \
RAILS_MASTER_KEY="$(cat /run/secrets/RAILS_MASTER_KEY)"
Then running:
fly deploy \
--build-secret RAILS_MASTER_KEY=the_actual_secret_key_here
That doesn't work. I've added the key as an environment variable to fly.io, but my understanding is this is failing because production keys aren't available at build time. Anyway, I'm stumped. Any ideas?
I'm new to docker, so it's likely I'm just missing something simple here.
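One detail worth noting about BuildKit build secrets (a hedged aside rather than a confirmed fix for this setup): a secret mounted with --mount=type=secret is only available during the single RUN instruction that declares it, so assigning it to a variable in a RUN of its own has no lasting effect. The usual pattern is to consume it in the same RUN that actually needs the key, for example if asset precompilation is the step that requires it:

# syntax=docker/dockerfile:1
# Illustrative only: read the build secret and use it within the same RUN instruction;
# nothing from the secret is persisted into the image layers.
RUN --mount=type=secret,id=RAILS_MASTER_KEY \
    RAILS_MASTER_KEY="$(cat /run/secrets/RAILS_MASTER_KEY)" \
    bundle exec rails assets:precompile

At runtime the key would still need to be supplied separately, e.g. via the environment variable already set on Fly.io.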

GitLab runner ignoring DOCKER_AUTH_CONFIG when credential helper specified

We have a GitLab CI pipeline that currently pulls images from our internal Docker registry, authenticated using a variable defined in .gitlab-ci.yml:
variables:
  ...
  DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}}'
This works fine.
We are trying to add a step to the end of the pipeline, to push our built Docker images to an Amazon ECR registry. We have installed the amazon-ecr-credential-helper on our runner instances, and given them the correct IAM permissions to be able to push to these registries. We have changed the .gitlab-ci.yml variable to:
DOCKER_AUTH_CONFIG: '{"auths": {"our.registry": {"auth": "$B64AUTH"}}, "credHelpers": { "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"}}'
However, this causes the runner to fail to authenticate to our internal registry, so it cannot pull the images in which our jobs run. Whereas previously we would see in our pipeline jobs' logs:
Authenticating with credentials from $DOCKER_AUTH_CONFIG
... we are no longer seeing this. We're not even getting to the step where we want to push to ECR.
We have added a wrapper script around the credential helper to log all the ins and outs to a file and help debug what is happening. However, it appears the helper isn't being called at all, as there is nothing in the log file.
What can we do to try and get this working?
Our problems here boiled down to a number of causes:
Since we referenced the credential helper in DOCKER_AUTH_CONFIG, we needed the helper installed on the machine spawning the runners. (We use the docker+machine runner.) This machine also needed IAM permissions. Without this, it just gave up on the DOCKER_AUTH_CONFIG variable completely (a questionable decision if you ask me...)
In order to authenticate from within the jobs and push the images to ECR, we needed to configure the helper there too. We did this by modifying our spawner's config.toml to add a volume /usr/bin/docker-credential-ecr-login:/usr/bin/docker-credential-ecr-login. (We also mounted the log directory and our helper wrapper.) In the docker push command, we added a --config docker-config flag and wrote an appropriate config to docker-config/config.json, as sketched below.
Finally, our job image was docker/compose, and our verbose wrapper was written in bash, which isn't included in that image, so that was another silent failure. 😖.
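For the second point, the push step ended up along these lines; a rough sketch with placeholder registry details (the config directory name and image are illustrative):

# Write a docker client config that only knows about the ECR credential helper,
# then point docker at it with --config for the push.
mkdir -p docker-config
cat > docker-config/config.json <<'EOF'
{
  "credHelpers": {
    "<account-id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
  }
}
EOF
# docker-credential-ecr-login must be available inside the job
# (mounted via the config.toml volumes mentioned above).
docker --config docker-config push <account-id>.dkr.ecr.<region>.amazonaws.com/our-image:latest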

Puppet Code Manager setup issue with Bitbucket

I have just installed Puppet Enterprise server, successfully added a few nodes, and got some custom modules running as well. I now want to move to Code Manager before we get too deep in it.
I have followed the instructions for creating an empty Bitbucket repo here and initializing it with one single file environment.conf on a production branch as described in that link.
I have then followed the steps here to configure Code Manager but when I get to Test the control repository section to test the connection with puppet-code deploy --dry-run I get the following error:
--dry-run implies --all.
--dry-run implies --wait.
Dry-run deploying all environments.
2021/12/21 20:21:12 ERROR - [POST /deploys][500] Errors while collecting a list of environments to deploy (exit code: 1).
"/opt/puppetlabs/puppet/lib/ruby/gems/2.7.0/gems/rugged-0.27.7/lib/rugged/repository.rb:258: warning: Using the last argument as keyword parameters is deprecated\nERROR\t -\u003e Unable to determine current branches for Git source 'puppet' (/etc/puppetlabs/code-staging/environments)\nOriginal exception:\nFailed to authenticate SSH session: Unable to send userauth-publickey request at /opt/puppetlabs/server/data/code-manager/git/git#git.company.com-1234-in-puppet-control-repo.git\n"
I have added the puppet server's SSH pub key to the bitbucket repo's access tokens.
There are a few things in that error message I'm not fully understanding.
Unable to determine current branches for Git source 'puppet' - What is meant by source 'puppet' - my repo is called puppet-control-repo...?
Failed to authenticate SSH session: Unable to send userauth-publickey request - My Puppet master's SSH keys are in the token list for that repo, so I'm confused here also.
Any guidance would be appreciated.
UPDATE (13-01-2022):
I can successfully clone on puppet server using command
git clone ssh://git@git.example.com:1234/project/puppet-control-repo.git --config core.sshCommand="ssh -i /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa"
Not sure why Puppet is still returning:
Failed to authenticate SSH session: Unable to send userauth-publickey request
I don't know if you saw the instructions here https://puppet.com/docs/pe/2021.4/control_repo.html#managing_environments_with_a_control_repository but you can run
puppet infrastructure configure
which makes sure the files have the right permissions.
I would also test that a clone with the keys works outside of Code Manager's deploy:
git clone --config core.sshCommand="ssh -i /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa" your_git_url
If this works, it may be worth being aware of an issue we experienced on GitHub https://puppet.com/blog/how-githubs-protocol-changes-impact-your-puppet-code-deployments/ which, depending on Bitbucket's approach to the protocol, may be having a similar effect.
We are updating the docs to recommend the use of more secure ed25519 keys, created as per the article.
If a manual clone doesn't work, it suggests Bitbucket doesn't have your public key set up correctly.
Also a more complete debugging command is
runuser -u pe-puppet -- /opt/puppetlabs/puppet/bin/r10k -c /opt/puppetlabs/server/data/code-manager/r10k.yaml deploy environment production --puppetfile --verbose debug2
FOLLOWUP
On investigation we found https://support.puppet.com/hc/en-us/articles/227829007 which showed that ssh:// is required at the start of r10k_remote, giving an example remote of ssh://git@bitbucket.org:davidsandilands/control-repo.git
I have requested updates to https://support.puppet.com/hc/en-us/articles/227829007 to highlight this is not a version confined issue and asked for the puppet code manager configuration docs to be updated to reflect this may be required.
I see that you have a .pub file in the ssh directory. I believe it's expecting a private key there.
Also, do you have the PE Master class set up to point to your repo inside the Puppet Enterprise web UI?
You'll want to set the following parameters on that class.
code_manager_auto_configure = true
r10k_private_key = $PRIVATE_KEY_IN_SSH_FOLDER_ABSOLUTE_PATH
r10k_remote = Your git URL
The PE Master can be found in the PE web UI under Node Groups -> PE Infrastructure -> PE Master.
Thanks to @david-sandilands for helping me resolve this and guiding me to this article via the Puppet community Slack. Top guy!
EDIT 1:
The solution was documented here: https://support.puppet.com/hc/en-us/articles/227829007-Fix-your-Bitbucket-Stash-Code-Manager-configuration-in-Puppet-Enterprise-2015-3-to-2017-2
However, the documentation was out of date, as the issue affected version 2021.4 also.
In short:
r10k_remote = "ssh://git@git.company.com:1234/project/control-repo.git"
Not
r10k_remote = "git@git.company.com:1234/project/control-repo.git"
When working with Bitbucket Server.
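If those settings are managed in Hiera rather than set as class parameters in the console, the equivalent keys would look roughly like this (a sketch assuming the key path mentioned earlier in the thread; host, port, and project path are placeholders):

# Hiera sketch of the Code Manager settings on the PE Master (values are examples)
puppet_enterprise::profile::master::code_manager_auto_configure: true
puppet_enterprise::profile::master::r10k_private_key: '/etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa'
puppet_enterprise::profile::master::r10k_remote: 'ssh://git@git.company.com:1234/project/control-repo.git'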
EDIT 2:
Puppet have since updated their documentation:
https://puppet.com/docs/pe/2021.5/code_mgr_config.html#code_mgr_enable

Jenkins X use secrets in Preview environments

I'm using Jenkins X for microservice build / deployment. In each environment there are shared secrets used across microservices (client keys etc.) which are injected into deployment.yaml as environment variables using valueFrom and secretKeyRef. This works well in Production and Staging, where the namespaces are well known, but since preview generates a new namespace each time, these secrets will not exist there. Is there a way to copy secrets from another, known namespace, or is there a better approach?
You can create another namespace called jx-preview to store preview-specific secrets, and add this line after the jx preview command in your Jenkinsfile:
sh "kubectl get secret {secret_name} --namespace={from_namespace} --export -o yaml | kubectl apply --namespace=jx-$ORG-$PREVIEW_NAMESPACE -f -"
Not sure if this is the best way though
We've got a command to link services from one namespace to another, such as linking services from staging to your preview environment, via jx step link services.
It would be nice to add a similar command to copy secrets from a namespace in the same way. I've raised an issue to track this new feature
Another option is to create your own Job in charts/preview/templates/myjob.yaml, have that Job create whatever Secrets you need however you want, and then annotate it so that it's triggered as a post-install hook of your Preview chart.
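A minimal sketch of that hook approach might look like the following (the Job name, secret name, source namespace jx-staging, image, and service account are all assumptions; the helm.sh/hook annotation is what makes it run after the preview chart installs):

# charts/preview/templates/myjob.yaml (illustrative sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: copy-preview-secrets
  annotations:
    "helm.sh/hook": post-install              # run once the preview chart is installed
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: preview-secret-copier   # assumed SA allowed to read secrets in jx-staging
      restartPolicy: Never
      containers:
        - name: copy-secrets
          image: bitnami/kubectl:latest           # any image providing kubectl
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - /bin/sh
            - -c
            - |
              # Copy a shared secret from a known namespace into the preview namespace,
              # stripping namespace-specific metadata before re-applying it here.
              kubectl get secret my-shared-secret --namespace=jx-staging -o yaml \
                | grep -vE 'namespace:|uid:|resourceVersion:|creationTimestamp:' \
                | kubectl apply --namespace="$POD_NAMESPACE" -f -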

Accessing Elastic Beanstalk environment properties in Docker

So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regards to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering the containers that do support custom environment properties say it explicitly, like Python shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode, then all my environment specific configurations are set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at docker run time using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, inside the EC2 instance, execute the command "docker inspect CONTAINER_ID" and you will see the environment variable.
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
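For a non-Python setup, a hedged sketch of consuming the same .env file from a shell entrypoint (the script itself is an assumption, not part of the original answer):

#!/bin/sh
# entrypoint.sh (illustrative): export everything from the generated .env, then start the app
set -a                      # auto-export all variables assigned from here on
. /var/app/current/.env     # the file written by the setvars command above
set +a
exec "$@"                   # hand off to whatever command the container was started with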
I don't think the docs are a miss as Rohit Banga's answer suggests. Though I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs say, "No DOCKER-SPECIFIC configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python) etc. ... because it could be anything. They don't know beforehand what you want to run, and that makes it difficult to set sensible defaults.
