How to use environment variables in GitLab Omnibus configuration

I have GitLab running on a Kubernetes cluster.
I have a ConfigMap containing all my omnibus configurations.
The ConfigMap gets mounted to the environment variable GITLAB_OMNIBUS_CONFIG.
This exposes sensitive configuration such as passwords in source code.
I'd like to create Secrets instead, mount them as additional environment variables, and have the
Omnibus config read from those environment variables as in the example below.
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "mail.hostedemail.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "username@domain.com"
gitlab_rails['smtp_password'] = $SMTP_PASSWORD
gitlab_rails['smtp_domain'] = "domain.com"
etc...

I ran into the same issue with docker-compose and resolved it with the Ruby ENV variable:
environment:
  SMTP_PASSWORD: ${SMTP_PASSWORD}
  GITLAB_OMNIBUS_CONFIG: |
    # Add any other gitlab.rb configuration here, each on its own line
    gitlab_rails['smtp_password'] = ENV['SMTP_PASSWORD']
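The same ENV approach should carry over to Kubernetes: keep GITLAB_OMNIBUS_CONFIG coming from the ConfigMap, expose the Secret as an extra environment variable on the GitLab container, and reference it with ENV['SMTP_PASSWORD'] inside the Omnibus config. A rough sketch (the Secret, key, and ConfigMap names below are made up for illustration):

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-smtp                     # hypothetical Secret name
type: Opaque
stringData:
  smtp-password: "<your-smtp-password>"
---
# Container spec excerpt from the GitLab Deployment/StatefulSet
env:
  - name: SMTP_PASSWORD                 # extra env var sourced from the Secret
    valueFrom:
      secretKeyRef:
        name: gitlab-smtp
        key: smtp-password
  - name: GITLAB_OMNIBUS_CONFIG         # still sourced from the existing ConfigMap
    valueFrom:
      configMapKeyRef:
        name: gitlab-omnibus-config     # hypothetical ConfigMap name
        key: gitlab.rb

The ConfigMap value can then keep gitlab_rails['smtp_password'] = ENV['SMTP_PASSWORD'], so the password itself only lives in the Secret.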

Related

Setting airflow connections using Values.yaml in Helm (Kubernetes)

Airflow Version - 2.3.0
Helm Chart - Apache-airflow/airflow
I have been working on setting up airflow using helm on kubernetes.
Currently, I am planning to set Airflow connections using the values.yaml file and env variables instead of configuring them in the web UI.
I believe the settings to tweak in order to set the connections are:
extraSecrets: {}
# eg:
# extraSecrets:
#   '{{ .Release.Name }}-airflow-connections':
#     type: 'Opaque'
#     data: |
#       AIRFLOW_CONN_GCP: 'base64_encoded_gcp_conn_string'
#       AIRFLOW_CONN_AWS: 'base64_encoded_aws_conn_string'
#     stringData: |
#       AIRFLOW_CONN_OTHER: 'other_conn'
#   '{{ .Release.Name }}-other-secret-name-suffix':
#     data: |
#       ...
I am not sure how to set all the key-value pairs for a Databricks/EMR connection, or how to use the Kubernetes secrets (already set up as env vars in the pods) to get the values.
#extraSecrets:
#  '{{ .Release.Name }}-airflow-connections':
#    type: 'Opaque'
#    data:
#      AIRFLOW_CONN_DATABRICKS_DEFAULT_two:
#        conn_type: "emr"
#        host: <host_url>
#        extra:
#          token: <token string>
#          host: <host_url>
It would be great to get some insights on how to resolve this issue.
I looked up this link: managing_connection on Airflow.
Tried Changes in values.yaml file:
#extraSecrets:
#  '{{ .Release.Name }}-airflow-connections':
#    type: 'Opaque'
#    data:
#      AIRFLOW_CONN_DATABRICKS_DEFAULT_two:
#        conn_type: "emr"
#        host: <host_url>
#        extra:
#          token: <token string>
#          host: <host_url>
Error Occurred:
While updating helm release:
extraSecrets.{{ .Release.Name }}-airflow-connections expects string, got object
Airflow connections can be set using Kubernetes secrets and env variables.
For setting secrets directly from the CLI, the easiest way is to:
Create a Kubernetes secret. The secret value (connection string) has to be in the URI format suggested by Airflow:
my-conn-type://login:password@host:port/schema?param1=val1&param2=val2
Create an env variable in the Airflow-suggested format. The Airflow format for a connection is AIRFLOW_CONN_{connection_name in all CAPS}.
Set the value of the connection env variable using the secret.
How to manage Airflow connections: here
Example:
To set the default Databricks connection (databricks_default) in Airflow:
Create the secret:
kubectl create secret generic airflow-connection-databricks \
  --from-literal=AIRFLOW_CONN_DATABRICKS_DEFAULT='databricks://@<DATABRICKS_HOST>?token=<DATABRICKS_TOKEN>'
In Helm's values.yaml, add a new env variable using the secret:
envName: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
secretName: "airflow-connection-databricks"
secretKey: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
Some useful links:
Managing Airflow connections
Databricks connection

Can I change the port a GitLab runner uses to pull from my self-hosted GitLab instance?

I have two virtual machines both running docker. On one, I am hosting a GitLab instance according to https://docs.gitlab.com/ee/install/docker.html. On the other, I have a GitLab runner running inside a container https://docs.gitlab.com/runner/install/docker.html.
Due to some port constraints, I am running the GitLab instance on non-standard ports (4443 instead of 443, for instance).
I am able to successfully register the runner, and GitLab can send a job that the runner will pick up. However, when that runner pulls the git repo, it is apparently looking at the wrong port (443) for that git pull.
My GitLab is on port 4443, not port 443. The GitLab runner config has the correct port in the url field and is, again, able to connect and receive jobs:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "runner1"
  url = "https://vm-pr01:4443/"
  token = "egZUzK44hYVrhy6DTfey"
  tls-ca-file = "/etc/gitlab-runner/certs/cert.crt"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:20.10.16"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
Finally, I did try to change the GitLab ssh port according to this question:
Gitlab with non-standard SSH port (on VM with Iptable forwarding)
But that wasn't effective after restarting GitLab.
Are there other angles to try? My last thought was to host my own docker image with the correct ports changed in SSH config but I imagine there's a better way.
Use the clone_url configuration for the runner to change this.
[[runners]]
# ...
clone_url = "https://hostname:port"
Make sure the scheme matches your instance (http or https).
For whatever reason, the default clone URL does not fully respect the setting from url (the scheme and port are assumed), so you must provide both url and clone_url in this scenario.
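For the setup described in the question, the relevant part of config.toml would then look something like this (hostname, port, and token taken from the question; the rest of the section is unchanged):

[[runners]]
  name = "runner1"
  url = "https://vm-pr01:4443/"
  clone_url = "https://vm-pr01:4443"   # force git operations to use the non-standard port
  token = "egZUzK44hYVrhy6DTfey"
  tls-ca-file = "/etc/gitlab-runner/certs/cert.crt"
  executor = "docker"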

GitLab CI script needs entry in /etc/hosts

I have a GitLab CI docker runner to execute my automated tests when I push. One of my tests requires a custom entry in /etc/hosts. I can't figure out how to get the entry into that file.
Here's basically what my .gitlab-ci.yml file looks like:
before_script:
  - cat /etc/hosts # for debugging
  - ... # install app dependencies

specs:
  script:
    - rspec # <- a test in here fails without the /etc/hosts entry
All my tests pass, except for the one that requires that /etc/hosts entry.
Let's say I'm trying to have the hostname myhost.local resolve to the IPv4 address XX.XX.XX.XX...
I tried using extra_hosts in the runner config, but it didn't seem to have any effect (I got the idea from here):
/etc/gitlab-runner/config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "shell"
  url = "https://mygitlabinstance.com/"
  token = "THETOKEN"
  executor = "shell"
  [runners.cache]

[[runners]]
  name = "docker-ruby-2.5"
  url = "https://mygitlabinstance.com/"
  token = "THETOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.5"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    extra_hosts = ["myhost.local:XX.XX.XX.XX"]
  [runners.cache]
But the test still failed. The cat /etc/hosts shows that it's unchanged:
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
# /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 ip-172-31-2-54.ec2.internal ip-172-31-2-54
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
I figured I could just add the entry myself in a before_script line, but I don't seem to be able to execute anything with root privileges in the container:
before_script:
  - echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts
  ...
But that just fails because the gitlab-runner user doesn't have permissions to write to that file. I tried to use sudo, but gitlab-runner can't do that either (echo 'XX.XX.XX.XX myhost.local' | sudo tee --non-interactive --append /etc/hosts --> sudo: a password is required)
So in summary, how can I get my container to have the host entry I need (or how can I execute a before_script command as root)?
The following statement is incorrect:
"But that just fails because the gitlab-runner user doesn't have permissions to write to that file."
The gitlab-runner user is not the user executing your before_script; it is the user that runs the container in which your job is executed.
You are using the ruby:2.5 Docker image as far as I can tell, and that image does not contain a USER instruction in its own Dockerfile or in its parent images' Dockerfiles, so commands in the container run as root by default.
Try adding a whoami command right before your echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts command to verify you are root.
Update
If gitlab-runner is shown as the result of whoami, the docker executor is not being used; instead, a shell executor has picked up the job.
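A quick way to run that check is to add whoami to the job's before_script; a minimal sketch based on the pipeline from the question (the dependency-install step is elided as in the original):

before_script:
  - whoami             # should print "root" when the docker executor runs the ruby:2.5 image
  - cat /etc/hosts     # for debugging
  - echo 'XX.XX.XX.XX myhost.local' >> /etc/hosts
  - ...                # install app dependencies

specs:
  script:
    - rspec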
On your GitLab CI runner, you can add a setting to config.toml to achieve this without touching /etc/hosts.
[runners.docker]
# ... other settings ...
extra_hosts = ["myhost.local:xx.xx.xx.xx"]
You can read more about the extra_hosts configuration here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdocker-section

Docker: Change word in file at container startup

I'm creating a docker image for our fluentd.
The image contains a file called http_forward.conf.
It contains:
<store>
  type http
  endpoint_url ENDPOINTPLACEHOLDER
  http_method post # default: post
  serializer json # default: form
  rate_limit_msec 100 # default: 0 = no rate limiting
  raise_on_error true # default: true
  authentication none # default: none
  username xxx # default: ''
  password xxx # default: '', secret: true
</store>
So this is baked into our image, but we want to use the same image for all our environments, specified with environment variables.
So we create an environment variable for our environment:
ISSUE_SERVICE_URL = http://xxx.dev.xxx.xx/api/fluentdIssue
This env variable contains dev on our dev environment, uat on UAT, etc.
Then we want to replace our ENDPOINTPLACEHOLDER with the value of our env variable. In bash we can use:
sed -i -- 's/ENDPOINTPLACEHOLDER/'"$ISSUE_SERVICE_URL"'/g' .../http_forward.conf
But how/when do we have to execute this command if we want to use this in our docker container? (we don't want to mount this file)
We did that via Ansible coding.
Put the file http_forward.conf in place as a template, render the change depending on the environment, then mount the folder (including the conf file) into the Docker container.
ISSUE_SERVICE_URL = http://xxx.{{ environment }}.xxx.xx/api/fluentdIssue
The playbook will be something like this; I haven't tested it.
- template: src=http_forward.conf.j2 dest=/config/http_forward.conf mode=0644

- docker:
    name: "fluentd"
    image: "xxx/fluentd"
    restart_policy: always
    volumes:
      - /config:/etc/fluent
In your Dockerfile you should have a line starting with CMD somewhere; you can add it there.
Or you can do it more cleanly: set the CMD line to call a script instead, for example CMD ./startup.sh. The file startup.sh will then contain your sed command followed by the command to start your fluentd (I assume that is currently the CMD).
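A minimal sketch of that approach, assuming the config lives at /fluentd/etc/http_forward.conf and that the image currently starts fluentd directly (adjust the path and the final command to match your image):

#!/bin/sh
# startup.sh
set -e
# Replace the placeholder with the environment-specific URL at container start.
# Use '|' as the sed delimiter because $ISSUE_SERVICE_URL contains slashes.
sed -i "s|ENDPOINTPLACEHOLDER|${ISSUE_SERVICE_URL}|g" /fluentd/etc/http_forward.conf
# Hand off to the original process (assumed start command).
exec fluentd -c /fluentd/etc/fluent.conf

and in the Dockerfile (excerpt):

COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
CMD ["/startup.sh"]

Note the '|' delimiter (or escaping) is needed for sed here, since the slashes in the URL would otherwise terminate the substitution expression.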

Vagrant docker provider: create and start vs run

I'm new to Vagrant, using 1.7.4 with VirtualBox 5.0.10 on Windows 7, and trying to figure out how to get it to set up and run docker containers the way I'd like, which is like so:
Start my docker host, which is already provisioned with the latest docker tools and boots with the cadvisor container started - I get this box from the publicly available williamyeh/ubuntu-trusty64-docker
If (for example) the mongo container I'd like to use has not been created on the docker host, just create it (don't start it)
Else, if the container already exists, start it (don't try to create it)
With my current setup, using the docker provider, after the first use of vagrant up, using vagrant halt followed by vagrant up will produce this error:
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
default: Docker host VM is already ready.
==> default: Warning: When using a remote Docker host, forwarded ports will NOT be
==> default: immediately available on your machine. They will still be forwarded on
==> default: the remote machine, however, so if you have a way to access the remote
==> default: machine, then you should be able to access those ports there. This is
==> default: not an error, it is only an informational message.
==> default: Creating the container...
default: Name: mongo-container
default: Image: mongo
default: Port: 27017:27017
A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.
Command: "docker" "run" "--name" "mongo-container" "-d" "-p" "27017:27017" "-d" "mongo"
Stderr: Error response from daemon: Conflict. The name "mongo-container" is already in use by container 7a436a4a3422. You have to remove (or rename) that container to be able to reuse that name.
Here is the Vagrantfile I'm using for the docker host:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.hostname = "docker-host"
  config.vm.box_check_update = false
  config.ssh.insert_key = false
  config.vm.box = "williamyeh/ubuntu-trusty64-docker"
  config.vm.network "forwarded_port", guest: 27017, host: 27017
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
...and here is the docker provider Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">= 1.6.0"
VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder ".", "/vagrant", disabled: true

  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "../docker-host/Vagrantfile"
    docker.image = "mongo"
    docker.ports = ['27017:27017']
    docker.name = 'mongo-container'
  end
end
Well, I'm not sure what had gotten munged in my environment, but while reconfiguring my setup, I deleted and restored my base docker host image, and from that point on, vagrant up, followed by vagrant halt, followed by vagrant up on the docker provider worked exactly like I was expecting it to.
At any rate, I guess the behavior I was asking about is already supported by Vagrant.
