How to create proper .lando.yml custom file? - docker

Is there any way to create a proper, really custom .lando.yml file so it will not use any recipe? How do I specify "just give me Apache, MariaDB, PHP" in Lando?
I tried this:
# The name of the app
name: mariadb
# Give me http://mariadb.lndo.site and https://mariadb.lndo.site
proxy:
  html:
    - mariadb.lndo.site
# Set up my services
services:
  # Set up a basic webserver running the latest nginx with ssl turned on
  html:
    type: nginx
    ssl: true
    webroot: www
  # Spin up a mariadb container called "database"
  # NOTE: "database" is arbitrary, you could just as well call this "db" or "kanye"
  database:
    # Use mariadb version 10.1
    type: mariadb:10.1
    # Optionally allow access to the database at localhost:3307
    # You will need to make sure port 3307 is open on your machine
    #
    # You can also set `portforward: true` to have Lando dynamically assign
    # a port. Unlike specifying an actual port, setting this to true will give you
    # a different port every time you restart your app
    portforward: 3307
    # Optionally set the default db credentials
    #
    # Note: You will need to `lando destroy && lando start` to change these if you've
    # already started your app
    # See: https://docs.devwithlando.io/tutorials/lando-info.html
    creds:
      user: mariadb
      password: mariadb
      database: mariadb
    # Optionally load in all the mariadb config files in the config directory
    # This is relative to the app root
    # NOTE: these files need to end in .cnf
    config:
      confd: config
but after lando start I get an ERROR: No such service: appserver error, and the documentation for this is extremely confusing.
Thanks.

You'll want to look at the Building a Custom Stack section of the lando custom project page.
I won't do your entire project, but the basics are as follows:
# LAMP stack example
name: lamp
proxy:
  appserver:
    - lamp.lndo.site # Allows you to access the site at http[s]://lamp.lndo.site
                     # This may actually get done automatically
services: # Define your services
  appserver: # Create a web server container
    type: php:5.3 # Specify what version of php to use
    via: apache # This could be nginx, should you choose so
    webroot: www # Specify webroot
    config: # If you want to add/edit
      server: config/apache/lamp.conf # Use an alternate apache config file
      conf: path/from/app/root/php.ini # Alter php configuration with a custom file
  database: # Create a database server container
    type: mysql
    portforward: 3308
    creds: # Specify what creds/db to use
      user: lamp
      password: lamp
      database: lamp
tooling: # These tooling entries allow you to connect `lando <command>` to the appropriate containers
  composer: # Call with "lando composer..."
    service: appserver
    description: Run composer commands
    cmd: composer --ansi
  php: # Call with "lando php..."
    service: appserver
  mysql: # Call with "lando mysql..."
    user: root
    service: database
    description: Drop into a MySQL shell
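If Lando still complains about a missing appserver service after switching to a layout like this, it can help to rebuild so the containers are recreated from the new config. A minimal sketch of the commands (standard Lando CLI, using the service names from the example above):
lando rebuild -y   # recreate containers after editing .lando.yml
lando start        # start the app
lando info         # show hostnames, ports and creds for each service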

Related

Setting airflow connections using Values.yaml in Helm (Kubernetes)

Airflow Version - 2.3.0
Helm Chart - Apache-airflow/airflow
I have been working on setting up airflow using helm on kubernetes.
Currently, I am planning to set the Airflow connections using the values.yaml file and env variables instead of configuring them in the web UI.
I believe the settings to tweak in order to set the connections are:
extraSecrets: {}
# eg:
# extraSecrets:
#   '{{ .Release.Name }}-airflow-connections':
#     type: 'Opaque'
#     data: |
#       AIRFLOW_CONN_GCP: 'base64_encoded_gcp_conn_string'
#       AIRFLOW_CONN_AWS: 'base64_encoded_aws_conn_string'
#     stringData: |
#       AIRFLOW_CONN_OTHER: 'other_conn'
#   '{{ .Release.Name }}-other-secret-name-suffix':
#     data: |
#       ...
I am not sure how to set all the key-value pairs for a Databricks/EMR connection, or how to use the Kubernetes secrets (already set up as env vars in the pods) to get the values.
It would be great to get some insights on how to resolve this issue.
I looked up this link: managing_connection on airflow.
Changes I tried in the values.yaml file:
#extraSecrets:
#  '{{ .Release.Name }}-airflow-connections':
#    type: 'Opaque'
#    data:
#      AIRFLOW_CONN_DATABRICKS_DEFAULT_two:
#        conn_type: "emr"
#        host: <host_url>
#        extra:
#          token: <token string>
#          host: <host_url>
Error occurred while updating the helm release:
extraSecrets.{{ .Release.Name }}-airflow-connections expects string, got object
Airflow connections can be set using Kubernetes secrets and env variables.
For setting secrets directly from the CLI, the easiest way is to:
1. Create a Kubernetes secret. The secret value (connection string) has to be in the URI format suggested by Airflow:
my-conn-type://login:password@host:port/schema?param1=val1&param2=val2
2. Create an env variable in the Airflow-suggested format. The Airflow format for a connection is AIRFLOW_CONN_{connection_name in all CAPS}.
3. Set the value of the connection env variable using the secret.
How to manage Airflow connections: here
For example, to set the default Databricks connection (databricks_default) in Airflow:
Create the secret:
kubectl create secret generic airflow-connection-databricks \
  --from-literal=AIRFLOW_CONN_DATABRICKS_DEFAULT='databricks://@<DATABRICKS_HOST>?token=<DATABRICKS_TOKEN>'
In Helm's values.yaml, add a new env variable using the secret:
- envName: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
  secretName: "airflow-connection-databricks"
  secretKey: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
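In the official apache-airflow chart these three keys normally sit as an entry in the top-level secret: list of values.yaml. A minimal sketch of the placement, assuming that chart layout and the secret created above:
# values.yaml -- expose the Kubernetes secret as an env var in all Airflow containers
secret:
  - envName: "AIRFLOW_CONN_DATABRICKS_DEFAULT"
    secretName: "airflow-connection-databricks"
    secretKey: "AIRFLOW_CONN_DATABRICKS_DEFAULT"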
Some useful links:
Managing Airflow connections
Databricks connection
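To double-check that the connection is actually picked up from the environment, one option is to resolve it from inside a running Airflow pod (the pod name below is a placeholder; airflow connections get is available in Airflow 2.x):
# Resolve the connection the same way tasks do (environment variables included)
kubectl exec -it <airflow-scheduler-pod> -- airflow connections get databricks_default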

Apache Nifi (on docker): only one of the HTTP and HTTPS connectors can be configured at one time error

I have a problem adding authentication, due to new requirements, while running Apache NiFi (NiFi) without SSL in a container.
The image version is apache/nifi:1.13.0
SSL is said to be unconditionally required to add authentication, and the recommended way to add SSL is to use the tls-toolkit shipped in the NiFi image. I worked through the following process:
Left out the environment variable for nifi.web.http.port (HTTP communication) and brought up the standalone-mode container with nifi.web.https.port=9443:
docker-compose up
Entered the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
Organized the files in the directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in the folder localhost, named after the value passed to the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over as-is; only the following properties were carried into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with the error log below:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log
Hint
The dead container's volume was still accessible, so I copied and checked nifi.properties; whenever I did docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with http.host and http.port left empty. The docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you

Why does my metricbeat extension ignore my ActiveMQ broker host configuration in Kibana docker?

I'm trying to set up a local Kibana instance with ActiveMQ for testing purposes. I've created a docker network called elastic-network. I have 3 containers in my network: elasticsearch, kibana and finally activemq. In my kibana container, I downloaded Metricbeat using the following shell command:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
In the configuration file metricbeat.reference.yml, I've changed the host for my ActiveMQ instance running under the container activemq
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
When I run Metricbeat using the verbose parameter (./metricbeat -e), I get an error saying that the ActiveMQ API is unreachable. My problem is that Metricbeat ignores my ActiveMQ broker configuration and tries to connect to localhost.
Is there a reason why my configuration could be ignored?
After looking through the documentation, I saw that on Linux, unlike the other OSes, you also have to change the configuration in the modules directory, modules.d/activemq.yml, not just metricbeat.reference.yml:
# Module: activemq
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.11/metricbeat-module-activemq.html
- module: activemq
  metricsets: ['broker', 'queue', 'topic']
  period: 10s
  hosts: ['activemq:8161']
  path: '/api/jolokia/?ignoreErrors=true&canonicalNaming=false'
  username: admin # default username
  password: admin # default password
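If the module is not yet enabled, the stock Metricbeat CLI can enable it and re-run in the foreground to confirm the hosts setting is picked up (run from the unpacked Metricbeat directory):
./metricbeat modules enable activemq   # activates modules.d/activemq.yml
./metricbeat modules list              # confirm activemq shows up under Enabled
./metricbeat -e                        # run in the foreground with verbose logs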

Docker: Change word in file at container startup

I'm creating a docker image for our fluentd.
The image contains a file called http_forward.conf
It contains:
<store>
  type http
  endpoint_url ENDPOINTPLACEHOLDER
  http_method post # default: post
  serializer json # default: form
  rate_limit_msec 100 # default: 0 = no rate limiting
  raise_on_error true # default: true
  authentication none # default: none
  username xxx # default: ''
  password xxx # default: '', secret: true
</store>
So this is baked into our image, but we want to use the same image for all our environments, specified with environment variables.
So we create an environment variable for our environment:
ISSUE_SERVICE_URL = http://xxx.dev.xxx.xx/api/fluentdIssue
This env variable contains dev on our dev environment, uat on UAT, etc.
Then we want to replace our ENDPOINTPLACEHOLDER with the value of our env variable. In bash we can use:
sed -i -- 's|ENDPOINTPLACEHOLDER|'"$ISSUE_SERVICE_URL"'|g' .../http_forward.conf
But how/when do we have to execute this command if we want to use this in our docker container? (we don't want to mount this file)
We did that via Ansible.
Put the file http_forward.conf in as a template, deploy the change depending on the environment, then mount the folder (including the conf file) into the docker container.
ISSUE_SERVICE_URL = http://xxx.{{ environment }}.xxx.xx/api/fluentdIssue
The playbook will be something like this (I haven't tested it):
- template: src=http_forward.conf.j2 dest=/config/http_forward.conf mode=0644
- docker:
    name: "fluentd"
    image: "xxx/fluentd"
    restart_policy: always
    volumes:
      - /config:/etc/fluent
In your Dockerfile you should have a line starting with CMD somewhere; you should add it there.
Or you can do it more cleanly: set the CMD line to call a script instead, for example CMD ./startup.sh. The file startup.sh will then contain your sed command followed by the command to start your fluentd (I assume that is currently the CMD).
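A minimal sketch of that startup.sh, assuming the image is based on the official fluentd image and the config file sits at /fluentd/etc/http_forward.conf (adjust the paths and the final fluentd command to match your image):
#!/bin/sh
# startup.sh -- substitute the endpoint at container start, then launch fluentd.
# ISSUE_SERVICE_URL is expected to be passed in via docker run -e / compose.
set -e
sed -i "s|ENDPOINTPLACEHOLDER|${ISSUE_SERVICE_URL}|g" /fluentd/etc/http_forward.conf
# Hand over to fluentd as PID 1
exec fluentd -c /fluentd/etc/fluent.conf
In the Dockerfile this is wired up with COPY startup.sh . and CMD ["./startup.sh"].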

GitLab 7-0 stable not able to push or clone

Hi, I tried various links and configuration settings to resolve my issue, but I'm not able to do so.
If you still feel the question is a duplicate or not useful, please add a comment and I will delete the question without bothering anyone.
I set up GitLab 7-0 stable on a fresh Ubuntu 12.04 64-bit machine in a local network.
My server domain name is 192.168.1.1 (some static IP within my LAN).
After this I am able to log in as both admin and user, and I can create projects, groups, and do all the basic work through the web UI.
But I'm not able to clone or push code to the server using SSH from another system in my LAN.
The result of sudo ./bin/check is:
Check GitLab API access: OK
Check directories and files:
/home/git/repositories/: OK
/home/git/.ssh/authorized_keys: OK
Test redis-cli executable: redis-cli 2.2.12
Send ping to redis server: PONG
and all my system statuses are GREEN.
I suspect it might be a problem with the gitlab.yml, nginx, or unicorn configuration.
Can anyone please help me?
Update:
My config/gitlab.yml:
production: &base
  ## GitLab settings
  gitlab:
    ## Web server settings (note: host is the FQDN, do not include http://)
    host: 192.168.1.37
    port: 80
    https: false
    email_from: example@example.com
    default_projects_limit: 10
    ## Default project features settings
    default_projects_features:
      issues: true
      merge_requests: true
      wiki: true
      snippets: false
      visibility_level: "private" # can be "private" | "internal" | "public"
  issues_tracker:
  ## Gravatar
  gravatar:
    enabled: true # Use user avatar image from Gravatar.com (default: true)
  ldap:
    enabled: false
    host: '_your_ldap_server'
    port: 636
    uid: 'sAMAccountName'
    method: 'ssl' # "tls" or "ssl" or "plain"
    bind_dn: '_the_full_dn_of_the_user_you_will_bind_with'
    password: '_the_password_of_the_bind_user'
    allow_username_or_email_login: true
  satellites:
    # Relative paths are relative to Rails.root (default: tmp/repo_satellites/)
    path: /home/git/gitlab-satellites/
  ## Backup settings
  backup:
    path: "tmp/backups" # Relative paths are relative to Rails.root (default: tmp/backups/)
    # keep_time: 604800 # default: 0 (forever) (in seconds)
  ## GitLab Shell settings
  gitlab_shell:
    path: /home/git/gitlab-shell/
    # REPOS_PATH MUST NOT BE A SYMLINK!!!
    repos_path: /home/git/repositories/
    hooks_path: /home/git/gitlab-shell/hooks/
    # Git over HTTP
    upload_pack: true
    receive_pack: true
    ssh_port: 22
  git:
    bin_path: /usr/local/bin/git
    max_size: 5242880 # 5.megabytes
    # Git timeout to read a commit, in seconds
    timeout: 10
  extra:
development:
  <<: *base
test:
  <<: *base
  gravatar:
    enabled: true
  gitlab:
    host: localhost
And /home/git/gitlab-shell/config.yml
user: git
gitlab_url: http://192.168.1.37
http_settings:
  self_signed_cert: false
repos_path: /home/git/repositories/
auth_file: /home/git/.ssh/authorized_keys
redis:
  bin: /usr/bin/redis-cli
  host: localhost
  port: 6379
  namespace: resque:gitlab
log_level: INFO
audit_usernames: false
I had a similar problem.
When following the installation instructions, you create a git user with the nologin option. But on my Linux box (openSUSE) this blocks pull and push via SSH.
I edited the file /etc/passwd, looked for the line starting with git, and replaced the end of the line with /bin/bash (it was /bin/false before).
No need to restart anything. You should be able to pull and push.
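A minimal sketch of the same change using usermod instead of editing /etc/passwd by hand (this assumes the GitLab system user is named git, as in the setup above):
grep '^git:' /etc/passwd             # check the current login shell of the git user
sudo usermod --shell /bin/bash git   # give git a real shell so SSH pull/push can run gitlab-shell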
