I started Airflow with Postgres as the backend database. The webserver starts successfully, but creating a user with the create_user command fails. The full CLI is as follows:
airflow create_user -r Admin -u admin -e admin@acme.com -f admin -l user -p Password
This command throws an error, and it looks like Airflow is still looking for SQLite. Full logs follow:
ravi@ravi:~/Desktop/test$ docker exec -it airflow_airflow_webserver_1 sh
$ airflow create_user -r Admin -u admin -e admin@acme.com -f admin -l user -p Pass#123
/home/airflow/.local/lib/python3.6/site-packages/flask_sqlalchemy/__init__.py:813: UserWarning: Neither SQLALCHEMY_DATABASE_URI nor SQLALCHEMY_BINDS is set. Defaulting SQLALCHEMY_DATABASE_URI to "sqlite:///:memory:".
'Neither SQLALCHEMY_DATABASE_URI nor SQLALCHEMY_BINDS is set. '
[2020-12-08 11:19:30,734] {manager.py:96} ERROR - DB Creation and initialization failed: Invalid argument(s) 'pool_size','max_overflow' sent to create_engine(), using configuration SQLiteDialect_pysqlite/StaticPool/Engine. Please check that the keyword arguments are appropriate for this combination of components.
The bug is fixed by setting the following in airflow.cfg:
[webserver]
rbac = True
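If editing airflow.cfg is inconvenient (e.g. in a containerized deployment), the same option can be supplied through Airflow's `AIRFLOW__<SECTION>__<KEY>` environment-variable convention; a minimal sketch (verify the variable name against your Airflow version):

```shell
# Equivalent to setting rbac = True under [webserver] in airflow.cfg
export AIRFLOW__WEBSERVER__RBAC=True
echo "$AIRFLOW__WEBSERVER__RBAC"
```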
Related
I'm running a command in a network namespace using nsenter, and I wish to run it as an ordinary (non-root) user because I want to access an Android SDK installation, which exists in my own home directory.
I find that although I can specify which user I want in my nsenter command, my environment variables don't get set accordingly, and I don't see a way to set those variables. What can I do?
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami
# => bash: /root/.bashrc: Permission denied
# => myuser
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c 'echo $HOME'
# => /root
Observe that:
When I attempt a login shell (with -l), bash attempts to source /root/.bashrc instead of /home/myuser/.bashrc
$HOME is /root
If I prepend my command with a variable assignment (HOME=/home/markham sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami), I get the same results.
(I'm on version nsenter from util-linux 2.34.)
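For what it's worth, a `VAR=value` prefix on the command line applies to `sudo` itself, and `sudo` resets the environment by default, so the assignment never reaches the shell inside the namespace. Placing `env(1)` after the privilege boundary does override the child's environment; a minimal sketch of just that mechanism, without sudo/nsenter:

```shell
# env(1) overrides HOME for the child process only;
# the same trick could be inserted between nsenter and bash
env HOME=/home/myuser bash -c 'echo "$HOME"'
# prints /home/myuser
```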
I have created a Docker image using the amazonlinux:2 base image in my Dockerfile. This container will run as a Jenkins build agent on a Linux server and has to make certain AWS API calls. In my Dockerfile, I copy a shell script called assume-role.sh.
Code snippet:
COPY ./assume-role.sh .
RUN ["chmod", "+x", "assume-role.sh"]
ENTRYPOINT ["/assume-role.sh"]
CMD ["bash", "--"]
Shell script definition:
#!/usr/bin/env bash
#echo Your container args are: "${1} ${2} ${3} ${4} ${5}"
echo Your container args are: "${1}"
ROLE_ARN="${1}"
AWS_DEFAULT_REGION="${2:-us-east-1}"
SESSIONID=$(date +"%s")
DURATIONSECONDS="${3:-3600}"
#Temporary loggings starts here
id
pwd
ls .aws
cat .aws/credentials
#Temporary loggings ends here
# AWS STS AssumeRole
RESULT=($(aws sts assume-role --role-arn "$ROLE_ARN" \
    --role-session-name "$SESSIONID" \
    --duration-seconds "$DURATIONSECONDS" \
    --query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
    --output text))
# Setting up temporary creds
export AWS_ACCESS_KEY_ID=${RESULT[0]}
export AWS_SECRET_ACCESS_KEY=${RESULT[1]}
export AWS_SECURITY_TOKEN=${RESULT[2]}
export AWS_SESSION_TOKEN=${AWS_SECURITY_TOKEN}
echo 'AWS STS AssumeRole completed successfully'
# Making test AWS API calls
aws s3 ls
echo 'test calls completed'
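The RESULT=( ... ) line above depends on the shell word-splitting the whitespace-separated --output text response into a three-element array. A minimal sketch of that mechanism with a mocked STS response (the values are placeholders):

```shell
#!/usr/bin/env bash
# Mock of `aws sts assume-role ... --output text`, which prints the three
# requested fields separated by whitespace
mock_sts() { printf 'AKIAMOCKKEY mockSecretKey mockSessionToken\n'; }

# The unquoted command substitution word-splits the output into array elements
RESULT=($(mock_sts))
echo "${RESULT[0]}"  # AKIAMOCKKEY
echo "${RESULT[2]}"  # mockSessionToken
```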
I'm running the Docker container like this:
docker run -d -v $PWD/.aws:/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins
What I'm trying to do here is mount the .aws credentials from the host directory into the container at the root level. The volume mount succeeds, and I can see the log output from these lines in the shell script:
ls .aws
cat .aws/credentials
It tells me there is a .aws folder with credentials inside it at the root level (/). However, the AWS CLI is somehow not picking the credentials up, and as a result the subsequent API calls such as AWS STS AssumeRole fail.
Can somebody please advise?
[Output of docker run]
Your container args are: arn:aws:iam::829327394277:role/myjenkins
uid=0(root) gid=0(root) groups=0(root)
/
config
credentials
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXXXP
aws_secret_access_key = e8SYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYxYm
Unable to locate credentials. You can configure credentials by running "aws configure".
AWS STS AssumeRole completed successfully
Unable to locate credentials. You can configure credentials by running "aws configure".
test calls completed
I found the issue finally.
The path was wrong while mounting the .aws volume to the container: the container runs as root (see the id output above), and the AWS CLI looks for credentials under $HOME/.aws, i.e. /root/.aws.
Instead of this -v $PWD/.aws:/.aws:ro, it was supposed to be -v $PWD/.aws:/root/.aws:ro
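Putting it together, the corrected invocation mounts the credentials where the root user's AWS CLI actually looks for them ($HOME/.aws, which is /root/.aws for root):

```shell
docker run -d \
  -v "$PWD/.aws:/root/.aws:ro" \
  -e XDG_CACHE_HOME=/tmp/go/.cache \
  test-image arn:aws:iam::829327394277:role/myjenkins
```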
I was using the Docker image ibmcom/mq.
My Dockerfile was:
FROM ibmcom/mq
USER root
# create another client user
# default is app without password
RUN useradd user1 -G mqclient && \
echo user1:passwd | chpasswd
Then it suddenly stopped working when I rebuilt against the latest image.
The error is:
useradd: group 'mqclient' does not exist
ERROR: Service 'mq' failed to build: The command '/bin/sh -c useradd user1 -G mqclient && echo user1:passwd | chpasswd' returned a non-zero code: 6
The build now fails with the latest image version (9.1.5.0-r1) but works with older versions, e.g. 9.1.4.0-r1.
Can anyone suggest an alternative?
From 9.1.5, the container does not use OS-based users or groups; this is to conform to cloud best practices. Instead, a file-based system is used, so that when you roll the container out into production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/
For development, if you are not going to create new users, you can use the 9.1.5 container. If you do want to create new users, either use 9.1.4 or earlier, or use htpasswd with bcrypt to create them.
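A hypothetical Dockerfile sketch of the htpasswd approach (assumptions: htpasswd is available inside the image, and the file is /etc/mqm/mq.htpasswd, based on the /etc/mqm/ location mentioned above; verify both against your image):

```dockerfile
FROM ibmcom/mq:9.1.5.0-r1
USER root
# Assumption: htpasswd exists in the image and the container reads
# /etc/mqm/mq.htpasswd; -B selects bcrypt, -b takes the password as an argument,
# and the entry is appended to the existing file
RUN htpasswd -b -B /etc/mqm/mq.htpasswd user1 passwd
USER 1001
```

If the htpasswd tool is not present in the image, generate the entry on the build host instead and COPY the file in.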
Hello everyone. I have a problem: I can't run an Ansible playbook from one AWS instance (the Ansible system) against another AWS instance (the Docker system).
It shows me this error:
fatal: [x.x.x.x]: FAILED! => {"msg": "Missing sudo password"}
Can anyone help me, please? I would be grateful.
From: Missing sudo password in Ansible
You should give ansible-playbook the flag to prompt for privilege escalation password.
ansible-playbook --ask-become-pass
Alternatively, add the user to the sudoers file (via visudo) on the host server, something like this:
{username} ALL=(ALL) NOPASSWD: ALL
Actually, I didn't understand your scenario very well. Do you want to connect to a Docker container from your playbook?
If that is the case, you can add the SSH public key id_rsa.pub (generate this file with the ssh-keygen command inside the instance from which you want to connect to Docker) to the authorized_keys file inside the Docker container. When the SSH keys are in place, you don't need a sudo password.
You can do this either in the Dockerfile or using ssh-copy-id.
If you are not using SSH keys and are getting this error while running a task with 'become: true' or 'become: sudo', then add the following line to /etc/sudoers:
<username> ALL=NOPASSWD: ALL
I am running Alpine Linux like this:
$ docker run --rm -it alpine sh
Then running the following commands:
/ # apk add shadow
/ # /usr/sbin/useradd -m -u 1000 jenkins
Creating mailbox file: No such file or directory
/ # echo "jenkins:mypassword" | chpasswd
Password: chpasswd: PAM: Authentication failure
According to this, the warning Creating mailbox file: No such file or directory can be safely ignored.
My problem is that chpasswd fails with the vague error message seen in the last line. I tried the exact same commands on CentOS and Ubuntu and they worked there.
This turned out to be a bug in Alpine 3.6+. A new pull request is supposed to have fixed this as mentioned here: https://bugs.alpinelinux.org/issues/10209
Are you sure the root account is enabled?
This might be a consequence of this change: https://github.com/alpinelinux/aports/commit/72c7a7a3caf28c06289dc5f65e1756b38cfb00ca