Hello everyone. I have a problem: I can't run an Ansible playbook from one AWS instance (the Ansible system) against another AWS instance (the Docker system).
It shows me this error:
fatal: [x.x.x.x]: FAILED! => {"msg": "Missing sudo password"}
Can anyone help me, please? I would be grateful.
From: Missing sudo password in Ansible
You should give ansible-playbook the flag that prompts for the privilege escalation password:
ansible-playbook --ask-become-pass
Alternatively, add the user to the sudoers file on the target host (via visudo), something like this:
{username} ALL=(ALL) NOPASSWD: ALL
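For context, a minimal sketch of how the two options fit together (the inventory and playbook names here are placeholders, not from the original post):
# The playbook uses privilege escalation (become: true), so Ansible needs the sudo password.
# Either prompt for it at run time:
ansible-playbook -i inventory.ini site.yml --ask-become-pass
# or, after adding the NOPASSWD rule for the connecting user on the target host,
# the plain invocation works without a prompt:
ansible-playbook -i inventory.ini site.yml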
Actually, I didn't understand your scenario very well. Do you want to connect to a Docker container from your playbook?
If that is the case, you can add the SSH public key 'id_rsa.pub' (generate it with ssh-keygen on the instance from which you want to connect) to the authorized_keys file inside the Docker container. Once SSH keys are in place, you don't need a sudo password.
You can do this either in the Dockerfile or by using ssh-copy-id (see the sketch after this answer).
If you are not using SSH and get this error while running a task with 'become: true' or 'become: sudo', then add the following line to /etc/sudoers:
<username> ALL=NOPASSWD: ALL
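A rough sketch of the key-based approach; the user, host, and container paths are placeholders rather than anything from the original post:
# On the controlling instance: generate a key pair if one doesn't exist yet
ssh-keygen -t rsa -b 4096
# Copy the public key to the target host or container running sshd (adjust user/host/port as needed)
ssh-copy-id user@target-host
# Or bake the key into the image at build time, roughly:
#   COPY id_rsa.pub /root/.ssh/authorized_keys
#   RUN chmod 600 /root/.ssh/authorized_keys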
Related
I use Jenkins for CI/CD. After cloning the repository, I want to copy some files from the cloned repository to a remote server using sshpass (scp):
sh """sshpass -p '$KEY'-o StrictHostKeyChecking=no scp *.json $UNAME@$PROD_IP:/home/test"""
But I get this error in the output:
sshpass: Failed to run command: No such file or directory
What am I doing wrong?
After a long search I found the answer. You need to switch to the jenkins user, create a key pair on its behalf, and add the public key to the remote server you need access to. Then add the private key to the Jenkins credentials and use that instead of sshpass.
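A minimal sketch of that setup; the Jenkins home path, credential ID, and remote user are just examples, not from the original post:
# On the Jenkins node, generate a key pair as the jenkins user
sudo -u jenkins ssh-keygen -t rsa -b 4096 -f /var/lib/jenkins/.ssh/id_rsa
# Install the public key on the remote server
sudo -u jenkins ssh-copy-id user@PROD_IP
# After adding the private key as an SSH credential in Jenkins, the pipeline can drop sshpass,
# e.g. with the SSH Agent plugin:
#   sshagent(credentials: ['prod-ssh-key']) {
#     sh 'scp -o StrictHostKeyChecking=no *.json user@$PROD_IP:/home/test'
#   }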
I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the SSH access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary but since it's tagged gitlab I will assume that Linux is your platform.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK (or echo $SSH_AGENT_PID). If both echo an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify what key are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
More info here: https://www.ssh.com/academy/ssh/agent
Cheers
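Putting it together, the sequence on the build machine would look roughly like this (the key path is just an example):
# Start an agent for the current shell and load the key that GitLab knows about
eval `ssh-agent`
ssh-add ~/.ssh/id_rsa
ssh-add -l   # confirm the key is loaded
# Re-run the build; --ssh default forwards the agent socket into RUN --mount=type=ssh steps
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .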
For testing, use a non-encrypted private SSH key (meaning you don't have to manage an ssh-agent, which is only needed for caching an encrypted private key's passphrase).
And use ssh -Tv git@gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will see again where Docker/SSH is looking when executing git clone with an SSH URL.
I suggested as much here, and there were some mounting folders missing then.
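As a rough illustration, the debug line would sit just before the clone in the Dockerfile from the question (everything else unchanged):
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
# Make git's ssh invocation verbose so the build log shows which keys and sockets it tries
ENV GIT_SSH_COMMAND='ssh -Tv'
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*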
In my Dockerfile, I'm trying to pull a Python lib from a private repo:
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
Then I tried to run the build using the following command:
docker buildx build --ssh /path/to/the/private/key/id_rsa .
For some reason, it gave me the following error:
#0 0.831 Host key verification failed.
#0 0.831 fatal: Could not read from remote repository.
I've double checked the private key is correct. Did I miss any step to use --mount=type=ssh?
The error has nothing to do with your private key; it is "host key verification failed". That means ssh doesn't recognize the key being presented by the remote host. Its default behavior is to ask whether it should trust the host key, and when run in an environment where it can't prompt interactively, it will simply reject the key.
You have a few options to deal with this. In the following examples, I'll be cloning a GitHub private repository (so I'm interacting with github.com), but the process is the same for any other host to which you're connecting with ssh.
Inject a global known_hosts file when you build the image.
First, get the hostkey for the hosts to which you'll be connecting
and save it alongside your Dockerfile:
$ ssh-keyscan github.com > known_hosts
Configure your Dockerfile to install this where ssh will find
it:
COPY known_hosts /etc/ssh/ssh_known_hosts
RUN chmod 600 /etc/ssh/ssh_known_hosts; \
chown root:root /etc/ssh/ssh_known_hosts
Configure ssh to trust unknown host keys:
RUN sed -i '/^StrictHostKeyChecking/d' /etc/ssh/ssh_config; \
echo StrictHostKeyChecking no >> /etc/ssh/ssh_config
Run ssh-keyscan in your Dockerfile when building the image:
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
All three of these solutions will ensure that ssh trusts the remote host key. The first option is the most secure (the known hosts file will only be updated by you explicitly when you run ssh-keyscan locally). The last option is probably the most convenient.
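For example, with the third option the Dockerfile from the question would gain one line ahead of the ssh-mounted step (replace github.com with whatever host your dependency is actually cloned from):
# Trust the remote host's key before any ssh-based fetches run
RUN ssh-keyscan github.com > /etc/ssh/ssh_known_hosts
RUN --mount=type=ssh .venv/bin/pip install SOME_LIB --extra-index-url https://example.com/pypi/ -U
and the build is invoked with an explicit id for the forwarded key, e.g.:
docker buildx build --ssh default=/path/to/the/private/key/id_rsa .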
I have created a declarative Jenkins pipeline and one of its stages is as follows:
stage('Docker Image'){
    steps{
        bat 'docker build -t HMT/demo-application:%BUILD_NUMBER% --no-cache -f Dockerfile .'
    }
}
This is the Dockerfile:
FROM tomcat:alpine
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war
EXPOSE 9100
CMD /usr/local/tomcat/bin/cataline.bat run
I am getting the error below:
/bin/sh:
01:33:28 The command '/bin/sh -c wget -O /usr/local/tomcat/webapps/launchstation04.war http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war' returned a non-zero code: 127
UPDATE:
I have updated the command to:
RUN wget -O /usr/local/tomcat/webapps/launchstation04.war -U jenkinsuser:Learning#% http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-20200823.053346-18.war
There is no problem with my command itself; JFrog Artifactory was not authorizing the request. I added username and password details, but it still didn't work.
Error:
wget: server returned error: HTTP/1.1 401 Unauthorized
It didn't work after modifying the password policy to unsupported, but it did work when I allowed anonymous access.
How can I provide access using credentials?
Need more clarification on your question. Not sure where you are using the curl command.
The tomcat:alpine image doesn't contain the curl command unless you install it manually:
bash-4.4# type curl
bash: type: curl: not found
bash-4.4#
If you are asking about the sh -c option: yes, if the script is invoked through CMD, it will use sh. You can try ENTRYPOINT instead.
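If curl is actually needed inside the image, a minimal sketch of installing it and using ENTRYPOINT (assuming Alpine's apk package manager, which the tomcat:alpine base provides):
FROM tomcat:alpine
# curl is not in the base image; install it via apk
RUN apk add --no-cache curl
# exec-form ENTRYPOINT avoids the implicit '/bin/sh -c' wrapper that shell-form CMD uses
ENTRYPOINT ["/usr/local/tomcat/bin/catalina.sh", "run"]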
You can provide the username and password to wget on the command line:
wget --user user --password pass
Or using curl:
curl -u username:password -O
But avoid special characters: change your password to one that uses only [a-z][A-Z][0-9].
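Applied to the Dockerfile from the question, a rough sketch (ARTIFACTORY_USER and ARTIFACTORY_PASS are hypothetical build arguments, not from the original post; note that wget's -U flag sets the User-Agent, not credentials, and if the image ships BusyBox wget rather than GNU wget the --user/--password flags may be unavailable, in which case the credentials can go in the URL instead):
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PASS
# HTTP basic auth against Artifactory; build args remain visible in the image history,
# so an API key or read-only account is preferable to a real password
RUN wget --user=$ARTIFACTORY_USER --password=$ARTIFACTORY_PASS \
    -O /usr/local/tomcat/webapps/launchstation04.war \
    http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-20200823.053346-18.war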
Try an API key instead of the password; I have a feeling that the "#" may be throwing you off. Quotes can help there too, or separating the password with -p.
Also look at the request logs to see whether the entry comes in as a 401 for the user or as anonymous/unauthenticated.
Lastly, see if you can cURL the file from outside the image and then ADD it in, as that removes any external factors that may vary from the host (where I assume the command works).
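A sketch of that last approach; the credential variable and filenames are examples only:
# In the Jenkins stage, fetch the artifact on the agent before building:
#   bat 'curl -u jenkinsuser:%ARTIFACTORY_API_KEY% -O http://localhost:8082/artifactory/demoArtifactory/com/demo/0.0.1-SNAPSHOT/demo-0.0.1-SNAPSHOT.war'
# The Dockerfile then copies the file instead of downloading it, so no credentials enter the build:
FROM tomcat:alpine
COPY demo-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/launchstation04.war
EXPOSE 9100
CMD /usr/local/tomcat/bin/catalina.sh run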
I'm trying to do an offline deployment of a docker image with RPM on CentOS.
My spec file is pretty simple :
Source1: myimage.tar.gz
...
%install
cp %{SOURCE1} ...
...
%post
docker load -i myimage.tar.gz
docker-compose up -d
docker image prune -af
I compress my image using docker save and gzip. Then, on another machine, I just load the image with docker and use docker-compose to run my service.
When executing the commands "docker load" and "docker-compose up", I get this error:
sudo: unable to execute /bin/docker: Permission denied
sudo: unable to execute /bin/docker-compose: Permission denied
sudo: unable to execute /bin/docker: Permission denied
My user is part of the docker group, and I checked whether the RPM was executed as root; it is...
If I run the RPM on my dev machine, it works; if I execute the commands in a script that is not part of the RPM, it works...
Any ideas?
Thanks in advance.
You're probably being blocked by SELinux. You can temporarily disable it to check with setenforce 0.
If that is the problem (it is; this is a comment turned into an answer), some possible solutions:
You might be able to use audit2allow to change the denials into new rules to import.
Maybe udica will help. I don't know enough about it to tell.
I tried the first solution and it worked! grep rpm_script_t /var/log/audit/audit.log | audit2allow -m mypolicy > mypolicy.te
The problem came from the fact that the RPM scriptlets didn't have access to the container_runtime_exec_t:file entrypoint, which I suppose is what allows them to run containers through Docker.
Thanks a lot for the tip!
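For completeness, a sketch of the full audit2allow workflow to actually load such a module (the module name is just an example; -M compiles the .pp package directly):
# Turn the recorded denials for RPM scriptlets into a compiled policy module
grep rpm_script_t /var/log/audit/audit.log | audit2allow -M mypolicy
# Install the module so the %post scriptlet can run docker / docker-compose under SELinux
semodule -i mypolicy.pp
# Re-enable enforcing mode if it was temporarily disabled with 'setenforce 0' for testing
setenforce 1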