OpenAM: Web Policy Agent login to OpenAM fails - docker

I am unable to identify the source of the error. I have checked the settings dozens of times, I tried both the local and the public IPs, I even tried different Web Agent versions, and I have read everything I could find on the topic (at least that is what it feels like).
Question: Why is my Web Agent unable to login to OpenAM?
Initial situation: I have two Docker containers. The first runs a Tomcat server with OpenAM and the second runs an Apache webserver. The containers are deployed on two different virtual machines. Both machines can reach each other via their public as well as their private IPs, and 'network_mode: host' is set in both docker-compose files.
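A basic reachability check from inside the Apache container against OpenAM's isAlive.jsp endpoint can confirm this (a minimal sketch, assuming the default deployment URI):
curl -v "http://<public_ip_openam_server>:8080/openam/isAlive.jsp"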
Following this official guide, I created an agent profile using the AM console with the following specifications:
Agent ID: WebAgent
Agent URL: http://<public_ip_apache_server>:80
Server URL: http://<public_ip_openam_server>:8080/openam
password: password
Within the container running the Apache webserver, I did the following (see the consolidated command sketch at the end of this question):
Stop the Apache webserver.
Install OpenSSL.
Export /<path>/libcrypto.so and /<path>/libssl.so to LD_LIBRARY_PATH.
Make sure that libc.so.6 is available and that it supports the GLIBC_2.3 API by running
strings libc.so.6 | grep GLIBC_2 within /usr/lib/x86_64-linux-gnu/.
Create a password file via echo password > /tmp/pwd.txt followed by chmod 400 /tmp/pwd.txt.
Run the configuration command for the Web Agent:
/apache24_agent/bin/agentadmin --s "/usr/local/apache2/conf/httpd.conf" \
"http://<public_ip_openam_server>:8080/openam" "http://<public_ip_apache_server>:80" "/" \
"WebAgent" "/tmp/pwd.txt" --changeOwner --acceptLicence
Problem:
The last command always fails with the following output:
OpenAM Web Agent for Apache Server installation.
Validating...
Error validating OpenAM - Agent configuration.
Installation failed.
See installation log /usr/local/apache2/apache24_agent/bin/../log/install_20201227114136.log file for more details. Exiting.
Checking the error log:
2020-12-27 11:41:36 license accepted with --acceptLicence option
2020-12-27 11:41:36 license was accepted earlier
2020-12-27 11:41:36 Found user daemon, uid 1, gid 1
2020-12-27 11:41:36 Found group daemon, gid 1
2020-12-27 11:41:36 OpenSSL library status: <removed for readability> OpenSSL v1.1.x library support is available
2020-12-27 11:41:36 validating configuration parameters...
2020-12-27 11:41:36 error validating OpenAM agent configuration
agent login to http://<public_ip_openam_server>:8080/openam fails
2020-12-27 11:41:36 installation error
2020-12-27 11:41:36 installation exit
System and software:
OpenAM Version: 14.5.4
Container running Apache Webserver: x86_64 system, Debian
Version Apache: 2.4.46
Web Policy Agent: Platform = Apache, Platform Version = 2.4, Operating System = Linux, Architecture = 64-bit, Version = 5.6.2.0
OpenSSL Version: v1.1
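For completeness, the preparation steps listed above correspond roughly to the following shell commands (a sketch; the apachectl path assumes the official httpd image layout, and the OpenSSL library directory is a placeholder):
# Stop Apache and install OpenSSL
/usr/local/apache2/bin/apachectl stop
apt-get update && apt-get install -y openssl
# Add the directory holding libcrypto.so and libssl.so to the library path
export LD_LIBRARY_PATH=/<path>:$LD_LIBRARY_PATH
# Confirm that libc supports the GLIBC_2.3 API
strings /usr/lib/x86_64-linux-gnu/libc.so.6 | grep GLIBC_2
# Agent profile password file, readable only by the installing user
echo password > /tmp/pwd.txt
chmod 400 /tmp/pwd.txt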

Are you using the Open Identity Platform community version? I'm afraid Web Agent 5.6.2.0 and OpenAM 14.5.4 could be incompatible. Try an earlier Web Agent version, for example 4.1.1, or switch to OpenIG as an alternative to the Web Agent.
Here are a couple of useful links:
https://github.com/OpenIdentityPlatform/OpenAM/wiki/Quick-Start-Guide
https://github.com/OpenIdentityPlatform/OpenAM/wiki/How-to-Add-Authorization-and-Protect-Your-Application-With-OpenAM-and-OpenIG-Stack

Related

Capistrano 3 asks for SSH user's password since `do-release-upgrade` was done on Ubuntu 20.04 server

I have a Rails app that I have so far been able to deploy successfully to my Ubuntu server using Capistrano 3.
Last night I did a successful server update using do-release-upgrade:
Linux my-server 5.15.0-47-generic #51-Ubuntu SMP Thu Aug 11 07:51:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
I can still ssh into the server using my id_rsa key from my Mac Terminal:
ssh user@my-server.example.com
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-47-generic x86_64)
However, Capistrano now asks for the user's password instead of asking for the passphrase of my id_rsa key:
cap production deploy
user@my-server.example.com's password:
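A verbose SSH login shows whether the RSA key is even being offered and then rejected by the upgraded server (a diagnostic sketch, run from the Mac; BatchMode prevents the fallback password prompt):
ssh -v -o BatchMode=yes user@my-server.example.com exit 2>&1 | grep -iE 'offering|ssh-rsa|denied'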
I also ran ssh-copy-id to make sure the key is re-uploaded:
ssh-copy-id user@my-server.example.com
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
It's all quite weird since just yesterday I did several deployments:
ls -lia rails/releases/ | grep 20220913 | wc -l
9
I did not enable password authentication for SSH access at all, so I am wondering how to re-enable key-based SSH communication with my server.
Update
The issue might be related to "SSH agent forwarding". I did use capistrano-ssh-doctor and it told me that:
SSH agent forwarding report
[success] repo_url setting ok
[success] ssh private key file exists
[success] ssh-agent process seems to be running locally
[success] ssh-agent process recognized by ssh-add command
[success] ssh private keys added to ssh-agent
[success] application repository accessible from local machine
[success] all hosts using passwordless login
[success] forward_agent ok for all hosts
[success] ssh agent successfully forwarded to remote hosts
[error] It seems Capistrano cannot access application git repository from these hosts: my-server.example.com
Actions:
make sure all the previous checks pass. That should make this one work too.
It seems SSH agent forwarding is not set up correctly. Follow the
suggested steps described in error messages. Errors (if more than one)
are ordered by importance, so always start with the first one.
So I logged in on the server and I was able to successfully clone the repository.
There is some information in this post:
I'd still like to find out why I can't use the git@github.com:{github-organization}/{private-repo}.git format for :repo_url, with keys, when all of the SSH forwarding report's requirements seem to be met. If you need further info from me just let me know - and thanks again for any help!
So it seems that the :repo_url needs to be changed. I'll give that a shot.
I did figure out that, for some reason, the following command no longer works:
cap staging deploy
Instead I need to use Bundler:
bundle exec cap staging deploy
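One way to avoid typing the bundle exec prefix every time is to generate project binstubs and call bin/cap instead (a sketch, run inside the app directory):
bundle binstubs capistrano --force
bin/cap staging deploy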
Had the same issue. This fixed it for me: https://askubuntu.com/questions/1409105/ubuntu-22-04-ssh-the-rsa-key-isnt-working-since-upgrading-from-20-04
Add this to the end of /etc/ssh/sshd_config:
PubkeyAcceptedKeyTypes +ssh-rsa
HostKeyAlgorithms +ssh-rsa
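To apply that on the server (a sketch; assumes the stock Ubuntu OpenSSH service name):
echo "PubkeyAcceptedKeyTypes +ssh-rsa" | sudo tee -a /etc/ssh/sshd_config
echo "HostKeyAlgorithms +ssh-rsa" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh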

Could not Install Craft CMS on Local Machine and on VPS

I have been trying to install Craft CMS (free version), but failed each time. At first I attempted to install it on a remote VPS with Ubuntu 20.04 server and MySQL database:
Installation via Composer failed:
In CreateProjectCommand.php line 438:
Could not find package public_html/craft with stability stable.
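That particular message usually means Composer treated the target directory as the package name; the documented form puts the package first (a sketch, assuming the official craftcms/craft starter project and public_html as the intended install directory):
composer create-project craftcms/craft ./public_html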
Then I attempted to install Craft CMS on the VPS manually. I got as far as running the installer and entering my username, email and password, but that resulted in an "Install failed. Please, check your logs for more info" message. I checked the logs and the problem was as follows:
2022-04-06 22:33:39 [-][-][-][error][craft\base\ApplicationTrait::getIsInstalled] There was a problem fetching the info row: SQLSTATE[42S02>
The SQL being executed was: SELECT *
FROM `info`
WHERE `id`=1
LIMIT 1
2022-04-06 22:33:39 [-][-][-][error][yii\db\Exception] PDOException: SQLSTATE[42S02]: Base table or view not found: 1146 Table 'craftdb.inf>
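The missing craftdb.info table suggests the installer never created the schema; Craft's command-line installer can be tried from the project root (a sketch; command names as in the Craft 3 CLI):
php craft setup/db-creds   # re-check the database connection settings
php craft install          # run the installer from the terminal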
Then, on my local machine with Ubuntu 18.04, I got the same Composer error as on the VPS. When attempting a manual install I did not get past this error:
HTTP 503 – Service Unavailable – craft\web\ServiceUnavailableHttpException
Perhaps it is a problem with the free version? Or does anyone have any other ideas?

Cannot Find Docker Host Certificate Authentication Credentials

I'm currently running Jenkins LTS in Docker and I wanted to try the Docker Swarm plugin. However, I can't seem to find the "Docker Host Certificate Authentication" credentials anywhere when adding a cloud provider (see the Credentials screenshot).
Is there a plugin that I need to install?
My current docker plugins:
docker-commons 1.16
docker-java-api 3.0.14
docker-plugin 1.1.9
docker-swarm 1.8
docker-workflow 1.21
I'm at a complete loss, help would be appreciated!
As described in the changelog of the docker-commons plugin, "Docker Host Certificate Authentication Credentials" was renamed to "X.509 Client Certificate" in 1.16.
Link: https://github.com/jenkinsci/docker-commons-plugin/releases/tag/docker-commons-1.16

Jenkins running behind a proxy - version 2.32.3

I am trying to set up Jenkins behind our corporate proxy, as our servers don't have direct access to the internet.
However, it fails to connect to the internet through the proxy server to update plugins.
Error logs:
ntlm authentication scheme selected
Aug 04, 2017 8:14:44 AM INFO org.apache.commons.httpclient.HttpMethodDirector processProxyAuthChallenge
Failure authenticating with NTLM <any realm>@proxy-server:8080
The proxy setup is as below:
Please advise.
I was able to solve this problem by upgrading to the latest version, 2.60.2.
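If upgrading is not an option, besides the in-UI proxy settings under Manage Jenkins > Manage Plugins > Advanced, the proxy can also be handed straight to the Jenkins JVM (a sketch, assuming the Debian/Ubuntu package layout; host and port are placeholders):
# /etc/default/jenkins
JAVA_ARGS="$JAVA_ARGS -Dhttp.proxyHost=proxy-server -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy-server -Dhttps.proxyPort=8080"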

How to solve certificate issues in Puppet

I have installed Docker on my Ubuntu 14.04 OS. In Docker containers I'm running a Puppet master and a Puppet agent, but I'm getting errors during the certificate exchange.
The Puppet agent is not requesting certificates, and it also shows an error saying the name cannot be resolved.
I checked the IP and hostname in /etc/hosts and /etc/hostname.
root@55fe460464d3:/# puppet agent --test
Error: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
root@f7d7516d720e:/# puppet cert list --all
+ "f7d7516d720e" (SHA256) D1:6C:50:5B:BD:F6:AA:91:C4:B2:FD:4D:58:B8:DF:18:32:F4:EB:D7:B2:75:FF:E4:AF:7B:F6:F6:FE:0D:84:54
The puppet cert list --all command is showing only the master certificate, not the client certificate.
What looks to be happening is that the Puppet agent can't talk to or find the puppetmaster to ask for a certificate.
The first thing to check would be that they can talk to each other over the network; the second thing to check is that the short hostname puppet resolves to the puppetmaster when run on the host. Unless you've specified a different DNS name in /etc/puppet/puppet.conf by setting a server = directive in the [main] section, or specified it on the command line with puppet agent -t --server <foo>, it will look for a hostname called puppet and rely on your /etc/resolv.conf's search domains to find it.
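For example, either of these makes the agent container find the master explicitly (a sketch; the hostname and IP are placeholders for your master container):
# Point the agent at the master by name for a single run
puppet agent --test --server=puppetmaster.example.com
# Or map the default name 'puppet' to the master's IP inside the agent container
echo "<master_container_ip>  puppet" >> /etc/hosts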
