OAuth Integration with JFrog Artifactory

I have just deployed JFrog Artifactory using a Dockerfile in an AWS OpenShift environment.
I need to integrate Artifactory with the OpenShift OAuth service.
Could anyone please guide me on how to proceed?
I found the relevant options in the documentation: for the OAuth integration settings, in the Admin module, select Security | OAuth SSO.
I configured it as described above and am getting the following errors:
{"error":"unsupported_grant_type","error_description":"The authorization grant type is not supported by the authorization server."}
=====================================================================
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:anonymous\" cannot list all users in the cluster",
  "reason": "Forbidden",
  "details": {
    "kind": "users"
  },
  "code": 403
}
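The second error is just what the OpenShift API returns for an unauthenticated request, so it can be reproduced outside Artifactory (a sketch, with a placeholder master URL):
# no bearer token, so the request is treated as system:anonymous
curl -ks https://openshift.example.com:8443/oapi/v1/users
# returns the same Forbidden JSON as above
For reference, here is the Dockerfile used for the deployment: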
FROM rhel7:latest
MAINTAINER Naveen Kumar 06 <naveen.sr#tech.com>
RUN set -x \
&& yum -y install tar unzip \
&& yum -y update \
&& yum -y clean all
#java
ENV JAVA_HOME /opt/java
ENV JAVA_VERSION_MAJOR 8
ENV JAVA_VERSION_MINOR 102
ENV JAVA_VERSION_BUILD 14
RUN mkdir -p /opt \
&& curl --fail --silent --location --retry 3 \
--header "Cookie: oraclelicense=accept-securebackup-cookie; " \
http://download.oracle.com/otn-pub/java/jdk/${JAVA_VERSION_MAJOR}u${JAVA_VERSION_MINOR}-b${JAVA_VERSION_BUILD}/server-jre-${JAVA_VERSION_MAJOR}u${JAVA_VERSION_MINOR}-linux-x64.tar.gz \
| gunzip \
| tar -x -C /opt \
&& ln -s /opt/jdk1.${JAVA_VERSION_MAJOR}.0_${JAVA_VERSION_MINOR} ${JAVA_HOME}
#jfrog-artifactory-pro-4.12.1.zip
#https://dl.bintray.com/jfrog/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/4.12.1/jfrog-artifactory-pro-4.12.1.zip
ENV ARTIFACTORY_VERSION 4.12.1
ENV ARTIFACTORY_HOME /artifactory-pro-${ARTIFACTORY_VERSION}
#ADD http://dl.bintray.com/content/jfrog/artifactory/jfrog-artifactory-pro-${ARTIFACTORY_VERSION}.zip?direct artifactory.zip
ADD https://dl.bintray.com/jfrog/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/4.12.1/jfrog-artifactory-pro-4.12.1.zip artifactory.zip
RUN unzip artifactory.zip
RUN sed -i -e 's/Xmx2g/Xmx512m/g' artifactory-*/bin/artifactory.default
#artifactory-oss-4.12.1/tomcat/webapps/
RUN chmod +x /artifactory-pro-${ARTIFACTORY_VERSION}/bin/artifactory.sh
# Expose the default endpoint
EXPOSE 8081
WORKDIR ${ARTIFACTORY_HOME}
RUN chmod -R 777 ${ARTIFACTORY_HOME}/
# Run the embedded tomcat container
ENTRYPOINT /artifactory-pro-${ARTIFACTORY_VERSION}/bin/artifactory.sh
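For reference, a minimal way to build and run this image (a sketch; the tag name is arbitrary):
docker build -t artifactory-pro:4.12.1 .
docker run -d -p 8081:8081 artifactory-pro:4.12.1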

The problem in our case is that the OpenShift OAuth server does not issue an id_token, and that is the reason for your error. We wrote a proxy that delivers to Artifactory the response it expects from the OpenShift OAuth endpoints.
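A quick way to check what the OpenShift OAuth server actually advertises is its metadata endpoint (a sketch, with a placeholder master URL):
curl -ks https://openshift.example.com:8443/.well-known/oauth-authorization-server
# the response lists grant_types_supported and response_types_supported;
# there is no OpenID Connect id_token support, which matches the
# unsupported_grant_type error above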
Be aware that Artifactory uses java.net.HttpURLConnection, not Apache HttpClient, to connect to the OpenID provider. That class reads the typical proxy settings from the Java system properties:
"https.proxyHost"
"https.proxyPort"
"https.proxyUser"
"https.proxyPassword"
To set them in Artifactory, edit artifactory.bat in $ARTIFACTORY_HOME/bin:
set JAVA_OPTIONS=-server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC -Dhttps.proxyHost=myProxyHost -Dhttps.proxyPort=myProxyPort -Dhttps.proxyUser=myProxyUser -Dhttps.proxyPassword=XXXX
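Since the deployment above runs on Linux, the same properties would presumably go into bin/artifactory.default instead (a sketch; the proxy host, port, user, and password are placeholders):
export JAVA_OPTIONS="-server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC -Dhttps.proxyHost=myProxyHost -Dhttps.proxyPort=myProxyPort -Dhttps.proxyUser=myProxyUser -Dhttps.proxyPassword=XXXX"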

Related

Nginx Dockerfile amazon Linux

Please help.
I am building an NGINX Plus ingress controller image and deploying it in EKS using a Dockerfile.
Dockerfile:
FROM amazonlinux:2
LABEL maintainer="armand#f5.com"
ENV NGINX_VERSION 23
ENV NJS_VERSION 0.5.2
ENV PKG_RELEASE 1.amzn2.ngx
ENV PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${PATH}"
RUN mkdir -p /etc/ssl/nginx
ADD nginx-repo.crt /etc/ssl/nginx
ADD nginx-repo.key /etc/ssl/nginx
ADD qlik.crt /etc/ssl/nginx
RUN update-ca-trust extract
RUN yum -y update \
&& yum -y install sudo
RUN set -x \
&& chmod 644 /etc/ssl/nginx/* \
&& yum install -y --setopt=tsflags=nodocs wget ca-certificates bind-utils vim-minimal shadow-utils \
&& groupadd --system --gid 101 nginx \
&& adduser -g nginx --system --no-create-home --home /nonexistent --shell /bin/false --uid 101 nginx \
&& usermod -s /sbin/nologin nginx \
&& usermod -L nginx \
&& wget -P /etc/yum.repos.d https://cs.nginx.com/static/files/nginx-plus-amazon2.repo \
&& yum --showduplicates list nginx-plus \
&& yum install -y --setopt=tsflags=nodocs nginx-plus-${NGINX_VERSION}-${PKG_RELEASE} \
&& rm /etc/nginx/conf.d/default.conf \
&& mkdir -p /var/cache/nginx \
&& mkdir -p /var/lib/nginx/state \
&& chown -R nginx:nginx /etc/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
&& ulimit -c -m -s -t unlimited \
&& yum clean all \
&& rm -rf /var/cache/yum \
&& rm -rf /etc/yum.repos.d/* \
&& rm /etc/ssl/nginx/nginx-repo.crt /etc/ssl/nginx/nginx-repo.key
RUN echo "root:root" | chpasswd
EXPOSE 80 443 8080
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
I am starting the container using the following helm commands:
helm upgrade \
--install my-athlon-ingress-controller nginx-stable/nginx-ingress --version 0.11.3 --debug \
--set controller.image.pullPolicy=Always \
--set controller.image.tag=6.0.1 \
--set controller.image.repository=957123096554.dkr.ecr.eu-central-1.amazonaws.com/nginx-service \
--set controller.nginxplus=true \
--set controller.enableSnippets=true \
--set controller.enablePreviewPolicies=true \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-type'='nlb' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-backend-protocol'='tcp' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-proxy-protocol'='*' \
--set-string controller.service.annotations.'service\.beta\.kubernetes\.io/aws-load-balancer-ssl-ports'='443'
echo Setting up SSL
export tlskey=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-key |jq --raw-output '.SecretString' )
echo $tlskey
export tlscrt=$(aws secretsmanager get-secret-value --secret-id myathlon/infrastructure/$(env)/gate-crt |jq --raw-output '.SecretString' )
echo $tlscrt
helm upgrade --install nginx-certificate-secrets ./helm-chart-nginx-certificates --set tlscrt=$tlscrt --set tlskey=$tlskey
OK, let me give more clarity. I have an nginx pod running on Debian 10, and when I try to curl a particular endpoint in Keycloak I get an error like:
2022/06/13 12:17:46 [info] 35#35: *35461 invalid JWK set while sending to client, client: 141.113.3.32, server: gate-acc.athlon.com, request:
But when I curl the same endpoint from an application (Java) pod, I get a 200 response.
Both the nginx pod and all my application pods are in the same namespace and the same EKS cluster.
The difference I see between them is that the application pods use Amazon Linux as the base image, while the nginx pod is based on Debian.
So I suspect the OS is the issue. That is why I am now trying to build an NGINX Plus image on Amazon Linux, deploy it using helm, and then curl the Keycloak endpoint; that is when I get this PATH-not-found issue.
I assume Amazon Linux may have some root certificates already trusted built in, so it is able to curl my Keycloak, while Debian does not.
This is the reason I am doing this: adding the certificate in the Dockerfile is an interim solution; if it works, I can add it as a Secret and mount it as a file system.
Both nginx pods (whether built on Amazon Linux or Debian) have only the nginx user. I am not able to log in as root, so I cannot install utilities like tcpdump, mtr, or dig to see what happens when I curl; strangely, not even ps, sudo, or other basic commands work since I don't have root, and I cannot install anything.
Error :
Error: failed to start container "my-athlon-ingress-controller-nginx-ingress": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-nginx-plus=true": executable file not found in $PATH: unknown
My goal is to deploy this image with the root certificate installed on the Amazon Linux image and to have root access in the pod.
I am getting the above message; any help is much appreciated. I also added an ENV PATH in my Dockerfile. (The message itself suggests the helm chart is passing controller arguments such as -nginx-plus=true to an image whose command is plain nginx rather than the nginx-ingress binary, so the runtime tries to execute the flag as the executable.)
qlik.crt contains the root certificate.
Please help, thanks
You do not need to rebuild the nginx Docker image just to load certificates. You can put them in a Kubernetes Secret and mount that Secret as a volume in the Deployment/DaemonSet configuration.
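A minimal sketch of that approach (the Secret and volume names here are placeholders):
kubectl create secret generic qlik-root-ca --from-file=qlik.crt
and in the Deployment/DaemonSet pod spec:
volumes:
- name: qlik-root-ca
  secret:
    secretName: qlik-root-ca
containers:
- name: nginx-ingress
  volumeMounts:
  - name: qlik-root-ca
    mountPath: /etc/ssl/nginx
    readOnly: true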

Issues deploying Self-Hosted Agent on Linux (Ubuntu 18.04) Container

To begin, I followed this documentation in order to deploy a self-hosted agent on a Linux container. I didn't do anything other than create the Dockerfile and start.sh file exactly as it stated (copy and paste). To confirm, I will add the files here:
Dockerfile
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
Start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi
AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi
rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."
./config.sh remove --unattended \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE")
fi
}
print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_RESPONSE=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;api-version=3.0-preview' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")
if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then
AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
| jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .[length-1] | .[3]')
fi
if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then
echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account '$AZP_URL' is correct and the token is valid for that account"
exit 1
fi
print_header "2. Downloading and installing Azure Pipelines agent..."
curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run.sh & wait $!
Despite copying and pasting these from the documentation, I receive an error when it reaches the 3rd step (Configuring Azure Pipelines agent) in the start.sh script.
Error message: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
If it helps, I am running Docker on macOS, but as you can see the container is Ubuntu.
Thank you
According to the documentation, both Windows and Linux are supported as container hosts, but macOS is not supported as a container host. So you can try to create a new Windows Docker container and try again.
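If the underlying cause is CPU emulation (the qemu-x86_64 message suggests the Ubuntu image is being emulated, for example on an Apple Silicon Mac), it may also be worth forcing an x86_64 build and run explicitly; a sketch, with placeholders for the organization URL and PAT:
docker build --platform linux/amd64 -t dockeragent:latest .
docker run --platform linux/amd64 -e AZP_URL="https://dev.azure.com/<org>" -e AZP_TOKEN="<PAT>" -e AZP_AGENT_NAME="docker-agent-1" dockeragent:latest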

SSH into Azure web-app container running with non root user

I am running an Elasticsearch and Kibana service within a container using the Azure Web App for Containers service. I was keen on checking the SSH connectivity for this container using Azure's Web SSH console feature. I followed the Microsoft documentation for SSH into custom containers https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh which shows the example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as a root user, so I had to make the sshd process run as the elastic user. I was able to get the sshd process running, and it accepts the SSH connection from my host; however, the credentials I am setting in the Dockerfile (elasticsearch:Docker!) are throwing an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
&& adduser -D elasticsearch \
&& apk add openssh \
&& echo "elasticsearch:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ek_version} elasticsearch \
&& mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
&& wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
| tar -zx \
&& mv kibana-${ek_version}-linux-x86_64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Staring the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
The error I am getting: [screenshot of the Access Denied error omitted]
Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.
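One way to verify that outside of Azure is to run the image locally and attempt the same login (a sketch, assuming the image is tagged es-kibana and sshd listens on 2222 as in the sshd_config above):
docker run -d -p 2222:2222 --name es-ssh-test es-kibana
ssh -p 2222 elasticsearch@localhost
# should prompt for the password set in the Dockerfile (Docker!)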

How do I debug a Jenkins agent launch error?

I added a new worker/slave via a Docker plugin agent template that uses an Alpine image. The agent is instantiated, invoked, and authentication succeeds, but the launch of the agent on the slave fails with the following error (which I get from Jenkins > Nodes > my_node_id > See log for more details):
[03/27/19 18:11:39] [SSH] Authentication successful.
SSH connection reports a garbage before a command execution.
Check your .bashrc, .profile, and so on to make sure it is quiet.
The received junk text is as follows:
nologin: this account is not available
null
[03/27/19 18:11:39] Launch failed - cleaning up connection
[03/27/19 18:11:39] [SSH] Connection closed.
There's not enough info for me to debug this - just the error output from some unknown command. It looks like the agent is issuing some command with a nologin parameter that is being misinterpreted as a user ID. My guess would be it's some slave image configuration or package install problem.
Is there any way I can find out exactly what command(s) the agent issued? Any way to get more complete agent launch logs?
FWIW here's my slave Dockerfile:
FROM docker:stable-dind
RUN apk update; \
apk upgrade; \
apk add git; \
apk add python3; \
pip3 install pyyaml pexpect requests ruamel.yaml; \
apk add openjdk8; \
apk add curl; \
apk add sudo; \
apk add bash; \
apk add openssh-server; \
rm -rf /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key; \
/usr/bin/ssh-keygen -A
EXPOSE 22
# Docker and Jenkins users
RUN addgroup -g 1001 -S docker; \
adduser -D -u 1001 -S appuser -G docker; \
addgroup -S jenkins; adduser -S jenkins -G jenkins; \
echo "jenkins:xxxxxxxxx" | chpasswd; \
chown -R jenkins:jenkins /home/jenkins
CMD ["/usr/sbin/sshd", "-D"]
The nologin: this account is not available problem was because the Jenkins agent requires bash, not sh, as the login shell. Changing the adduser line in the Dockerfile fixed that problem:
adduser -S jenkins -G jenkins --shell /bin/bash
Now on to the next error :-)
Note: I did not find out how to get more detailed info about the agent commands or logs. If anyone knows, I would be interested to hear how.
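For the follow-up question: the SSH launcher essentially runs a non-interactive ssh user@host <command>, so the "garbage before a command execution" check can be reproduced by hand (a sketch, substituting your agent's address):
# should print nothing at all; any output here is the junk Jenkins complains about
ssh jenkins@agent-host true
# then confirm the agent user can start a JVM the same way
ssh jenkins@agent-host 'java -version'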

Docker Is supposed to be listening but it doesn't

I deployed my first Scala project on Docker, but I have a problem: Docker says that the server has been started, yet it doesn't listen to any requests, even though I exposed the port to the host. When I tried a GET request, the connection was refused; I also tried to telnet to the port, and it seems there are no listeners on port 9000, nor on 3200 or 3000. Please find below what I have written in the Dockerfile:
FROM jelastic/sbt
# Env variables
ENV SCALA_VERSION 2.12.4
ENV SBT_VERSION 1.1.0
# Scala expects this file
RUN touch /usr/lib/jvm/java-8-openjdk-amd64/release
# Install Scala
## Piping curl directly in tar
RUN \
curl -fsL https://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz | tar xfz - -C /root/ && \
echo >> /root/.bashrc && \
echo "export PATH=~/scala-$SCALA_VERSION/bin:$PATH" >> /root/.bashrc
# Install sbt
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb && \
apt-get update && \
apt-get install sbt && \
sbt sbtVersion
WORKDIR /
ADD play /
RUN tree /
EXPOSE 9000
CMD sbt run
and my run command was
docker run -p 9000:9000 -t bee
where bee is my image name.
As you can see, the server starts properly. [Screenshots of the startup output and docker ps omitted.]
If you look at your screenshot, it clearly states that the Docker machine is located at 192.168.99.100, so that is the address you need to use.
Open http://192.168.99.100:9000 and it should work.
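If you are using Docker Toolbox/docker-machine (which the 192.168.99.100 address suggests), you can confirm the VM address from a shell; a sketch, assuming the default machine name:
docker-machine ip default
curl http://$(docker-machine ip default):9000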
