Velero for S3-compatible storage

How do I configure Velero to connect to S3-compatible storage? We are trying to configure it using this command:
velero install \
--plugins velero/velero-plugin-for-aws:v1.0.0 \
--provider aws \
--bucket sparktest \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config s3ForcePathStyle="true",s3Url=https://<s3 compatible url>
It's throwing this error:
$ kubectl logs deployment/velero -n velero
time="2023-01-17T11:28:43Z" level=error msg="Error getting backup store for this location" backupLocation=default controller=backup-sync error="rpc error: code = Unknown desc = Invalid s3 url , URL must be valid according to https://golang.org/pkg/net/url/#Parse and start with http:// or https://" error.file="/go/src/github.com/vmware-tanzu/velero-plugin-for-aws/velero-plugin-for-aws/object_store.go:195" error.function=main.newAWSConfig logSource="pkg/controller/backup_sync_controller.go:182"

Related

When enabling OAuth2 on pgAdmin for GitLab, I get a "Missing jwks_uri in metadata" error

I used the configuration from: enabling oauth2 with pgadmin and gitlab
The main difference is that I have a local GitLab setup at https://gitlab_company_org
and a local (Dockerized) pgAdmin instance at http://pgadmin_projectx_company_org:8000
But I get the error {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null} when I try to log in.
So my configs are:
config_local.py:
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
MASTER_PASSWORD = True
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'gitlab',
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        'OAUTH2_CLIENT_ID': 'gitlab_client_id',
        'OAUTH2_CLIENT_SECRET': 'gitlab_client_secret',
        'OAUTH2_TOKEN_URL': 'https://gitlab_company_org/oauth/token',
        'OAUTH2_AUTHORIZATION_URL': 'https://gitlab_company_org/oauth/authorize',
        'OAUTH2_API_BASE_URL': 'https://gitlab_company_org/oauth/',
        'OAUTH2_USERINFO_ENDPOINT': 'userinfo',
        'OAUTH2_SCOPE': 'openid email profile',
        'OAUTH2_ICON': 'fa-gitlab',
        'OAUTH2_BUTTON_COLOR': '#E24329',
    }
]
OAUTH2_AUTO_CREATE_USER = True
run_pgadmin.sh
mkdir -p ./pgadmin
mkdir -p ./pgadmin/data
touch ./pgadmin/config_local.py
chown -R 5050:5050 ./pgadmin
docker stop pgadmin
docker rm pgadmin
docker pull dpage/pgadmin4
docker run -p 8000:80 \
--name pgadmin \
-e 'PGADMIN_DEFAULT_EMAIL=pgadmin@company.org' \
-e 'PGADMIN_DEFAULT_PASSWORD=somesupersecretsecret' \
-e 'PGADMIN_CONFIG_LOGIN_BANNER="Authorised users only!"' \
-v /opt/container/pgadmin/data:/var/lib/pgadmin \
-v /opt/container/pgadmin/config_local.py:/pgadmin4/config_local.py:ro \
-d dpage/pgadmin4
When trying to log in via the GitLab button I get the GitLab login and allow the app, but afterwards I get the error {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null}, which seems to be a JSON response to: http://pgadmin.projectx.company.org:8000/oauth2/authorize?code=VERYLONGCODE&state=SOMEOTHERKINDOFCODE
Solution:
Thanks to Aditya Toshniwal: I tried the new dpage/pgadmin4:snapshot (2023-01-09-2) tag on Docker Hub and had to add the OAUTH2_SERVER_METADATA_URL parameter (value: https://gitlab_company_org/oauth/.well-known/openid-configuration), which I found in the issue he mentioned. Now it works with GitLab on-prem. Awesome!
The issue is fixed (https://github.com/pgadmin-org/pgadmin4/issues/5666) and will be available in the pgAdmin release coming this week. You can also try the candidate build here: https://developer.pgadmin.org/builds/2023-01-09-2/
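For reference, a sketch of the relevant config_local.py addition (the URL is the same placeholder used above; verify the exact path against your GitLab instance's /.well-known/openid-configuration):
OAUTH2_CONFIG = [
    {
        # ... same entries as in the config above ...
        # Fixes the "Missing jwks_uri in metadata" error:
        'OAUTH2_SERVER_METADATA_URL': 'https://gitlab_company_org/oauth/.well-known/openid-configuration',
    }
]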

[?1034hsh-4.2$ Cannot perform start session: EOF

Getting the error below while connecting to an AWS EC2 instance through SSM in Jenkins.
Starting session with SessionId:
[?1034hsh-4.2$ Cannot perform start session: EOF
Command used in Jenkins (execute shell):
INSTANCE_ID=$(aws ec2 describe-instances --filters "Name=tag:t_name,Values=appdev" --region us-east-1 | jq -r '.Reservations[].Instances[].InstanceId')
echo "INSTANCE_ID: $INSTANCE_ID"
aws ssm start-session --region us-east-1 --target $INSTANCE_ID
Why do you want to start a session in Jenkins for SSM?
start-session is used to initiate a connection to a target (for example, a managed node) for a Session Manager session.
To work with SSM from Jenkins you can instead pass commands directly, without creating a session, using send-command.
Example: To untar files in your instance
aws ssm send-command --instance-ids "${INSTANCE_ID}" \
--region us-east-1 --document-name "AWS-RunShellScript" \
--output-s3-bucket-name "$bucketName" --output-s3-key-prefix "$bucketDir" \
--comment "Untar Files" \
--parameters '{"commands":["tar -xvf /tmp/repo.tar -C /tmp" ]}'
You can pass any number of commands this way.
After each command, you can call a shell function to check the command status and, if it fails, exit your loop, as sketched below.
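A sketch of such a check, assuming the command ID is captured from send-command with --query (the check_status function name is illustrative):
COMMAND_ID=$(aws ssm send-command --instance-ids "${INSTANCE_ID}" \
  --region us-east-1 --document-name "AWS-RunShellScript" \
  --parameters '{"commands":["tar -xvf /tmp/repo.tar -C /tmp"]}' \
  --query 'Command.CommandId' --output text)

check_status() {
  # Poll the invocation until it leaves the Pending/InProgress states
  while true; do
    STATUS=$(aws ssm get-command-invocation --region us-east-1 \
      --command-id "${COMMAND_ID}" --instance-id "${INSTANCE_ID}" \
      --query 'Status' --output text)
    case "${STATUS}" in
      Success) return 0 ;;
      Failed|Cancelled|TimedOut) echo "Command failed: ${STATUS}"; return 1 ;;
      *) sleep 5 ;;
    esac
  done
}
check_status || exit 1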

Hyperledger Fabric v1.4.4: instantiating smart contract on mychannel fails with an error

I am following the Hyperledger Fabric v1.4.4 "Writing Your First Application" tutorial [1], but I am having a problem running ./startFabric.sh javascript:
+ echo 'Instantiating smart contract on mychannel'
Instantiating smart contract on mychannel
+ docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp cli peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n fabcar -l node -v 1.0 -c '{"Args":[]}' -P 'AND('\''Org1MSP.member'\'','\''Org2MSP.member'\'')' --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
2019-11-25 16:14:38.470 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 001 Using default escc
2019-11-25 16:14:38.470 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 002 Using default vscc
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Failed to generate platform-specific docker build: Error returned from build: 1 "npm ERR! code EAI_AGAIN
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/fabric-shim failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org:443
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-11-25T16_17_00_000Z-debug.log
"
I think that error is related to the docker image, because it occurs when the following code is executed:
echo "Instantiating smart contract on mychannel"
docker exec \
-e CORE_PEER_LOCALMSPID=Org1MSP \
-e CORE_PEER_MSPCONFIGPATH=${ORG1_MSPCONFIGPATH} \
cli \
peer chaincode instantiate \
-o orderer.example.com:7050 \
-C mychannel \
-n fabcar \
-l "$CC_RUNTIME_LANGUAGE" \
-v 1.0 \
-c '{"Args":[]}' \
-P "AND('Org1MSP.member','Org2MSP.member')" \
--tls \
--cafile ${ORDERER_TLS_ROOTCERT_FILE} \
--peerAddresses peer0.org1.example.com:7051 \
--tlsRootCertFiles ${ORG1_TLS_ROOTCERT_FILE}
I don't know much about Docker, but I'm learning. In the meantime, can anyone help me with this problem?
[1] https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html
Update 1
The same error occurs when I run ./byfn.sh up -l node, but not with ./byfn.sh up. I think the error is connected to fabric-shim. I am still looking for an answer.
The command to instantiate the smart contract tries to start a new chaincode container, and this is failing because the new container cannot successfully run npm install.
The problem could be a Docker DNS issue, or an npm registry connection problem caused by the country or corporate network you are connecting from (see the DNS sketch after the links below).
The following two previous answers should help you:
Network calls fail during image build on corporate network
Error while running fabcar sample in javascript
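If it turns out to be a Docker DNS issue, a common fix (a sketch, assuming a Linux host; the resolver addresses are illustrative, so substitute your corporate resolvers if applicable) is to give the Docker daemon explicit DNS servers so the chaincode build container can resolve registry.npmjs.org:
# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
Then restart the daemon (sudo systemctl restart docker) and rerun ./startFabric.sh javascript.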

How to make GitLab Runner in Docker see a custom CA Root certificate

I have installed and configured:
an on-premises GitLab Omnibus on ServerA running on HTTPS
an on-premises GitLab-Runner installed as Docker Service in ServerB
ServerA certificate is generated by a custom CA Root
The Configuration
I have put the CA Root Certificate on ServerB:
/srv/gitlab-runner/config/certs/ca.crt
Installed the Runner on ServerB as described in Run GitLab Runner in a container - Docker image installation and configuration:
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
Registered the Runner as described in Registering Runners - One-line registration command:
docker run --rm -t -i \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
--name gitlab-docker-runner gitlab/gitlab-runner register \
--non-interactive \
--executor "docker" \
--docker-image alpine:latest \
--url "https://MY_PRIVATE_REPO_URL_HERE/" \
--registration-token "MY_PRIVATE_TOKEN_HERE" \
--description "MyDockerServer-Runner" \
--tag-list "TAG_1,TAG_2,TAG_3" \
--run-untagged \
--locked="false"
This command gave the following output:
Updating CA certificates...
Runtime platform arch=amd64 os=linux pid=5 revision=cf91d5e1 version=11.4.2
Running in system-mode.
Registering runner... succeeded runner=8UtcUXCY
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
I checked with
$ docker exec -it gitlab-runner bash
and once in the container with
$ awk -v cmd='openssl x509 -noout -subject' '
/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
and the custom CA root is correctly there.
The Problem
When running Gitlab-Runner from GitLab-CI, the pipeline fails miserably telling me that:
$ git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
Cloning into 'My-Project.wiki'...
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@ServerA/foo/bar/My-Project.wiki.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
ERROR: Job failed: exit code 1
It does not recognize the issuer (my custom CA Root), but according to The self-signed certificates or custom Certification Authorities, point n. 1, it should work out of the box:
Default: GitLab Runner reads system certificate store and verifies the GitLab server against the CA’s stored in system.
I then tried the solution from point n. 3, editing /srv/gitlab-runner/config/config.toml and adding:
[[runners]]
  tls-ca-file = "/srv/gitlab-runner/config/certs/ca.crt"
But it still doesn't work.
How can I make Gitlab Runner read the CA Root certificate?
You have two options:
Ignore SSL verification
Put this at the top of your .gitlab-ci.yml:
variables:
  GIT_SSL_NO_VERIFY: "1"
Point GitLab-Runner to the proper certificate
As outlined in the official documentation, you can use the tls-*-file options to set up your certificate, e.g.:
[[runners]]
  ...
  tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
  [runners.docker]
    ...
As the documentation states, "this file will be read every time when runner tries to access the GitLab server."
Other options include tls-cert-file to define the certificate to be used if needed.
While I still haven't figured out why it doesn't work out of the box, I've found the egg of Columbus:
Gitlab-Runner configuration:
[[runners]]
  name = "MyDockerServer-Runner"
  url = "https://MY_PRIVATE_REPO_URL_HERE/"
  token = "MY_TOKEN_HERE"
  executor = "docker"
  ...
  [runners.docker]
    image = "ubuntu:latest"
    # The trick is the following:
    volumes = ["/cache","/srv/gitlab-runner/config:/etc/gitlab-runner"]
    ...
.gitlab-ci.yml pipeline:
MyJob:
  image: ubuntu:latest
  script:
    - awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt
    - git clone https://gitlab-ci-token:${CI_BUILD_TOKEN}@ServerA/foo/bar/My-Project.wiki.git
    - wget -O foo.png https://ServerA/foo/bar/foo.png
  before_script:
    - apt-get update -y >/dev/null
    - apt-get install -y apt-utils dialog >/dev/null
    - apt-get install -y git >/dev/null
    - apt-get install -y wget >/dev/null
    # The trick is the following:
    - cp /etc/gitlab-runner/certs/ca.crt /usr/local/share/ca-certificates/ca.crt
    - update-ca-certificates
That's it:
Mount the volume once (per Docker executor)
Update the CA certificates once (per job)
And everything will work as expected: git clone, wget https, etc.
A great workaround, until someone at GitLab fixes it or explains where I'm wrong (be my guest!)
Not sure it's the best approach, but at least it worked for me: you can create a customized GitLab Runner image and add your root CA inside it:
├── Dockerfile
└── myca.crt
# Dockerfile
FROM gitlab/gitlab-runner:latest
COPY myca.crt /usr/local/share/ca-certificates
RUN update-ca-certificates
Build it:
docker build -t custom-gitlab-runner .
And rerun all your commands, just remember to use this new image name.
Off-topic, but related and might be useful:
Dockerized gitlab-runner also seems to ignore entries in your /etc/hosts, so if you have launched GitLab on a custom domain, e.g. https://gitlab.local.net, you need to pass the values from /etc/hosts when launching/registering the runner:
docker run -d --name gitlab-runner --restart always \
--add-host="gitlab.local.net:192.168.1.100" \
...
If you want to launch a docker:dind (Docker-in-Docker service) container to build Docker images, you also need to set these values inside /srv/gitlab-runner/config/config.toml:
[[runners]]
  url = "https://gitlab.local.net/"
  executor = "docker"
  pre_clone_script = "echo '192.168.1.100 gitlab.local.net registry.local.net' >> /etc/hosts"
  ...
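Alternatively (a sketch, assuming a reasonably recent runner version), the Docker executor's extra_hosts option injects the same host entries without a pre_clone_script:
[[runners]]
  url = "https://gitlab.local.net/"
  executor = "docker"
  [runners.docker]
    # Same hypothetical addresses as above
    extra_hosts = ["gitlab.local.net:192.168.1.100", "registry.local.net:192.168.1.100"]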
From the output you provided, I think the certificate might be OK but you are lacking the CRL file: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
The CRL file is used to verify that a certificate, even if valid, hasn't been revoked by the CA owner. You would then need to:
1) Generate a CRL file based on your CA:
openssl ca -gencrl -keyfile ca.key -cert ca.crt -out crl.pem
source: https://blog.didierstevens.com/2013/05/08/howto-make-your-own-cert-and-revocation-list-with-openssl/
2) Instruct the runner to use it:
[[runners]]
  ...
  tls-ca-file = "/etc/gitlab-runner/ssl/ca-bundle.crt"
  crl-file = "/etc/gitlab-runner/ssl/ca.crl"
3) Setting GIT_SSL_NO_VERIFY will of course work, but you will be more vulnerable to man-in-the-middle attacks.

Delete an image from Docker Registry v2

The Docker Registry v2 API has an endpoint to delete an image:
DELETE /v2/<name>/manifests/<reference>
https://github.com/docker/distribution/blob/master/docs/spec/api.md#deleting-an-image
However the doc says:
For deletes, reference must be a digest or the delete will fail.
Indeed, using a tag does not work and returns a 405 Operation Not Supported.
The problem is, there doesn't seem to be any endpoint to get the digest of an image.
The endpoints to list images and tags only list those.
Trying to get the manifest with
GET /v2/<name>/manifests/<reference>
using the tag as <reference>, I see that a Docker-Content-Digest header is set with a digest, which the doc describes as
Docker-Content-Digest: Digest of the targeted content for the request.
while the body contains a bunch of blobSum: <digest> entries.
If I try using the header digest value with
GET /v2/<name>/manifests/<reference>
and the digest as <reference>, I get a 404.
The digest looks like sha256:6367f164d92eb69a7f4bf4cab173e6b21398f94984ea1e1d8addc1863f4ed502, and I tried with and without the sha256: prefix, but no luck.
So how am I supposed to get the digest of the image I want to delete, to delete it?
curl -u login:password -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET https://registry.private.com/v2/<name>/manifests/<tag>
The digest to use is the one returned in the Docker-Content-Digest response header (add -I or -D - to the curl call to see it); note that the config > digest field in the JSON body is the config blob digest (the image ID), not the manifest digest that DELETE expects.
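Spelled out as two commands (a sketch using the same placeholder registry and image name as the curl above; the v2 Accept header is required, otherwise the registry returns a schema1 digest that won't match):
# 1. Read the manifest digest from the Docker-Content-Digest header
DIGEST=$(curl -u login:password -sSL -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry.private.com/v2/<name>/manifests/<tag>" \
  | awk 'tolower($1) == "docker-content-digest:" { print $2 }' | tr -d '\r')

# 2. Delete the manifest by digest
curl -u login:password -X DELETE \
  "https://registry.private.com/v2/<name>/manifests/${DIGEST}"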
This is not a trivial operation in the Docker Registry API right now, but I hope this procedure helps:
Create a file and give it a name; for me it will be delete-image.sh:
#!/bin/bash
# Inspired by: https://gist.github.com/jaytaylor/86d5efaddda926a25fa68c263830dac1
set -o errexit

if [ -z "$1" ]; then
  echo "Error: The image name arg is mandatory"
  exit 1
fi

registry='localhost:5000'
name=$1

# Resolve the first tag of the image, read its manifest digest from the
# Docker-Content-Digest header, then DELETE the manifest by that digest.
curl -v -sSL -X DELETE "http://${registry}/v2/${name}/manifests/$(
  curl -sSL -I \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "http://${registry}/v2/${name}/manifests/$(
      curl -sSL "http://${registry}/v2/${name}/tags/list" | jq -r '.tags[0]'
    )" \
  | awk '$1 == "Docker-Content-Digest:" { print $2 }' \
  | tr -d $'\r' \
)"
Give the file execute permission, then run it:
sudo chmod u+x ./delete-image.sh
./delete-image.sh <your-image-name>
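Note that the registry refuses manifest deletes unless deletion is enabled in its configuration. A sketch of running the registry with deletes switched on via the standard environment variable (the volume path is chosen to match the cleanup step below):
docker run -d -p 5000:5000 --name registry.localhost \
  -e REGISTRY_STORAGE_DELETE_ENABLED=true \
  -v ${HOME}/registry:/var/lib/registry \
  registry:2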
After deleting the image, collect the garbage:
docker exec -it registry.localhost bin/registry \
garbage-collect /etc/docker/registry/config.yml
Now delete the folder for that image (I'm assuming that you created a volume previously):
sudo rm -rf ${HOME}/registry/docker/registry/v2/repositories/<your-image-name>
If you have not created a volume, you may have to enter the container to delete that folder. In any case, it's a good idea to restart the container:
docker restart registry.localhost
Procedure not recommended for production environments.
I hope that we will have better support for these operations natively in the Docker API in the future.
