Helm Sentry installation failed on deployment: initdb: could not change permissions of directory

I have a local OpenShift instance where I'm trying to install Sentry using helm:
helm install --name sentry --wait stable/sentry
All pods are deployed fine except the PostgreSQL pod, which is deployed as a dependency of Sentry.
This pod's initialization fails with a CrashLoopBackOff, and the logs show the following:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
Not sure where to start to fix this issue so I can get Sentry deployed successfully with all its dependencies.

The issue was resolved by adding permissions to the service account that was being used to run commands on the pod.
In my case the default service account on OpenShift was being used.
I added the appropriate permissions to this service account using the CLI:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
Also see: https://blog.openshift.com/understanding-service-accounts-sccs/
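On OpenShift, each admitted pod is annotated with the SCC that validated it, so once the PostgreSQL pod restarts you can confirm the change took effect. A quick check (the pod name below is a placeholder; use the actual name from oc get pods):
oc get pod sentry-postgresql-0 -o yaml | grep 'openshift.io/scc'
# expected once the new SCC is in effect: openshift.io/scc: anyuid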

Related

Setting up PwnMachine self-hosted docker

I'm trying to set up PwnMachine v2 (https://github.com/yeswehack/pwn-machine) properly.
PwnMachine is a self-hosting solution based on Docker, aiming to provide an easy-to-use pwning station for bug hunters.
The basic install includes a web interface, a DNS server, and a reverse proxy.
Installation
Using Docker
Clone the repository locally on your machine
git clone https://github.com/yeswehack/pwn-machine.git
Enter the repository previously cloned:
cd pwn-machine/
Configure the .env <-- Having trouble with this step
If you start building the project directly, you will be faced with an error:
${LETS_ENCRYPT_EMAIL?Please provide an email for let's encrypt}" # Replace with your email address or create a .env file
We highly recommend creating a .env file in the PwnMachine directory and configuring an email address in it. It is used by Let's Encrypt to issue an SSL certificate.
LETS_ENCRYPT_EMAIL="your_email@domain.com"
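If it helps, the file can be created in one line from the pwn-machine directory (the address is a placeholder):
echo 'LETS_ENCRYPT_EMAIL="your_email@domain.com"' > .env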
Building
Build the project (the optional -d flag starts it in the background). Building can take several minutes, depending on your computer and network connection.
docker-compose up --build -d
Once everything is done on the Docker side, you should be able to access the PwnMachine at http://your_ip_address
Starting pm_powerdns-db_1 ... done
Starting pm_redis_1 ... done
Starting pm_powerdns_1 ... done
Starting pm_filebeat_1 ... done
Recreating traefik ... done
Recreating pm_manager_1 ... done
First run & configuration
Password and 2FA configuration
When you start the PwnMachine for the first time, we ask you to set a new password and configure 2FA authentication. This is mandatory to continue. You can use Google Authenticator, Authy, KeePass... anything you want that allows you to set up 2FA authentication.
After this, you are ready to use the PwnMachine!
How to setup DNS
Create a new DNS zone
First, we need to create a new DNS zone. Go to DNS > ZONES
Name: example.com
Nameserver: ns.example.com.
Postmaster: noreply.example.com.
Click the button to save the configuration and create this new DNS zone.
Create a new DNS rule
Zone: example.com.
Name: *.example.com.
Type: A
Add a new record:
your_ip_address
Click the + button
Click the button to save the configuration
Now you need to update the nameservers for your domain at your registrar with the one that has just been configured.
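Before and after switching the nameservers, a quick sanity check with dig can confirm the zone answers; a sketch using the placeholders above (example.com, ns.example.com):
# query the PwnMachine DNS server directly for a name under the wildcard record
dig A test.example.com @ns.example.com
# once the registrar change propagates, this should list ns.example.com
dig +short NS example.com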
How to expose a docker container on a subdomain and use HTTPS
For this example, we will create a new subdomain, manager.example.com, to expose the PwnMachine interface over HTTPS.
Go to DOCKER > CONTAINERS
Right click on pm_manager
Click on Expose via traefik
A new window should open:
Name: pm_manager-router
Rule: Host(`manager.example.com`) && PathPrefix(`/`)
Entrypoint: https
Select "Middlewares"
Service: pm_manager-service
---- TLS ----
Cert Resolver: Let's Encrypt staging - DNS
Domain: *.example.com
Now wait for DNS propagation; after a few minutes you should be able to connect to manager.example.com.
I was able to get it started and access it at http://127.0.0.1/,
but I got a bit confused when setting up the records.
I'm trying to set it up so I can access it over the web, i.e. c25.tech/payload.dtd.
c25.tech is my domain; I have it through Hostinger.
I hope that someone can help me out, thanks.

Read-Only Pods in OpenShift

I am trying to implement read-only filesystems for my pods running on OpenShift.
I use Jenkins to build and deploy these pods on OpenShift.
For Build, I use :
def builddemo = openshift.newBuild("${Image}~${Github}", "--name=${Name}", "--source-secret=githubsecret", "--to=${artifactorylocation}", "--push-secret=artifactorysecret", "--to-docker=true" , "--strategy=docker")
For Deploy, I use YAML files.
I would like to make my filesystem read-only.
I have already tried the document below:
https://github.com/nmasse-itix/OpenShift-Examples/tree/master/Read-Only-FS
But I don't have admin privileges on the cluster to change the security configuration. I only have admin privileges in my own namespace.
from server for: "read-only-scc.yaml": securitycontextconstraints.security.openshift.io "readonly-fs" is forbidden: User "USERNAME" cannot get securitycontextconstraints.security.openshift.io at the cluster scope: no RBAC policy matched
Is there any way I can do it?
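If the goal is only a read-only root filesystem, note that readOnlyRootFilesystem can be requested per container in the pod spec itself, without touching cluster-scoped SCC objects, as long as the SCC you already run under permits it. A minimal sketch (the pod name, image, and /tmp scratch volume are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo                  # hypothetical name
spec:
  containers:
    - name: app
      image: myregistry/myapp:latest   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true   # mount the container root filesystem read-only
      volumeMounts:
        - name: tmp                    # keep a writable scratch dir if the app needs one
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}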

Login Issue with WebLogic in Docker

I created a WebLogic generic container for version 12.1.3 based on the official Docker images from Oracle at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles
Command: buildDockerImage.sh -g -s -v 12.1.3
This creates the image oracle/weblogic:12.1.3-generic
Using a modified version of the sample Dockerfile for 1213-domain, I built the WebLogic container.
Note: I changed the base image to generic instead of developer.
docker build -t 1213-domain --build-arg ADMIN_PASSWORD="admin123" -f myDockerfile .
I pushed the built image to Amazon ECR and ran the container using AWS ECS. I configured the port mapping as 0:7001, set the memory soft limit to 1024, and changed nothing else in the default ECS settings. I have an application load balancer in front, which receives traffic on port 443 and forwards it to the containers. In the browser I get the WebLogic login page, but when I enter the username weblogic and the password admin123, I get the error:
Authentication Denied
Interestingly, when I go into the container and connect to WebLogic using WLST, it works fine.
[ec2-user@ip-10-99-103-141 ~]$ docker exec -it 458 bash
[oracle@4580238db23f mydomain]$ /u01/oracle/oracle_common/common/bin/wlst.sh
Initializing WebLogic Scripting Tool (WLST) ...
Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline> connect("weblogic","admin123","t3://localhost:7001")
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "mydomain".
Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.
wls:/mydomain/serverConfig>
Any hints on what can be going wrong?
Very interesting indeed. :) Are you sure there are no special characters when you enter the username and password? Try typing them instead of copying.
Also, as a backup, since you are able to log in with WLST, you can try two options:
resetting the current weblogic password, or adding a new username and password.
The links below will help (a sketch of the procedure follows them):
http://middlewarebuzz.blogspot.com/2013/06/weblogic-password-reset.html
or
http://middlewaremagic.com/weblogic/?p=4962
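For reference, the reset procedure those links describe boils down to regenerating the default authenticator's seed. A sketch, with paths assumed from the 1213-domain sample image (adjust to your DOMAIN_HOME and WL_HOME):
# 1. Stop the admin server first.
# 2. Regenerate DefaultAuthenticatorInit.ldift with the new credentials:
cd /u01/oracle/user_projects/domains/mydomain/security
java -cp /u01/oracle/wlserver/server/lib/weblogic.jar \
  weblogic.security.utils.AdminAccount weblogic NewPassword123 .
# 3. Remove the cached embedded-LDAP data so the new seed is read on startup:
rm -rf /u01/oracle/user_projects/domains/mydomain/servers/AdminServer/data/ldap
# 4. Update security/boot.properties with the new credentials and restart.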

Docker-Machine does not work with a Google Cloud service account

I create a Google Compute Engine instance with a service account:
gcloud --project my-proj compute instances create test1 \
--image-family "debian-9" --image-project "debian-cloud" \
--machine-type "g1-small" --network "default" --maintenance-policy "MIGRATE" \
--service-account "gke-build-robot@myproj-184015.iam.gserviceaccount.com" \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--tags "gitlab-runner" \
--boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "$RESOURCE_NAME" \
--metadata register_token=mytoken,config_bucket=gitlab_config,runner_name=test1,gitlab_uri=myuri,runner_tags=backend \
--metadata-from-file "startup-script=startup-scripts/prepare-runner.sh"
I log in to the instance through SSH: gcloud compute --project "myproj" ssh --zone "europe-west1-b" "gitlab-shared-runner-pool"
After installing and configuring docker-machine, I try to create an instance:
docker-machine create --driver google --google-project myproj test2
Running pre-create checks...
(test2) Check that the project exists
(test2) Check if the instance already exists
Creating machine...
(test2) Generating SSH Key
(test2) Creating host...
(test2) Opening firewall ports
(test2) Creating instance
(test2) Waiting for Instance
Error creating machine: Error in driver during machine creation: Operation error: {EXTERNAL_RESOURCE_NOT_FOUND The resource '1045904521672-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found. []}
1045904521672-compute@developer.gserviceaccount.com is my default account.
I don't understand why it is used, since the activated account is gke-build-robot@myproj-184015.iam.gserviceaccount.com:
gcloud config list
[core]
account = gke-build-robot@myproj-184015.iam.gserviceaccount.com
disable_usage_reporting = True
project = novaposhta-184015
Your active configuration is: [default]
gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* gke-build-robot@myproj-184015.iam.gserviceaccount.com
Can someone explain to me what I am doing wrong?
There was a double problem.
First of all, docker-machine can't work with a specific service account, at least in versions 0.12 and 0.13.
The docker-machine google driver only has a scopes parameter and cannot be given a specific account.
So the instance where docker-machine was installed works fine with the specified service account, but any instance created by docker-machine must use the default service account.
During debugging I had disabled that default account,
and this error was the result.
A similar issue (bosh-google-cpi-release issue 144) suggests the message is misleading:
This error message is unclear, particularly because the credentials which also need to be specified in the manifest may be associated with another account altogether.
The default service_account for the bosh-google-cpi-release is set to "default" if it is not proactively set by the bosh manifest, so this will happen anytime you use service_scopes instead of a service_account.
While you are not using bosh-google-cpi-release, the last sentence made me double-check the gcloud reference page, in particular gcloud compute instances create:
A service account is an identity attached to the instance. Its access tokens can be accessed through the instance metadata server and are used to authenticate applications on the instance.
The account can be either an email address or an alias corresponding to a service account. You can explicitly specify the Compute Engine default service account using the 'default' alias.
If not provided, the instance will get project's default service account.
It is as if your service account is either ignored or incorrect (and docker-machine falls back to the project's default one).
See "Creating and Enabling Service Accounts for Instances" to double-check its value:
Usually, the service account's email is derived from the service account ID, in the format:
[SERVICE-ACCOUNT-NAME]#[PROJECT_ID].iam.gserviceaccount.com
Or try setting the service scopes and account first, as sketched below.
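For completeness, a sketch of what the docker-machine google driver (0.12/0.13) does let you set: scopes, but no specific service account, so the project's default service account must exist and be enabled:
docker-machine create --driver google \
  --google-project myproj \
  --google-zone europe-west1-b \
  --google-scopes "https://www.googleapis.com/auth/cloud-platform" \
  test2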

Docker cannot access registry from OpenShift

Here is my whole scenario.
I have a RHEL 7.1 VMware image with the corporate proxy properly configured; accessing resources over http or https works properly.
I installed docker-engine and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which will print:
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/
(with real values, of course).
Pulling and running docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries, and started setting up things as per the walkthrough page on github:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting OpenShift seems to work, as I can:
* log in via the OpenShift CLI
* create a new project
* even access the web console
But when I try to create an app in the project (also via the CLI):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access this endpoint (without authentication, though) via the browser on the VM or via wget.
Hence I believe Docker fails to pick up the proxy settings. After some searching, I also suspect there are iptables settings missing. Referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know if I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property.
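registry-1.docker.io is reached over HTTPS (port 443), so if only HTTP_PROXY is set in the systemd drop-in, Docker will try the TLS connection directly and get it refused. A sketch of the drop-in with both variables, using placeholder proxy values (the 172.30.0.0/16 NO_PROXY entry assumes OpenShift's default service network):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"
Environment="NO_PROXY=localhost,127.0.0.1,172.30.0.0/16"
Then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker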
