How to permanently save the path of the identity file in pgAdmin4 - docker

Context
I'm using this pgAdmin4 docker image: https://hub.docker.com/r/dpage/pgadmin4/ in its latest state (version 6.9 at the time of writing).
I am currently wondering how to permanently set the path of the identity file in the servers.json file.
This identity file is mounted as a bind mount in my compose file:
- ./id_ed25519:/var/lib/pgadmin/storage/<user_domain>/id_ed25519
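For reference, this is roughly how that bind mount sits in my compose file (a minimal sketch; the service name is illustrative):
services:
  pgadmin:
    image: dpage/pgadmin4:6.9
    volumes:
      - ./id_ed25519:/var/lib/pgadmin/storage/<user_domain>/id_ed25519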
For the moment, if I set this file path within the running application and then export that definition to a test_export.json file according to the documentation, I end up with the following server definition, which curiously doesn't contain any reference to the identity file path:
# cat test_export.json
{
    "Servers": {
        "1": {
            "Name": "local.pg9",
            "Group": "SERVER",
            "Host": "localhost",
            "Port": 5432,
            "MaintenanceDB": "postgres",
            "Username": "postgres",
            "SSLMode": "prefer",
            "PassFile": "/.pgpass",
            "UseSSHTunnel": 1,
            "TunnelHost": "my-test-server.org",
            "TunnelPort": "22",
            "TunnelUsername": "vpsroot",
            "TunnelAuthentication": 1
        }
    }
}
I also "randomly" tried to guess it by adding "TunnelIdentityfile": "/id_ed25519", but it didn't work.
And I cannot find any information about that file in the documentation at https://www.pgadmin.org/docs/pgadmin4/6.5/import_export_servers.html#json-format
Question
How could I save the identity file path (ideally in the servers.json file) so that I don't have to manually set it up each time I reboot the pgAdmin4 container?

The identity file and passwords are not exported by default for security reasons, and they will not be restored either. If you wish to have this, you can raise a feature request on pgAdmin: https://redmine.postgresql.org/projects/pgadmin4/issues/new
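That said, the rest of the definition can at least be preloaded: the dpage/pgadmin4 image imports a server definition list mounted at /pgadmin4/servers.json on first startup. A sketch combining this with the identity-file bind mount from the question; the tunnel identity file itself still has to be selected by hand in the UI:
# partial-workaround sketch, assuming the /pgadmin4/servers.json import mechanism
services:
  pgadmin:
    image: dpage/pgadmin4:6.9
    volumes:
      - ./servers.json:/pgadmin4/servers.json
      - ./id_ed25519:/var/lib/pgadmin/storage/<user_domain>/id_ed25519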

Related

Run buildah within gitlab-ci

I want to use buildah from gitlab-ci, in order to build an image, run a container from it and do some tests against it.
My current gitlab-ci is:
tests:
  tags:
    - docker
  image: quay.io/buildah/stable
  stage: test
  variables:
    STORAGE_DRIVER: "vfs"
    BUILDAH_FORMAT: "docker"
    BUILDAH_ISOLATION: "rootless"
  only:
    refs:
      - merge_requests
    changes:
      - "**/*"
  script:
    - buildah info --debug
    - buildah unshare docker/test/run.sh
My runner is a private GitLab runner; I don't want to change its configuration (so as not to break other CI jobs).
The content of run.sh is:
#!/usr/bin/env bash
set -euo pipefail
container=$(buildah --ulimit nofile=8192 --name my-container from phusion/baseimage:bionic-1.0.0-amd64)
The error is:
level=warning msg="error reading allowed ID mappings: error reading subuid mappings for user \"root\" and subgid mappings for group \"root\": No subuid ranges found for user \"root\" in /etc/subuid" level=warning msg="Found no UID ranges set aside for user \"root\" in /etc/subuid." level=warning msg="Found no GID ranges set aside for user \"root\" in /etc/subgid." No buildah sali-container already exists... Package Sali Creating sali-container Completed short name "phusion/baseimage" with unqualified-search registries (origin: /etc/containers/registries.conf) Getting image source signatures Copying blob
sha256:36505266dcc64eeb1010bd2112e6f73981e1a8246e4f6d4e287763b57f101b0b Copying blob
sha256:1907967438a7f3c5ff54c8002847fe52ed596a9cc250c0987f1e2205a7005ff9 Copying blob
sha256:23884877105a7ff84a910895cd044061a4561385ff6c36480ee080b76ec0e771 Copying blob
sha256:2910811b6c4227c2f42aaea9a3dd5f53b1d469f67e2cf7e601f631b119b61ff7 Copying blob
sha256:bc38caa0f5b94141276220daaf428892096e4afd24b05668cd188311e00a635f Copying blob
sha256:53c90fd859186b7b770d65adcb6ae577d4c61133f033e628530b1fd8dc0af643 Copying blob
sha256:d039079bb3a9bf1acf69e7c00db0e6559a86148c906ba5dab06b67c694bbe87c Copying config
sha256:32c929dd2961004079c1e35f8eb5ef25b9dd23f32bc58ac7eccd72b4aa19f262 Writing manifest to image destination Storing signatures level=error msg="Error while applying layer: ApplyLayer
exit status 1 stdout: stderr: potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument" 4 errors occurred while pulling:
* Error initializing source docker://registry.fedoraproject.org/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.fedoraproject.org/phusion/baseimage: manifest unknown: manifest unknown
* Error initializing source docker://registry.access.redhat.com/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.access.redhat.com/phusion/baseimage: name unknown: Repo not found
* Error initializing source docker://registry.centos.org/phusion/baseimage:bionic-1.0.0-amd64: Error reading manifest bionic-1.0.0-amd64 in registry.centos.org/phusion/baseimage: manifest unknown: manifest unknown
* Error committing the finished image: error adding layer with blob "sha256:23884877105a7ff84a910895cd044061a4561385ff6c36480ee080b76ec0e771": ApplyLayer exit status 1 stdout: stderr: potentially insufficient UIDs or GIDs available in user namespace (requested 0:42 for /etc/gshadow): Check /etc/subuid and /etc/subgid: lchown /etc/gshadow: invalid argument level=error msg="exit status 125" level=error msg="exit status 125"
The result of buildah info --debug:
{
    "debug": {
        "buildah version": "1.18.0",
        "compiler": "gc",
        "git commit": "",
        "go version": "go1.15.2"
    },
    "host": {
        "CgroupVersion": "v1",
        "Distribution": {
            "distribution": "fedora",
            "version": "33"
        },
        "MemFree": 9021378560,
        "MemTotal": 15768850432,
        "OCIRuntime": "runc",
        "SwapFree": 0,
        "SwapTotal": 0,
        "arch": "amd64",
        "cpus": 4,
        "hostname": "runner-cvBUQadt-project-2197143-concurrent-0",
        "kernel": "4.14.83+",
        "os": "linux",
        "rootless": false,
        "uptime": "6391h 28m 15.45s (Approximately 266.29 days)"
    },
    "store": {
        "ContainerStore": {
            "number": 0
        },
        "GraphDriverName": "vfs",
        "GraphOptions": [
            "vfs.imagestore=/var/lib/shared"
        ],
        "GraphRoot": "/var/lib/containers/storage",
        "GraphStatus": {},
        "ImageStore": {
            "number": 0
        },
        "RunRoot": "/var/run/containers/storage"
    }
}
I read other posts about these errors and arrived at this configuration, which is still not enough. I chose buildah thinking it would be easy to use from CI, since it is supposed to run rootless, but this is a real nightmare... I am a poor lonesome developer and not a sysadmin; I don't understand how to set up Linux for buildah. Can somebody help me?
Buildah is going to need to run as root, or within a user namespace with sufficient UIDs to install files with different UIDs.
This looks like buildah decided for some reason that it should run within a user namespace and then did not find subordinate ID ranges for root (the log shows none in /etc/subuid or /etc/subgid). This usually happens when you do not run with enough privileges.
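One illustrative direction under that diagnosis is to grant subordinate ID ranges to the user that runs buildah on the runner; a sketch, where the user (root, as the log suggests) and the 100000:65536 range are illustrative values:
# grant subordinate UID/GID ranges so buildah can map additional IDs in a user namespace
echo "root:100000:65536" >> /etc/subuid
echo "root:100000:65536" >> /etc/subgid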

How can I use TypeORM with better-sqlite3

How can I use TypeORM with better-sqlite3?
In the official documentation, there is a section for better-sqlite3.
I already installed typeorm@latest and typeorm@next, but there is no option for better-sqlite3 yet.
If I try to force-initialize it, I get the following error:
MissingDriverError: Wrong driver: "better-sqlite3" given. Supported drivers are: "cordova", "expo", "mariadb", "mongodb", "mssql", "mysql", "oracle", "postgres", "sqlite", "sqljs", "react-native", "aurora-data-api", "aurora-data-api-pg".
The better-sqlite3 driver was added in typeorm@0.2.26.
1. Export your existing database.
2. Update typeorm to 0.2.26 or higher.
3. Install the package better-sqlite3 (sqlite3 can be uninstalled).
4. In your ormconfig.json, change the type to "type": "better-sqlite3" (see the sketch below).
5. Import the database exported in step 1.
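A minimal ormconfig.json sketch for step 4; the database path and entity glob are illustrative, not from the question:
{
    "type": "better-sqlite3",
    "database": "db.sqlite3",
    "entities": ["src/entity/**/*.ts"],
    "synchronize": false
}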

Sharing files with ECS and EFS

Could you help me, please?
I'm trying to configure an ECS cluster to share files using EFS but I'm facing the following issue:
level=info time=2020-03-02T17:30:27Z msg="TaskHandler: Sending task change: TaskChange:
[arn:aws:ecs:us-east-1:959242800104:task/74086a36-c405-4248-8475-3234b011bee8 -> STOPPED, Known
Sent: NONE, PullStartedAt: 2020-03-02 17:30:27.661062367 +0000 UTC m=+3131.201879282,
PullStoppedAt: 2020-03-02 17:30:27.744492758 +0000 UTC m=+3131.285309673, ExecutionStoppedAt:
2020-03-02 17:30:27.913073824 +0000 UTC m=+3131.453890739,
arn:aws:ecs:us-east-1:959242800104:task/74086a36-c405-4248-8475-3234b011bee8 redmine -> STOPPED, Reason
CannotCreateContainerError: Error response from daemon: failed to mount local volume: mount
:/mnt/efs/redmine:/var/lib/docker/volumes/ecs-redmine-22-attachments-cee2f0e7e0ebc5f55000/_data,
data: addr=10.0.0.127,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport:
no such file or directory, Known Sent: NONE] sent: false" module=task_handler_types.go
If I only declare a volume inside my ECS task, the container starts normally, but if I try to map the outside volume to a folder in the container, the issue happens.
I followed this tutorial: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html and it seems the problem isn't in the security groups but in the container itself.
I'm using the alpine version of Redmine.
Here are the config snippets:
...
"mountPoints": [
    {
        "readOnly": null,
        "containerPath": "/usr/src/redmine/files",
        "sourceVolume": "attachments"
    }
],
...
"volumes": [
    {
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-xxxxx",
            "rootDirectory": "/mnt/efs/redmine"
        },
        "name": "attachments",
        "host": null,
        "dockerVolumeConfiguration": null
    }
],
Thanks in advance.
The log says "no such file or directory": make sure the directory on EFS exists before using it.
Other considerations:
You cannot use "efsVolumeConfiguration" with ECS Fargate; currently it works only for ECS on EC2 (Fargate support is in the works).
I followed the links below in order to solve my problem. It turned out that EFS is not ready to use from ECS directly:
I had to mount EFS inside the EC2 instance, and after that I had access to it from the docker container (see the sketch after the links).
https://gist.github.com/duluca/ebcf98923f733a1fdb6682f111b1a832#update-your-cloud-formation-template
https://xiaoyunyang.github.io/post/a-complete-guide-to-deploying-your-web-app-to-amazon-web-service/#set-up-efs-with-your-containers
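For illustration, a sketch of that workaround on the EC2 container instance, reusing the NFS options and placeholders from the snippets above (the EFS DNS name is hypothetical):
# mount EFS on the host, then make sure the rootDirectory actually exists
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-xxxxx.efs.us-east-1.amazonaws.com:/ /mnt/efs
sudo mkdir -p /mnt/efs/redmine   # fixes the "no such file or directory" from the task log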

Using BigQuery plugin in Grafana

I have some issues with the format of the provisioning file. I have a service account file that looks like this:
{
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "XXXXX_my_private_key_id_XXXXXXX",
    "private_key": "-----BEGIN PRIVATE KEY-----\nXXXXXXX_my_private_key___\nXXXXX_another_line_here_XXXXX\nXXXXXX_final_line_XXXXXX==\n-----END PRIVATE KEY-----\n",
    "client_email": "my-project@company.iam.gserviceaccount.com",
    "client_id": "123456",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-project%40company.iam.gserviceaccount.com"
}
And the provisioning file, which I put in /etc/grafana/provisioning/datasources/all.yaml, looks like this:
apiVersion: 1
# list of datasources to insert/update depending
# on what's available in the database
datasources:
  # <string, required> name of the datasource. Required
  - name: bigquery-project
    type: doitintl-bigquery-datasource
    access: proxy
    isDefault: true
    jsonData:
      authenticationType: jwt
      clientEmail: my-project@company.iam.gserviceaccount.com
      defaultProject: my-default-project
      tokenUri: https://accounts.google.com/o/oauth2/token
    secureJsonData:
      privateKey: "-----BEGIN PRIVATE KEY-----\nXXXXXXX_my_private_key___\nXXXXX_another_line_here_XXXXX\nXXXXXX_final_line_XXXXXX==\n-----END PRIVATE KEY-----\n"
    version: 2
    readOnly: false
But when I clicked on save and test, I got some errors. I think I mis-formatted the provisioning file.
I tried to upload the service account file using the UI and the test passed; I was able to query BQ. However, when I did this, I couldn't find any file in /etc/grafana/provisioning/datasources to use as an example.
I'm running a custom Grafana image in a docker container.
### file system hierarchy of the project
.
├── Dockerfile
└── provisioning
├── dashborads
└── datasources
└── all.yaml
### Dockerfile
ARG GRAFANA_VERSION=6.5.3
FROM grafana/grafana:$GRAFANA_VERSION
ENV GF_AUTH_DISABLE_LOGIN_FORM "true"
ENV GF_AUTH_ANONYMOUS_ENABLED "true"
ENV GF_AUTH_ANONYMOUS_ORG_ROLE "Admin"
ENV GF_INSTALL_PLUGINS "doitintl-bigquery-datasource 1.0.4"
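The image is presumably built with something like the following (the tag matches the run command below):
# build the custom image from the Dockerfile above
docker build -t massy/custom-grafana .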
The way I'm running the container
docker run -d -p 3000:3000 -v $(pwd)/provisioning:/etc/grafana/provisioning massy/custom-grafana
I'm providing the provisioning file via a volume.
What's wrong with the provisioning file?
When we add a datasource in Grafana, isn't the provisioning file updated automatically? (And if it doesn't exist yet, will it be created?)
How can I get the logs when I test the BigQuery plugin?
Edit
When I tried to add a dummy SQL query in the "new dashboard" section, this is what I got:
lvl=eror msg="Failed to get access token" logger=data-proxy-log error="private key should be a PEM or plain PKCS1 or PKCS8; parse error: asn1: structure error: tags don't match (16 vs {class:0 tag:28 length:110 isCompound:true}) {optional:false explicit:false application:false private:false defaultValue:<nil> tag:<nil> stringType:0 timeType:0 set:false omitEmpty:false} pkcs1PrivateKey #2"
t=2020-01-22T10:02:18+0000 lvl=info msg=Requesting logger=data-proxy-log url=https://www.googleapis.com/bigquery/v2/projects/undefined/queries
t=2020-01-22T10:02:18+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=1 uname= method=POST path=/api/datasources/proxy/1/bigquery/v2/projects/undefined/queries status=401 remote_addr=172.17.0.1 time_ms=55 size=304 referer="http://localhost:3000/dashboard/new?panelId=2&edit&fullscreen&orgId=1&gettingstarted"
I got these by running docker logs on the container.
This is the correct format for the provisioning file
apiVersion: 1
datasources:
  - name: bigquery-project
    type: doitintl-bigquery-datasource
    access: proxy
    isDefault: true
    jsonData:
      authenticationType: jwt
      clientEmail: my-project@company.iam.gserviceaccount.com
      defaultProject: my-default-project
      tokenUri: https://accounts.google.com/o/oauth2/token
    secureJsonData:
      privateKey: |
        -----BEGIN PRIVATE KEY-----
        XXXXXXX_my_private_key___
        XXXXX_another_line_here_XXXXX
        XXXXXX_final_line_XXXXXX
        -----END PRIVATE KEY-----
    version: 2
    readOnly: false
There is a difference between the two provisioning examples:
https://grafana.com/grafana/plugins/doitintl-bigquery-datasource?version=1.0.4
https://github.com/doitintl/bigquery-grafana#example-of-provisioning-a-file
The one on GitHub has the correct format: the private key is written as a YAML block scalar (privateKey: |) with real line breaks, rather than as a single-line quoted string with \n escapes.

Add API present on a local docker container to Kong

I'm developing a small project with microservices. I want to configure Kong as the API gateway for my setup, but I'm not able to forward incoming requests from Kong to my services.
For example, I have a docker container running locally on address 172.18.0.15 and port 8006. I can access the API just fine at 172.18.0.15:8006/compute, and I would like to configure Kong so that I can make the same call at localhost:8001/compute. Am I missing something? Should "api" be present in my URL in order for Kong to discover my API? This is the configuration that I've added to Kong: I basically just added all REST methods and set url:port as the upstream_url field.
{
    "data": [
        {
            "created_at": 1514479841682,
            "http_if_terminated": false,
            "https_only": false,
            "id": "9cf925a5-2c27-4dbc-ae2a-e70fba94ef53",
            "methods": [
                "GET",
                "POST",
                "PUT",
                "DELETE"
            ],
            "name": "compute",
            "preserve_host": false,
            "retries": 5,
            "strip_uri": true,
            "upstream_connect_timeout": 60000,
            "upstream_read_timeout": 60000,
            "upstream_send_timeout": 60000,
            "upstream_url": "http://172.18.0.15:8006"
        }
    ],
    "total": 1
}
This is not working for me: when I try to access the API through localhost, the request times out with a 499 error. Any help is highly appreciated.
EDIT: I've tried adding my API by setting the URI field instead. I've inspected using docker logs kong, and it looks like Kong is definitely accessing the correct address but is timing out for some unknown reason:
2017/12/28 17:55:25 [error] 47#0: *103 upstream timed out (110: Connection timed out) while connecting to upstream, client: 172.17.0.1, server: kong, request: "GET /compute?origin=20,20&destination=20,20 HTTP/1.1", upstream: "http://172.18.0.15:8006/compute?origin=20,20&destination=20,20", host: "localhost:8000"
However, if I access the same address using curl it works just fine:
# curl http://172.18.0.15:8006/compute?origin=20,20\&destination=20,20
[]
I highly doubt anybody else will encounter the same issue, but the problem was in my Docker configuration: I forgot to put the API gateway on the same subnet as the services. Fixing this solved my issue.
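For illustration, a compose-style sketch of that fix, with all names invented: both containers join the same user-defined network, so Kong can reach the upstream service.
# illustrative sketch; service names, images, and network name are assumptions
services:
  kong:
    image: kong
    networks:
      - gateway-net
  compute:
    image: my-compute-image   # hypothetical service image
    networks:
      - gateway-net
networks:
  gateway-net: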
