LDAP authentication doesn't work on Gitea custom image - docker

I'm developing a Docker infrastructure with Ansible and Docker Compose, and I have a problem with LDAP authentication on my custom Gitea image.
This is the error I get in the Gitea logs when I try to log in as one of the users that exist in the LDAP directory:
Do you think this is a network problem, or does the LDAP server simply not find the user?
The restoration of the LDIF backup works as expected, because it adds the user I'm trying to log in with:
Also, when I manually create a user in Gitea via the web interface, ansible-ldap shows up among the authentication sources.
What could be the solution to this problem?
This is my configuration:
app.ini (of Gitea)
[DEFAULT]
RUN_USER = git
RUN_MODE = prod
...
[database]
PATH = /data/gitea/gitea.db
DB_TYPE = postgres
HOST = db:5432
NAME = gitea
USER = gitea
PASSWD = gitea
LOG_SQL = false
...
Dockerfile
FROM gitea/gitea:1.16.8
RUN apk add sudo
RUN chmod 777 /home
COPY entrypoint /usr/bin/custom_entrypoint
COPY gitea-cli.sh /usr/bin/gitea-cli.sh
ENTRYPOINT /usr/bin/custom_entrypoint
entrypoint
#!/bin/sh
set -e
while ! nc -z $GITEA__database__HOST; do sleep 1; done;
chown -R 1000:1000 /data/gitea/conf
if ! [ -f /data/gitea.initialized ]; then
  gitea-cli.sh migrate
  gitea-cli.sh admin auth add-ldap --name ansible-ldap --host 127.0.0.1 --port 1389 --security-protocol unencrypted --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it --admin-filter "(objectClass=giteaAdmin)" --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" --username-attribute uid --firstname-attribute givenName --surname-attribute surname --email-attribute mail --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it --bind-password admin --allow-deactivate-all
  touch /data/gitea.initialized
fi
exec /usr/bin/entrypoint
gitea-cli.sh
#!/bin/sh
echo 'Started gitea-cli'
USER=git HOME=/data/git GITEA_WORK_DIR=/var/lib/gitea sudo -E -u git gitea --config /data/gitea/conf/app.ini "$@"
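As an aside, gitea-cli.sh forwards its arguments to the gitea binary; the correct forwarding form is "$@" ("$#" would only expand to the argument count). A tiny sketch of the difference, using a hypothetical forward helper:

```shell
#!/bin/sh
# "$@" expands to each argument as a separate word, preserving spaces;
# "$#" is just the number of arguments.
forward() {
  printf '[%s]' "$@"   # prints each forwarded argument in brackets
}
result=$(forward admin auth "add ldap")
echo "$result"   # [admin][auth][add ldap]
```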
docker-compose.yaml
db:
  image: postgres:14.3
  restart: always
  hostname: db
  environment:
    POSTGRES_DB: gitea
    POSTGRES_USER: gitea
    POSTGRES_PASSWORD: gitea
  ports:
    - 5432:5432
  volumes:
    - /data/postgres:/var/lib/postgresql/data
  networks:
    - vcc
openldap:
  image: bitnami/openldap:2.5
  ports:
    - 1389:1389
    - 1636:1636
  environment:
    BITNAMI_DEBUG: "true"
    LDAP_LOGLEVEL: 4
    LDAP_ADMIN_USERNAME: admin
    LDAP_ADMIN_PASSWORD: admin
    LDAP_ROOT: dc=ldap,dc=vcc,dc=unige,dc=it
    LDAP_CUSTOM_LDIF_DIR: /bitnami/openldap/backup
    LDAP_CUSTOM_SCHEMA_FILE: /bitnami/openldap/schema/schema.ldif
  volumes:
    - /data/openldap/:/bitnami/openldap
  networks:
    - vcc
gitea:
  image: 127.0.0.1:5000/custom_gitea:51
  restart: always
  hostname: git.localdomain
  build: /data/gitea/custom
  ports:
    - 4000:4000
    - 222:22
  environment:
    USER: git
    USER_UID: 1000
    USER_GID: 1000
    GITEA__database__DB_TYPE: postgres
    GITEA__database__HOST: db:5432
    GITEA__database__NAME: gitea
    GITEA__database__USER: gitea
    GITEA__database__PASSWD: gitea
    GITEA__security__INSTALL_LOCK: "true"
    GITEA__security__SECRET_KEY: XQolFkmSxJWhxkZrkrGbPDbVrEwiZshnzPOY
  volumes:
    - /data/gitea:/data
    - /etc/timezone:/etc/timezone:ro
    - /etc/localtime:/etc/localtime:ro
    - /data/gitea/app.ini:/data/gitea/conf/app.ini
  deploy:
    mode: global
  depends_on:
    - db
    - openldap
    - openldap_admin
  networks:
    - vcc

The problem was the address 127.0.0.1 in the --host option in the entrypoint file; changing it to openldap (the name of the service in the docker-compose file) made it work.
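For reference, a sketch of the corrected add-ldap call from the entrypoint (only the --host value changes; every other flag is as in the question):

```shell
gitea-cli.sh admin auth add-ldap \
  --name ansible-ldap \
  --host openldap \
  --port 1389 \
  --security-protocol unencrypted \
  --user-search-base dc=ldap,dc=vcc,dc=unige,dc=it \
  --admin-filter "(objectClass=giteaAdmin)" \
  --user-filter "(&(objectClass=inetOrgPerson)(uid=%s))" \
  --username-attribute uid \
  --firstname-attribute givenName \
  --surname-attribute surname \
  --email-attribute mail \
  --bind-dn cn=admin,dc=ldap,dc=vcc,dc=unige,dc=it \
  --bind-password admin \
  --allow-deactivate-all
```

Inside a container, 127.0.0.1 refers to the container itself, so the LDAP connection has to target the openldap service name, which Docker's embedded DNS resolves on the shared vcc network.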

Related

Running integration tests in Github actions: issues with connecting with postgres

I have some integration tests that, in order to run successfully, require a running postgres database, set up via docker-compose, and my go app running from main.go. Here is my docker-compose:
version: "3.9"
services:
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-db
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d
networks:
  default:
    driver: bridge
volumes:
  data:
    driver: local
and my Github Actions are as follows:
jobs:
  unit:
    name: Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12.5
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: my-db
        ports:
          - 5432:5432
    env:
      GOMODCACHE: "${{ github.workspace }}/.go/mod/cache"
      TEST_RACE: true
    steps:
      - name: Initiate Database
        run: psql -f initdb/init.sql postgresql://postgres:password@localhost:5432/my-db
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
      - name: Authenticate with GCP
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: ${{ secrets.GCP_ACTIONS_SECRET }}
      - name: Configure Docker
        run: |
          gcloud auth configure-docker "europe-docker.pkg.dev,gcr.io,eu.gcr.io"
      - name: Set up Docker BuildX
        uses: docker/setup-buildx-action@v1
      - name: Start App
        run: |
          VERSION=latest make images
          docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-db?sslmode=disable' --name='app' image/app
      - name: Tests
        env:
          POSTGRES_DB_URL: //postgres:password@localhost:5432/my-db?sslmode=disable
          GOMODCACHE: ${{ github.workspace }}/.go/pkg/mod
        run: |
          make test-integration
          docker stop app
My tests run just fine locally, firing off the docker-compose with docker-compose up and running the app from main.go. However, in GitHub Actions I am getting the following error:
failed to connect to `host=/tmp user=nonroot database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory
What am I missing? Thanks
I think this code has more than one problem.
Problem one:
In your workflow I don't see you run docker-compose up, so I would assume that Postgres is not running.
Problem two:
is in this line: docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-app?sslmode=disable' --name='app' image/app
You point the Postgres host to localhost, which works on your local machine, since there localhost is your local computer. However, because you use docker run, you are not running this on your local machine but inside a Docker container, where localhost points to the container itself.
Possible solution for both:
Since you are already using docker-compose, I suggest you also add your test web server there.
Change your docker-compose file to:
version: "3.9"
services:
  webapp:
    build: image/app
    environment:
      - POSTGRES_DB_URL=//postgres:password@postgres:5432/my-app?sslmode=disable
    ports:
      - "3000:3000"
    depends_on:
      - "postgres"
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-app
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d
networks:
  default:
    driver: bridge
volumes:
  data:
    driver: local
If you now run docker-compose up, both services will be available, and it should work. Though I am not a GitHub Actions expert, so I might have missed something. At least this way you can run your tests locally the same way as in CI, which I always see as a big plus.
What you are missing is setting up the actual Postgres client inside the GitHub Actions runner (that is why there is no psql tool to be found).
Set it up as a step:
- name: Install PostgreSQL client
  run: |
    sudo apt-get update
    sudo apt-get install --yes postgresql-client
Apart from that, if you run everything through docker-compose, you will need to wait for Postgres to be up and running (healthy and accepting connections).
Consider the following docker-compose:
version: '3.1'
services:
  api:
    build: .
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - RUN_UP_MIGRATION=true
      - PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable
    command: ./entry
  db:
    image: postgres:9.5-alpine
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
There are a couple of things you need to notice. First of all, in the environment section of the api we have PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable, which is the connection string to the db being passed in as an env variable. Notice that the host is host.docker.internal.
Besides that, we have command: ./entry in the api section. The entry file contains the following #!/bin/ash script:
#!/bin/ash
NOT_READY=1
while [ $NOT_READY -gt 0 ]   # <- loop that waits till postgres is ready to accept connections
do
  pg_isready --dbname=gotstockapi --host=host.docker.internal --port=5432 --username=gotstock_user
  NOT_READY=$?
  sleep 1
done;
./gotstock-api   # <- actually executes the build of the api
sleep 10
go test -v ./it   # <- executes the integration-tests
And finally, in order for the psql client to work in the above script, the docker file of the api is looking like this:
# syntax=docker/dockerfile:1
FROM golang:1.19-alpine3.15
RUN apk add build-base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN apk add postgresql-client
RUN go build -o gotstock-api
EXPOSE 8080
Notice RUN apk add postgresql-client which installs the client.
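As an alternative to polling from an entry script, Compose can also gate startup on a health check. A minimal sketch along the lines of the file above (assumes a Compose version that supports depends_on conditions, i.e. file format 2.1+ or the modern Compose spec):

```yaml
services:
  db:
    image: postgres:9.5-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U root"]
      interval: 2s
      timeout: 5s
      retries: 15
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy   # api starts only once pg_isready succeeds
```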
Happy hacking! =)

Rails Unable to make API request inside docker SocketError (Failed to open TCP connection to xxx:443 (getaddrinfo: Name does not resolve))

I'm really confused about why I'm unable to make API requests to any site. For example, I want to run:
HTTParty.get("https://fakerapi.it/api/v1/persons")
It runs well on my machine (without Docker).
But if I run it inside Docker, I get:
SocketError (Failed to open TCP connection to fakerapi.it:443 (getaddrinfo: Name does not resolve))
It happens not only for this site, but for all sites.
So I guess there's something wrong with my Docker settings, but I'm not sure where to start.
I'm new to Docker, so any advice means a lot to me.
Below is my docker-compose.yaml
version: '3.4'
services:
  db:
    image: mysql:8.0.17 # using the official mysql image from docker hub
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3307:3306"
  backend:
    build:
      context: .
      dockerfile: backend-dev.Dockerfile
    ports:
      - "3001:3001"
    volumes:
      # the host repos are mapped to the container's repos
      - ./backend:/my-project
      # volume to cache gems
      - bundle:/bundle
    depends_on:
      - db
    stdin_open: true
    tty: true
    env_file: .env
    command: /bin/sh -c "rm -f tmp/pids/server.pid && rm -f tmp/pids/delayed_job.pid && bundle exec bin/delayed_job start && bundle exec rails s -p 3001 -b '0.0.0.0'"
  frontend:
    build:
      context: .
      dockerfile: frontend-dev.Dockerfile
    ports:
      - "3000:3000"
    links:
      - "backend:bb"
    depends_on:
      - backend
    volumes:
      # the host repos are mapped to the container's repos
      - ./frontend/:/my-project
    # env_file: .env
    environment:
      - NODE_ENV=development
    command: /bin/sh -c "yarn dev --port 3000"
volumes:
  db_data:
    driver: local
  bundle:
    driver: local
How I try to run:
docker-compose run backend /bin/sh
rails c
HTTParty.get("https://fakerapi.it/api/v1/persons")
Any idea how can I fix this?

Keycloak fails to import exported realm running in docker

I started my keycloak service using the command:
docker run -d -p 8180:8080 -e KEYCLOAK_USER=admin -e \
KEYCLOAK_PASSWORD=admin -v $(pwd):/tmp --name kc \
jboss/keycloak:8.0.2
I created a new realm on Keycloak, only giving it a name, nothing else. I exported it by running:
docker exec -it kc keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=100 -Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=my_realm \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/tmp/my_realm.json
I now have the realm in the my_realm.json file. I then start a new Keycloak instance, using docker compose to set up my entire test environment. I build a new Docker image with this Dockerfile:
FROM jboss/keycloak:8.0.2
COPY my_realm.json /tmp/my_realm.json
ENV KEYCLOAK_IMPORT /tmp/my_realm.json
ENV KEYCLOAK_MIGRATION_STRATEGY OVERWRITE_EXISTING
Docker compose:
version: '3.4'
volumes:
  postgres_kc_data:
    driver: local
services:
  kc_postgresql:
    image: postgres:11.5
    volumes:
      - postgres_kc_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: kc
      POSTGRES_USER: kc
      POSTGRES_DB: kcdb
    ports:
      - 50009:5432
  keycloak:
    build: "./keycloak/" # ref to folder with above Dockerfile
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      DB_VENDOR: POSTGRES
      DB_ADDR: kc_postgresql
      DB_DATABASE: kcdb
      DB_SCHEMA: public
      DB_USER: kc
      DB_PASSWORD: kc
    depends_on:
      - kc_postgresql
    ports:
      - 8080:8080
The log output from running docker compose indicates that it is not able to import the realm, and suggests something about validating the clients. I added no clients, so these are the default ones.
08:39:46,713 WARN [org.keycloak.services] (ServerService Thread Pool -- 67) KC-SERVICES0005: Unable to import realm Demo from file /tmp/my_realm.json.: java.lang.NullPointerException
at org.keycloak.keycloak-services@8.0.2//org.keycloak.url.DefaultHostnameProvider.resolveUri(DefaultHostnameProvider.java:83)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.url.DefaultHostnameProvider.getScheme(DefaultHostnameProvider.java:38)
at org.keycloak.keycloak-server-spi@8.0.2//org.keycloak.models.KeycloakUriInfo.<init>(KeycloakUriInfo.java:46)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.services.DefaultKeycloakContext.getUri(DefaultKeycloakContext.java:79)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.services.util.ResolveRelative.resolveRootUrl(ResolveRelative.java:45)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:44)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.validation.DefaultClientValidationProvider.validate(DefaultClientValidationProvider.java:37)
at org.keycloak.keycloak-server-spi-private@8.0.2//org.keycloak.validation.ClientValidationUtil.validate(ClientValidationUtil.java:30)
at org.keycloak.keycloak-server-spi-private@8.0.2//org.keycloak.models.utils.RepresentationToModel.createClients(RepresentationToModel.java:1224)
at org.keycloak.keycloak-server-spi-private@8.0.2//org.keycloak.models.utils.RepresentationToModel.importRealm(RepresentationToModel.java:362)
at org.keycloak.keycloak-services@8.0.2//org.keycloak.services.managers.RealmManager.importRealm(RealmManager.java:506)
Any pointers are welcome!
Upgrading to 9.0.0 fixed it for me.

How to deploy spring-cloud-config server in docker-compose?

I've got a problem with my docker-compose. I'm trying to create multiple containers with one spring-cloud-config-server, and I'm trying to use the spring-cloud server with my webapps container.
That's the list of my containers:
1 for the database (db)
1 for the spring cloud config server (configproperties)
1 for some webapps (webapps)
1 nginx for reverse proxying (cnginx)
I have already tried using both localhost and my config-server name as the host in the environment variable SPRING_CLOUD_CONFIG_URI and in bootstrap.properties.
My 'configproperties' container uses port 8085 in its server.xml.
That's my docker-compose.yml
version: '3'
services:
  cnginx:
    image: cnginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - configproperties
      - cprocess
      - webapps
  db:
    image: postgres:9.3
    environment:
      POSTGRES_USER: xxx
      POSTGRES_PASSWORD: xxx
    restart: always
    command:
      - -c
      - max_prepared_transactions=100
  configproperties:
    restart: on-failure:2
    image: configproperties
    depends_on:
      - db
    expose:
      - "8085"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydbName?currentSchema=public
  webapps:
    restart: on-failure:2
    image: webapps
    links:
      - "configproperties"
    environment:
      - SPRING_CLOUD_CONFIG_URI=http://configproperties:8085/config-server
      - SPRING_PROFILES_ACTIVE=docker
    depends_on:
      - configproperties
    expose:
      - "8085"
When I run my docker-compose, my webapps deploy with some errors like:
- Could not resolve placeholder 'repository.temps' in value "${repository.temps}"
But when I use Postman and send a request like this:
http://localhost/config-server/myApplication/docker/latest
it works properly; the request returns all the configuration for 'myApplication'.
I think I have missed something, but I can't find it...
Can anyone help me?
Regards,
To use the spring-cloud config server, we just need to run our webapps after the config server has finished initialising.
#!/bin/bash
CONFIG_SERVER_URL=$SPRING_CLOUD_CONFIG_URI/myApp/production/latest
# Check that the config server is up and running
echo 'Waiting for tomcat to be available'
attempt_counter=0
max_attempts=10
while [[ "$(curl -s -o /dev/null -w '%{http_code}' $CONFIG_SERVER_URL)" != "200" ]]; do
  if [ ${attempt_counter} -eq ${max_attempts} ]; then
    echo "Max attempts reached on $CONFIG_SERVER_URL"
    exit 1
  fi
  attempt_counter=$(($attempt_counter+1))
  echo "Return code: $(curl -s -o /dev/null -w '%{http_code}' $CONFIG_SERVER_URL)"
  echo "Attempt to connect to $CONFIG_SERVER_URL : $attempt_counter/$max_attempts"
  sleep 15
done
echo 'Tomcat is available'
mv /usr/local/tomcat/webappsTmp/* /usr/local/tomcat/webapps/

Wrong order of running docker containers in convox/docker-compose

Here is the compose file that I am using. It consists of one postgres db container and one redis container. On top of them there is a gunicorn-django-python web server (docker image: python-3.5). There is one nginx proxy server which is linked to the web container.
version: '2'
services:
  nginx:
    build: ./nginx/
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
    links:
      - web
  web:
    build: ./web
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
    links:
      - redis
      - postgres
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
I am facing an issue while starting the docker containers via
convox start -f docker-compose.yml
The problem is that, ideally, the postgres/redis servers should be started first, then the web server, and then the nginx server at the end, according to their linking order. But actually the web server is getting started first, and it fails without the db/cache. See the error logs below:
web │ running: docker run -i --rm --name python-app-web -e DB_NAME=postgres -e DB_USER=postgres -e DB_PASS=postgres -e DB_SERVICE=postgres -e DB_PORT=5432 -e DEBUG=true -e SECRET_KEY=5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d --add-host redis: -e REDIS_SCHEME=tcp -e REDIS_HOST= -e REDIS_PORT=
-e REDIS_PATH= -e REDIS_USERNAME= -e REDIS_PASSWORD= -e REDIS_URL=tcp://:%0A --add-host postgres: -e POSTGRES_SCHEME=tcp -e POSTGRES_HOST= -e POSTGRES_PORT=
-e POSTGRES_PATH= -e POSTGRES_USERNAME= -e POSTGRES_PASSWORD= -e POSTGRES_URL=tcp://:%0A -p 0:8000 -v /Users/gaurav/.convox/volumes/python-app/web/usr/src/app/static:/usr/src/app/static python-app/web sh -c /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
web │ invalid argument "redis:" for --add-host: invalid IP address in add-host: ""
web │ See 'docker run --help'.
postgres │ running: docker run -i --rm --name python-app-postgres -p 5432 -v pgdata:/var/lib/postgresql/data/ python-app/postgres
redis │ running: docker run -i --rm --name python-app-redis -p 6379 -v redisdata:/data python-app/redis
But it works fine when I completely remove the nginx server; in that case the web server is kicked off after postgres/redis.
I am not able to understand the actual error.
The complete code can be found here on github.
[Note] I found something very strange that I never expected. The problem was the name of the container. If I rename the 'nginx' container to anything like server/webserver/xyz/mynginx/nginxxxx etc., it all works as expected and in order. But it does not work with the name 'nginx'! Strange, isn't it?
Just add a depends_on directive to docker-compose.yml:
version: '2'
services:
  nginx:
    build: ./nginx/
    depends_on:
      - web
    ports:
      - 80:80
    volumes:
      - /usr/src/app/static
  web:
    build: ./web
    depends_on:
      - postgres
      - redis
    ports:
      - 8000:8000
    volumes:
      - /usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
      SECRET_KEY: 5(15ds+i2+%ik6z&!yer+ga9m=e%jcqiz_5wszg)r-z!2--b2d
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASS: postgres
      DB_SERVICE: postgres
      DB_PORT: 5432
    command: /usr/local/bin/gunicorn messanger.wsgi:application -w 2 -b :8000
  postgres:
    image: postgres
    ports:
      - 5432
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: redis
    ports:
      - 6379
    volumes:
      - redisdata:/data
The startup order of the containers will then be OK. But it doesn't mean that when the web app tries to connect to redis, that container will already be accepting connections. If you want to achieve that, you need something like wait-for-it or a docker-compose health check.
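A minimal sketch of the health-check variant for the compose file above (assumes Compose file format 2.1+, which supports depends_on conditions):

```yaml
version: '2.1'
services:
  web:
    build: ./web
    depends_on:
      postgres:
        condition: service_healthy   # wait until pg_isready succeeds
      redis:
        condition: service_healthy   # wait until redis answers PING
  postgres:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      retries: 15
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 2s
      retries: 15
```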
