GitLab Pages in Docker with reverse proxy - docker

I have a Docker GitLab instance at gitlab.example.io, so I configured the following in gitlab.rb:
pages_external_url "http://pages.example.io"
gitlab_pages['enable'] = true
Both gitlab.example.io and pages.example.io point to the same IP address.
Then I created a simple example group named group_test, with a nested group group_nested inside it. Finally, I created a project named project_test in group_nested.
It contains this index.html:
<head>
</head>
<body>
<p>Hello World!</p>
</body>
and this .gitlab-ci.yml:
stages:
  - deploy

pages:
  tags:
    - latest
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - main
After the pipeline finishes I get the URL http://group_test.pages.example.io/group_nested/project_test, but when I follow the link I get a 404 error: "Not found. The requested URL was not found on this server."
Files exist at /var/opt/gitlab/gitlab-rails/shared/pages.
The DNS service has a wildcard A record for *.example.io pointing to that IP address.
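(For reference, such a wildcard can be sanity-checked with dig; the subdomain here is taken from the Pages URL above:)
$ dig +short group_test.pages.example.io   # should print the same IP as gitlab.example.io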
GitLab Pages log:
==> /var/log/gitlab/gitlab-pages/current <==
{"level":"info","msg":"Checking GitLab internal API availability","time":"2021-09-19T21:21:40+03:00"}
{"error":"failed to connect to internal Pages API: HTTP status: 502","level":"warning","msg":"attempted to connect to the API","time":"2021-09-19T21:21:40+03:00"}
{"level":"info","msg":"Checking GitLab internal API availability","time":"2021-09-19T21:21:44+03:00"}
{"error":"failed to connect to internal Pages API: HTTP status: 502","level":"warning","msg":"attempted to connect to the API","time":"2021-09-19T21:21:44+03:00"}
{"level":"info","msg":"Checking GitLab internal API availability","time":"2021-09-19T21:21:49+03:00"}
{"error":"failed to connect to internal Pages API: HTTP status: 502","level":"warning","msg":"attempted to connect to the API","time":"2021-09-19T21:21:49+03:00"}
{"level":"info","msg":"Checking GitLab internal API availability","time":"2021-09-19T21:21:55+03:00"}
{"error":"failed to connect to internal Pages API: HTTP status: 502","level":"warning","msg":"attempted to connect to the API","time":"2021-09-19T21:21:55+03:00"}
{"level":"info","msg":"Checking GitLab internal API availability","time":"2021-09-19T21:22:03+03:00"}
{"level":"info","msg":"GitLab internal pages status API connected successfully","time":"2021-09-19T21:22:03+03:00"}

Related

GitHub Actions run jobs in webserver container unable to connect to localhost

I have a container which starts a webserver. When I run the container on my laptop, log in to its terminal, and make a curl request to 127.0.0.1, I get a result. When I try the same thing in a GitHub Actions workflow I get: "curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused". I have tried things like adding ports (which I don't think should be necessary), but it won't work. I actually think Apache is not running for some reason, but I don't understand why, as it does work locally.
See a minimal workflow file below:
name: Auto tests
on:
  push:
    branches: [ "master", "githubActions" ]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: php:7.4.33-apache-bullseye
    steps:
      - name: GET localhost
        run: curl 127.0.0.1
I am expecting to get something like:
# curl localhost
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.54 (Debian) Server at localhost Port 80</address>
</body></html>
But I get "curl: (7) Failed to connect to localhost port 80: Connection refused" instead.
You are running your steps inside the container, not alongside it (see Running jobs in a container). GitHub runs the steps in that container rather than the image's default Apache startup command, so no server is listening there and curl fails.
What you're looking for is jobs.<job_id>.services. See About service containers for more details.
With services, a sample workflow will be:
name: php_container
on:
  workflow_dispatch:
jobs:
  ci:
    runs-on: ubuntu-latest
    services:
      php:
        image: php:7.4.33-apache-bullseye
        ports:
          - 80:80
    steps:
      - name: Test
        run: curl localhost
Apart from that, if using php:7.4.33-apache-bullseye is not a hard requirement on your side, you can avoid the container and services altogether and use the preinstalled PHP and Apache web server. The preinstalled Apache is inactive by default, so you'll have to start it.
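A minimal sketch of that alternative, assuming the apache2 service that ships (inactive) with the ubuntu-latest runner image:
name: php_preinstalled
on:
  workflow_dispatch:
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Start preinstalled Apache
        run: sudo service apache2 start
      - name: Test
        run: curl localhost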

ddev: Call the endpoint of a certain port of the web container from another container

I set up a Shopware 6 project with ddev. Now I want to write Cypress tests for one of my plugins. The Shopware test suite starts a Node Express server on port 8005 in the web container. I have configured the port for ddev so that I can open the Express endpoint in my browser: http://my.ddev.site:8005/cleanup. That works.
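(For context, recent ddev versions let you declare such a port in .ddev/config.yaml via web_extra_exposed_ports; the entry below is illustrative, not taken from the question:)
web_extra_exposed_ports:
  - name: express
    container_port: 8005
    http_port: 8005
    https_port: 8444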
For cypress I have created a new ddev container with a new docker-compose file:
version: '3.6'
services:
  cypress:
    container_name: ddev-${DDEV_SITENAME}-cypress
    image: cypress/included:4.10.0
    tty: true
    ipc: host
    links:
      - web:web
    environment:
      - CYPRESS_baseUrl=https://web
      - DISPLAY
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      # Project root
      - ../shopware:/project
      # Storefront and Administration
      - ../shopware/vendor/shopware/platform/src/Storefront/Resources/app/storefront/test/e2e:/e2e-Storefront
      - ../shopware/vendor/shopware/platform/src/Administration/Resources/app/administration/test/e2e:/e2e-Administration
      # Custom plugins
      - ../shopware/custom/plugins/MyPlugin/src/Resources/app/administration/test/e2e:/e2e-MyPlugin
      # for Cypress to communicate with the X11 server pass this socket file
      # in addition to any other mapped volumes
      - /tmp/.X11-unix:/tmp/.X11-unix
    entrypoint: /bin/bash
I can now successfully open the Cypress interface and I see my tests. The problem is that before each Cypress test is executed, the Express endpoint is called (with the URL from above), and the Cypress container seems to have no access to it. This is the output:
cy.request() failed trying to load:
http://my.ddev.site:8005/cleanup
We attempted to make an http request to this URL but the request failed without a response.
We received this error at the network level:
> Error: connect ECONNREFUSED 127.0.0.1:8005
-----------------------------------------------------------
The request we sent was:
Method: GET
URL: http://my.ddev.site:8005/cleanup
So I can call this endpoint in my browser, but Cypress can't. Is there some configuration missing in the Cypress container to reach port 8005 on the web container?
You need to add this to the cypress service:
external_links:
  - "ddev-router:${DDEV_HOSTNAME}"
and then your http URL will be reached through the ddev router via the .ddev.site hostname.
If you need a trusted https URL it's a little more complicated, but for http this should work fine.
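Put together with the compose file above, the addition would look roughly like this (only the new key is shown):
services:
  cypress:
    # ... existing configuration from above ...
    external_links:
      - "ddev-router:${DDEV_HOSTNAME}"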

How to pull from a private registry in GitLab CI, with Docker DinD

I'm using GitLab Runners with the Docker executor, and I'm trying to pull some Docker images to run tests. To save network bandwidth, I've created a private Docker registry to "cache" the images.
My registry is linked to my GitLab Runner (configured in config.toml: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdocker-section).
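For reference, that kind of link in the runner configuration looks roughly like this (a sketch; the container name registry is assumed from the question):
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  executor = "docker"
  [runners.docker]
    # make the "registry" container reachable from job containers
    links = ["registry:registry"]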
This works; the job's image can reach the registry:
$ wget http://registry:5000/v2/_catalog
--2019-02-15 10:40:54-- http://registry:5000/v2/_catalog
Resolving registry... 172.17.0.3
Connecting to registry|172.17.0.3|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20 [application/json]
Saving to: '_catalog'
0K 100% 1.17M=0s
2019-02-15 10:40:54 (1.17 MB/s) - '_catalog' saved [20/20]
but the DinD service can't:
$ docker pull registry:5000/arminc/clair-db:latest
Error response from daemon: Get http://registry:5000/v2/: dial tcp: lookup registry on 192.168.9.254:53: no such host
My .gitlab-ci.yml configuration for this job:
scan:image:
  stage: scans
  image: docker:git
  services:
    - name: docker:dind
      command: ["--insecure-registry=registry:5000"]
  variables:
    DOCKER_DRIVER: overlay2
  allow_failure: true
  script:
    - chmod 777 ./docker/scan.sh
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $DOCKER_REGISTRY
    - ./docker/scan.sh
  artifacts:
    paths: [gl-container-scanning-report.json]
  only:
    - master
You probably need to add a DNS entry for registry to your DNS server, or to the Docker host's hosts file:
192.168.xx.xxx registry
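Alternatively, the runner itself can inject that mapping into the containers it starts, via the extra_hosts option of the [runners.docker] section (sketch; the IP stays a placeholder):
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  [runners.docker]
    extra_hosts = ["registry:192.168.xx.xxx"]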

How to manage Docker private registry

I've set up Docker and am running a private registry on example.com:5000. I followed the instructions listed here: https://docs.docker.com/registry/deploying/
It uses this docker-compose.yml:
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
    - /path/data:/var/lib/registry
    - /path/certs:/certs
    - /path/auth:/auth
I can push and pull images to the registry, but I can't get docker search example.com:5000/library to work. I get: Error response from daemon: Unexpected status code 404.
When I point curl to the endpoint I get the following result:
$ curl -v -X GET http://example.com:5000/v2/images
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 37.139.20.160...
* Connected to example.com (192.167.201.2) port 5000 (#0)
> GET /v2/images HTTP/1.1
> Host: domain.com:5000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Connection #0 to host example.com left intact
How can I make the search command work so that I can manage the registry? Where can I find the API documentation for that endpoint? Or are there better ways to manage a private Docker registry?
It seems you have to activate the search option, according to the search-engine options documentation:
The Docker Registry can optionally index repository information in a database for the GET /v1/search endpoint.
(I don't see a search endpoint in the V2 API; you can list tags, though.)
The search_backend setting selects the search backend to use.
If search_backend is empty, no index is built, and the search endpoint always returns empty results.
For instance, using the SQLAlchemy database:
common:
  search_backend: sqlalchemy
  sqlalchemy_index_database: sqlite:////tmp/docker-registry.db
On initialization, the SQLAlchemyIndex class checks the database version. If the database doesn't exist yet (or does exist, but lacks a version table), the SQLAlchemyIndex creates the database and required tables.
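Note that the snippet above is V1 registry configuration. For a registry:2 instance like yours there is no search endpoint, but you can list repositories and tags directly with the V2 API (-u matches your htpasswd auth; <name> is a repository name):
$ curl -u user:pass https://example.com:5000/v2/_catalog
$ curl -u user:pass https://example.com:5000/v2/<name>/tags/list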

How do I resolve Docker issues with ice login?

I am using the ice command line interface for IBM Container Services, and I am seeing a couple of different problems from a couple of different boxes I am testing with. Here is one example:
[root@cds-legacy-monitor ~]# ice --verbose login --org chrisr@ca.ibm.com --space dev --user chrisr@ca.ibm.com --registry registry-ice.ng.bluemix.net
#2015-11-26 01:38:26.092288 - Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org='chrisr@ca.ibm.com', psswd=None, reg_host='registry-ice.ng.bluemix.net', skip_docker=False, space='dev', subparser_name='login', user='chrisr@ca.ibm.com', verbose=True)
#2015-11-26 01:38:26.092417 - Executing: cf login -u chrisr@ca.ibm.com -o chrisr@ca.ibm.com -s dev -a https://api.ng.bluemix.net
API endpoint: https://api.ng.bluemix.net
Password>
Authenticating...
OK
Targeted org chrisr@ca.ibm.com
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.40.0)
User: chrisr@ca.ibm.com
Org: chrisr@ca.ibm.com
Space: dev
#2015-11-26 01:38:32.186204 - cf exit level: 0
#2015-11-26 01:38:32.186340 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.186640 - Bearer: <long string omitted>
#2015-11-26 01:38:32.186697 - cf login succeeded. Can access: https://api-ice.ng.bluemix.net/v3/containers
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v3/containers completed successfully
You can issue commands now to the container service
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
#2015-11-26 01:38:32.187317 - using bearer token
#2015-11-26 01:38:32.187350 - config.json path: /root/.cf/config.json
#2015-11-26 01:38:32.187489 - Bearer: <long pw string omitted>
#2015-11-26 01:38:32.187517 - Org Guid: dae00d7c-1c3d-4bfd-a207-57a35a2fb42b
#2015-11-26 01:38:32.187551 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
FATA[0012] Error response from daemon: </html>
#2015-11-26 01:38:44.689721 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:38:44.689842 - Exit err level = 2
On the other box, it also fails, but the final error is slightly different.
#2015-11-26 01:44:48.916034 - docker login -u bearer -p '<long pw string omitted>' -e a@b.c registry-ice.ng.bluemix.net
Error response from daemon: Unexpected status code [502] : <html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
#2015-11-26 01:45:02.582753 - docker call exit level: 256
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
#2015-11-26 01:45:02.582868 - Exit err level = 2
Any thoughts on what might be causing these issues?
The errors refer to the same problem: ice isn't finding a working local Docker environment.
That doesn't prevent you from working remotely on Bluemix, but without a local Docker environment ice cannot work with local containers.
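A quick way to verify that before retrying ice login (standard Docker CLI commands):
$ docker version   # should show both Client and Server sections
$ docker info      # errors out if the daemon isn't running or reachable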