I am trying to do something I thought was simple, but it looks like I'm missing something.
I have a web app, packaged in a Docker image I manage. It starts a server listening on port 9000 and has an endpoint that publishes metrics at /admin/metrics.
The application is deployed on a system that requires me to publish those metrics on port 9100, with the path /metrics. I could change the application, run a second server, etc., but for fun I tried something quicker (or so I thought): running a companion reverse proxy.
I chose Traefik, and I managed to configure it properly using a file provider: when running on my machine (no container), it correctly forwards calls to /metrics on port 9100 to my app's /admin/metrics. But when running inside the container, it only returns 404 errors, even though the configuration seems fine. I also tried running only the app in the container and having Traefik on my machine route to it, but that fails too.
This is my configuration:
#/app/traefik.toml
[entryPoints]
[entryPoints.MetricsProxy]
address = ":9100"
[providers]
providersThrottleDuration = 42
[providers.file]
directory = "/app"
watch = false
[api]
insecure = false
dashboard = false
debug = false
[log]
level = "TRACE"
#/app/metrics.toml
[http]
[http.routers]
[http.routers.Router0]
entryPoints = ["MetricsProxy"]
middlewares = ["PathConvert"]
service = "MetricsService"
rule = "Path(`/metrics`)"
[http.services]
[http.services.MetricsService]
[http.services.MetricsService.loadbalancer]
[[http.services.MetricsService.loadBalancer.servers]]
url = "http://0.0.0.0:9000"
[http.middlewares]
[http.middlewares.PathConvert]
[http.middlewares.PathConvert.addPrefix]
prefix = "/admin"
Please note that I tried replacing 0.0.0.0 with 127.0.0.1 or localhost; neither works.
Finally, the Dockerfile:
FROM openjdk:8-jre-slim
WORKDIR /app
RUN \
apt-get update -qq && apt-get install -y -qq curl && \
curl -sSL https://github.com/containous/traefik/releases/download/v2.0.4/traefik_v2.0.4_linux_amd64.tar.gz | tar -xz
COPY bin/myapp.sh .
COPY target/universal/bluevalet-server.zip .
COPY deploy/traefik/traefik.toml .
COPY deploy/traefik/metrics.toml .
COPY deploy/nginx.conf .
COPY deploy/run.sh .
#run.sh ~~> ./traefik --configfile /app/traefik.toml & ./myapp.sh
CMD [ "/app/run.sh" ]
EXPOSE 9000
EXPOSE 9100
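For reference, the container is run with both exposed ports published, along these lines (the image name myapp-proxy is made up for illustration):
docker build -t myapp-proxy .
docker run -p 9000:9000 -p 9100:9100 myapp-proxy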
I guess there is something wrong with "localhost" in the service definition, but I cannot understand what.
Does anyone have an idea?
Not sure why it works this way, but I succeeded with a different Traefik configuration:
[http]
[http.routers]
[http.routers.Router0]
entryPoints = ["MetricsProxy"]
middlewares = ["PathConvert"]
service = "MetricsService"
rule = "Path(`/metrics`)"
[http.services]
[http.services.MetricsService]
[http.services.MetricsService.loadbalancer]
[[http.services.MetricsService.loadBalancer.servers]]
url = "http://localhost:9000/"
[http.middlewares]
[http.middlewares.PathConvert]
[http.middlewares.PathConvert.replacePathRegex]
regex = "^/metrics"
replacement = "/admin/metrics/prometheus"
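A quick way to check the routing from the host (assuming port 9100 is published as in the Dockerfile above) is:
# Traefik should rewrite the path and forward the request to
# http://localhost:9000/admin/metrics/prometheus inside the container
curl -v http://localhost:9100/metrics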
I'm trying to set up JupyterHub with DockerSpawner. After I log in to my JupyterHub I get the following error:
500 : Internal Server Error
Error in Authenticator.pre_spawn_start: ChunkedEncodingError ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
You can try restarting your server from the home page.
My JupyterHub config looks as follows:
from jupyterhub.auth import Authenticator
class DictionaryAuthenticator(Authenticator):
    passwords = {'max': '123'}

    async def authenticate(self, handler, data):
        if self.passwords.get(data['username']) == data['password']:
            return data['username']
# docker image tag in the docker registry
c.DockerSpawner.image = 'jupyterhub/singleuser:latest'
# listen on all interfaces
c.DockerSpawner.host_ip = "0.0.0.0"
c.DockerSpawner.network_name = 'jupyterhub'
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.JupyterHub.authenticator_class = DictionaryAuthenticator
and this is the content of my Dockerfile:
FROM python:3.7
RUN pip3 install \
jupyterhub==1.0.0 \
'notebook>=5.0,<=6.0'
# create a user, since we don't want to run as root
RUN useradd -m max
ENV HOME=/home/max
WORKDIR $HOME
USER max
CMD ["jupyterhub-singleuser"]
How can I fix this error?
Thanks for the help in advance!
I was able to solve the issue. In my case, DockerSpawner only works with Docker version 2.3.0.5 (macOS).
If someone experiences the same issue, just downgrade.
I am using the GoLand IDE on macOS and I'm trying to debug an application running in a container. It is essentially remote debugging, except that the container is on my local machine.
When I run the debugger in my IDE it does stop at the breakpoint, but what it ends up debugging is the application on my local machine, not the one in the container.
For background, my application is supposed to listen on port 8000 and return "Hello, visitor!".
If I compile and run this file in a Docker container, map port 8000, and make a request through the browser or an .http file, I do receive this response.
However, when I run it through Delve in the container, it does not respond in the browser.
Also, once the container is up, when I start the debugger in my IDE it does not debug the application in the container; instead it complains:
2020/08/05 17:57:39 main.go:16: listen tcp :8000: bind: address already in use
I've tried following these two tutorials, which are mostly the same except for the versions of the Docker images they use.
Tutorial1
Tutorial2
I have gone through all the comments on these 2 posts as well but haven't found anything that would solve my problem.
Here is my main.go
package main
import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Set the flags for the logging package to give us the filename in the logs
    log.SetFlags(log.LstdFlags | log.Lshortfile)
    log.Println("starting server...")

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        _, _ = fmt.Fprintln(w, `Hello, visitor!`)
    })

    log.Fatal(http.ListenAndServe(":8000", nil))
}
Here is my Dockerfile:
# Compile stage
FROM golang AS build-env
# Build Delve
RUN git config --global http.sslVerify "false"
RUN git config --global http.proxy http://mycompanysproxy.com:80
RUN go get github.com/go-delve/delve/cmd/dlv
ADD . /dockerdev
WORKDIR /dockerdev
RUN go build -gcflags="all=-N -l" -o /server
# Final stage
FROM debian:buster
EXPOSE 8000 40000
WORKDIR /
COPY --from=build-env /go/bin/dlv /
COPY --from=build-env /server /
CMD ["/dlv", "--listen=:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/server"]
The container comes up successfully and the attached console's log says:
API server listening at: [::]:40000
However, it does not seem to be listening.
If I run
GET http://localhost:8000/
Accept: application/json
I expect it to stop on the breakpoint but it doesn't. Rather it complains:
org.apache.http.NoHttpResponseException: localhost:8000 failed to respond
Am I missing something?
Is this the right way to invoke a debugger on a containerized app?
Some more information:
I figured out that I was using the wrong debug configuration. I need to press the debug button with the remote debug configuration (top right) selected.
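For anyone hitting the same thing, the setup that matches the Dockerfile above is roughly (the image name myapp-dlv is made up):
docker build -t myapp-dlv .
# publish both the HTTP port and the Delve API port
docker run -p 8000:8000 -p 40000:40000 myapp-dlv
# in GoLand, attach with a "Go Remote" run configuration pointing at localhost:40000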
I have created an Azure Function that exposes an API used in another project. I would like the team that works on the front end to have a Docker image of our API to help them in their development. I have packaged my Azure Function using the Dockerfile generated when running func init LocalFunctionsProject --worker-runtime dotnet --docker, as per this guide.
This results in the following Dockerfile contents:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
mkdir -p /home/site/wwwroot && \
dotnet publish *.csproj --output /home/site/wwwroot
FROM mcr.microsoft.com/azure-functions/dotnet:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]
I have then created a docker-compose.yml file on the front end project:
version: "3.7"
services:
frontend:
image: frontend:latest
build: .
ports:
- "3000:3000"
env_file:
- frontend.docker.env
backend:
image: backend:latest
env_file:
- backend.docker.env
ports:
- "8080:80"
This successfully spins up two containers, one with the front end and another with the back end. Inside my frontend.docker.env I have set my variable to point to the backend service, so that calls from the front end are directed to http://backend/api/myendpoint.
However, this is where everything fails; I'm getting a CORS issue:
What I have tried:
Whenever I call my backend on the exposed port 8080 from Postman, everything works fine. I have tried manually adding an Access-Control-Allow-Origin header set to * on my response, which I can verify is coming through in Postman. However, the front end still gets the CORS issue.
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend/api/myendpoint. (Reason: CORS request did not succeed)
My other approach is changing the web.config inside the azure-functions-host directory manually, by running the following instruction in my Dockerfile:
RUN sed -i 's/<customHeaders>/<customHeaders><add name="Access-Control-Allow-Origin" value="*" \/><add name="Access-Control-Allow-Headers" value="Content-Type" \/><add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" \/>/g' azure-functions-host/web.config
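For readability, that sed amounts to injecting the following into the <customHeaders> section of azure-functions-host/web.config:
<customHeaders>
  <add name="Access-Control-Allow-Origin" value="*" />
  <add name="Access-Control-Allow-Headers" value="Content-Type" />
  <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
  <!-- ...any headers already present... -->
</customHeaders>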
After that I see the following headers coming from the response in Postman:
Access-Control-Allow-Headers = X-Requested-With,content-type
Access-Control-Allow-Methods = GET, POST, OPTIONS, PUT, PATCH, DELETE
Access-Control-Allow-Origin = *
However, I'm still facing the same issue. Postman works, the front end doesn't...
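One way to compare the two cases is to reproduce from the host both the plain request and a preflight like the one a browser may send for non-simple requests (port and path as above):
curl -i http://localhost:8080/api/myendpoint
# browser-style preflight request
curl -i -X OPTIONS http://localhost:8080/api/myendpoint \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: GET"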
Do you have any idea what the issue might be, or how to properly set up CORS?
I am running the gerritcodereview/gerrit Docker container, where Gerrit is installed in the /var/gerrit directory. But when I try to install plugins by docker cp-ing a plugin .jar file (downloaded from https://gerrit-ci.gerritforge.com/job/plugin-its-jira-bazel-stable-2.16/) into the /var/gerrit/plugins directory, the plugins do not show up in the list of installed plugins, even though I restarted the container.
I ran Gerrit with:
docker run -ti -p 8080:8080 -p 29418:29418 gerritcodereview/gerrit
And Gerrit is accessible via:
http://localhost:8080/admin/plugins
I also have a list of plugins in the plugin manager at http://localhost:8080/plugins/plugin-manager/static/index.html, but I don't know how to add more plugins to that list; I have tried using the gerrit-ci.gerritforge.com URL in [httpd].
My gerrit.config file looks like this:
[gerrit]
basePath = git
serverId = 62b710a2-3947-4e96-a196-6b3fb1f6bc2c
canonicalWebUrl = http://10033a3fe5b7
[database]
type = h2
database = db/ReviewDB
[index]
type = LUCENE
[auth]
type = DEVELOPMENT_BECOME_ANY_ACCOUNT
[sendemail]
smtpServer = localhost
[sshd]
listenAddress = *:29418
[httpd]
listenUrl = http://*:8080/
filterClass = com.googlesource.gerrit.plugins.ootb.FirstTimeRedirect
firstTimeRedirectUrl = /login/%23%2F?account_id=1000000
[cache]
directory = cache
[plugins]
allowRemoteAdmin = true
[container]
javaOptions = "-Dflogger.backend_factory=com.google.common.flogger.backend.log4j.Log4jBackendFactory#getInstance"
javaOptions = "-Dflogger.logging_context=com.google.gerrit.server.logging.LoggingContext#getInstance"
user = gerrit
javaHome = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre
javaOptions = -Djava.security.egd=file:/dev/./urandom
[receive]
enableSignedPush = false
[noteDb "changes"]
autoMigrate = true
I am pretty sure that Gerrit runs from /var/gerrit, even for your version, as that is the version I used before.
Why don't you use docker-compose together with a custom Dockerfile? This way you can easily recreate your image and don't need to worry about adding plugins again after you upgrade your version.
I would suggest that you play around with these scripts and use them for your testing.
This is what my Dockerfile looks like for my previous 2.16 installation:
FROM gerritcodereview/gerrit:2.16.8
# Add custom plugins that are not downloaded from the web
COPY ./plugins/* /var/gerrit/plugins/
# Add logo
COPY ./static/* /var/gerrit/static/
ADD https://gerrit-ci.gerritforge.com/view/Plugins-stable-2.16/job/plugin-avatars-gravatar-bazel-master-stable-2.16/lastSuccessfulBuild/artifact/bazel-genfiles/plugins/avatars-gravatar/avatars-gravatar.jar /var/gerrit/plugins/
USER root
# Fix any permissions
RUN chown -R gerrit:gerrit /var/gerrit
USER gerrit
ENV CANONICAL_WEB_URL=https://gerrit.mycompoany.net/r/
And below is the docker-compose.yml:
version: '3.4'
services:
  gerrit:
    build: .
    ports:
      - "29418:29418"
      - "8080:8080"
    restart: unless-stopped
    volumes:
      - /external/gerrit2.16/etc:/var/gerrit/etc
      - /external/gerrit2.16/git:/var/gerrit/git
      - /external/gerrit2.16/index:/var/gerrit/index
      - /external/gerrit2.16/cache:/var/gerrit/cache
      - /external/gerrit2.16/logs:/var/gerrit/logs
      - /external/gerrit2.16/.ssh:/var/gerrit/.ssh
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war init --install-all-plugins -d /var/gerrit
    # entrypoint: java -jar /var/gerrit/bin/gerrit.war reindex -d /var/gerrit
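With the Dockerfile and docker-compose.yml in the same directory, rebuilding the image (for example after adding a plugin) and recreating the container is then a single command:
# rebuild the image and restart the gerrit service
docker-compose up -d --build gerrit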
I finally found a way that works for my use case.
Copy the content of your public key and add it to the SSH settings in the web UI profile of my_gerrit_admin_username.
Add the key to ssh-agent:
eval `ssh-agent`
ssh-add .ssh/id_rsa
From a terminal outside the container, run:
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin install -n its-base.jar https://gerrit-ci.gerritforge.com/job/plugin-its-base-bazel-stable-2.16/lastSuccessfulBuild/artifact/bazel-bin/plugins/its-base/its-base.jar
Then check in the web browser that the plugin is installed among the plugins.
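The same check also works over SSH instead of the web UI (same account and port as above):
# list the plugins Gerrit currently has installed
ssh -p 29418 my_gerrit_admin_username@localhost gerrit plugin ls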
I have PhpStorm set up with Xdebug, and a Docker container that is set up with Xdebug included and the browser extension applied. However, when I try to 'pick up the phone' and reload the code running in that environment, nothing is ever picked up.
I've tried a variety of ports and server names, following the tutorials I could find. I am not certain whether it's a security issue or a bad Docker setup, but I am relatively certain the PhpStorm setup is correct.
I've also tried exposing ports in the Dockerfile (9000 and 9001).
My .php file is just echo statements and some math, with breakpoints applied:
echo("TEST 1<br>");
$test = 2;
echo("TEST " . $test . "<br>");
$testArray = xdebug_get_code_coverage();
var_dump($testArray);
phpinfo();
dd("TEST 3");
In my .env file, the following is defined:
PHP_IDE_CONFIG=serverName=jumbledowns-demo
XDEBUG_CONFIG=remote_host=localhost remote_port=9001
And my Dockerfile sets up Xdebug thus:
RUN pecl install xdebug; \
docker-php-ext-enable xdebug; \
echo "error_reporting = E_ALL" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
echo "display_startup_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
echo "display_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini;
phpinfo() on the browser side shows the output as expected, and the following settings for Xdebug:
Directive            Local Value   Master Value
xdebug.remote_host   localhost     localhost
xdebug.remote_log    no value      no value
xdebug.remote_mode   req           req
xdebug.remote_port   9001          9000
I've tried using URLs and assigned IP addresses in the above and can change them as needed.
Ideally, I'd like to have a command-line log or some output showing what Xdebug is trying to call and on what port.
In order to find out what Xdebug is trying to do, you need to set the xdebug.remote_log setting to something non-empty, such as xdebug.remote_log=/tmp/xdebug.log. That file will include all connection attempts, and communication protocol contents if a connection is made.
remote_host=localhost
is almost certainly not correct though, as PHP and Xdebug, running in your Docker container, need to open a connection to this host, and localhost is not going to be the right hostname/IP address. It's more likely to be host.docker.internal or something like that; or rather, hard-code the IP address of the machine where your IDE runs, as it is accessible from your Docker container.
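Putting the two suggestions together, the .env entries would look something like this (host.docker.internal works for Docker Desktop; otherwise hard-code the IP of the machine running PhpStorm):
PHP_IDE_CONFIG=serverName=jumbledowns-demo
# point Xdebug at the host running the IDE and log every connection attempt
XDEBUG_CONFIG=remote_host=host.docker.internal remote_port=9001 remote_log=/tmp/xdebug.log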