Docker: Artifactory CE for C++ behind a reverse proxy (using Traefik)

I'm trying to run Artifactory (Artifactory CE for C++, v7.6.1) behind a reverse proxy (Traefik v2.2, latest).
Both are official, unaltered Docker images, and I'm starting them with docker-compose.
My docker-compose.yml uses the following Traefik configuration for the Artifactory service:
image: docker.bintray.io/jfrog/artifactory-cpp-ce
[...]
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.docker.network=docker-network"
Note: docker-network is just a simple external Docker network. I still have that label in there from my Traefik v1 setup.
Artifactory is initially accessible at http://localhost/artifactory/, but only right after startup. As soon as Artifactory redirects me to its UI, it takes me to http://localhost/ui/ instead of (I guess?) http://localhost/artifactory/ui/, which is invalid.
I'm looking for either a way to tell Artifactory to account for the /artifactory prefix when redirecting, or a way in Traefik to rewrite Artifactory's redirect response so that the redirect URL matches the path.
I'm also running Jenkins behind Traefik; there it was as simple as adding
JENKINS_OPTS: "--prefix=/jenkins"

Artifactory CE for C++ opens up two ports: 8081 and 8082. I suspect your reverse proxy points to the former, port 8081, doesn't it? The UI is, AFAIK, served on port 8082. Did you try that port?
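If that's the case, a minimal sketch of the fix (the service name artifactory is an assumption; the label key itself is standard Traefik v2):
labels:
  - "traefik.enable=true"
  # With two exposed ports, tell Traefik explicitly which one to route to:
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"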

Related

Local and docker development IP resolution for one container to reach another

Let's assume we have two backend apps, where backend_for_frontend needs to fetch some data from api.
If both apps run in Docker, or api runs in Docker and backend_for_frontend runs locally, backend_for_frontend can use the address http://host.docker.internal:3001/api to connect to api.
If both apps run locally (not in Docker), then backend_for_frontend needs to use http://127.0.0.1:3001/api to connect to api.
The issue is that when we switch api between Docker and local, backend_for_frontend needs a different address, which has to be changed by hand because backend_for_frontend doesn't know how api is run.
Is there a way to resolve this address automatically, or to pass it as an env variable that works in every case? Basically I want to run backend_for_frontend and api in any combination while backend_for_frontend's connection URL still resolves without manual intervention.
docker-compose example:
services:
  api:
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
That's a very common configuration scenario and you'll usually solve it by setting an environment variable in backend_for_frontend to the URL of the API.
Let's call the environment variable API_URL. Then you can do
services:
  api:
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
    environment:
      - API_URL=http://api:3001/
Then, when you run the API locally, you'd change it to http://host.docker.internal:3001/.
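Concretely, that local-API case might look like this (a minimal sketch, assuming a platform where host.docker.internal resolves, e.g. Docker Desktop):
backend_for_frontend:
  ports:
    - 3002:3002
  environment:
    # api now runs on the host, so go through Docker's host alias:
    - API_URL=http://host.docker.internal:3001/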
You'll need to change your backend_for_frontend code to fetch the URL from the environment variable. There's no universal way of doing that; it depends on the language backend_for_frontend is written in.
If there's a URL you want as the default, you can add an ENV statement to the backend_for_frontend Dockerfile to set it. Then you only need to specify the variable in your docker-compose file when you want to override it.
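For example, a one-line sketch of that default in backend_for_frontend's Dockerfile (assuming the in-network URL from above as the default):
# Used whenever docker-compose doesn't override API_URL:
ENV API_URL=http://api:3001/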
This can be achieved by using a hostname instead of a direct IP address to access the API.
That gives you the flexibility to use the same hostname regardless of whether the API runs locally or in Docker.
For example, you can use http://api:3001/api as the base URL for the API in backend_for_frontend, and then set up the Docker Compose file to define a hostname for the api service:
services:
  api:
    hostname: api
    ports:
      - 3001:3001
  backend_for_frontend:
    ports:
      - 3002:3002
It's just the same as if you don't dockerize: call them by host and port.
Your first api would be called via http://localhost:3001,
and the second via http://localhost:3002.
P.S. You can use a network so they talk over an internal network.
That's all.
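A minimal sketch of that internal network (the network name internal is arbitrary):
services:
  api:
    networks:
      - internal
  backend_for_frontend:
    networks:
      - internal
networks:
  internal: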

Docker DinD fails to build images after upgrading

We are using Docker 18.09.8-dind. DinD (Docker-in-Docker) means running Docker in a separate container. This way, we send requests to this container to build our images, instead of executing Docker on the machine that wants the built image.
We needed to upgrade from 18.09.8-dind to 20.10.14-dind. Since we use Kubernetes, we just updated the image version in some YAML files:
spec:
  containers:
  - name: builder
-   image: docker:18.09.8-dind
+   image: docker:20.10.14-dind
    args: ["--storage-driver", "overlay2", "--mtu", "1460"]
    imagePullPolicy: Always
    resources:
Alas, things stopped working after that. Builds failed, and we found these error messages in the service that talks to our Docker builder:
{"errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"123.456.789.10","port":2375}
Something went wrong and the entire build was interrupted due to an incorrect configuration file or build step,
check your source code.
What can be going on?
We checked the logs in the Docker pod, and found this message at the end:
API listen on [::]:2376
Well, the message in our question states that we tried to connect to port 2375, which used to work. Why did the port change?
Docker enables TLS by default from version 19.03 onwards. When Docker uses TLS, it listens on port 2376.
We had three alternatives here:
- change the port to 2375 (which sounds like a bad idea: we would use the default plaintext port for TLS communication, a very confusing setup);
- connect to the new port; or
- disable TLS.
In general, connecting to the new port is probably the best solution. However, for reasons specific to us, we chose to disable TLS, which only requires an environment variable in yet another YAML file:
- name: builder
  image: docker:20.10.14-dind
  args: ["--storage-driver", "overlay2", "--mtu", "1460"]
+ env:
+ - name: DOCKER_TLS_CERTDIR
+   value: ""
  imagePullPolicy: Always
  resources:
    requests:
In most scenarios, though, it is probably better to have TLS enabled and change the port in the client.
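For that route, a hedged sketch of the client side, assuming the daemon is reachable under the hostname builder and the dind-generated client certificates are mounted at /certs/client (both names are assumptions); DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH are the standard Docker client variables:
- name: build-client
  image: docker:20.10.14
  env:
  - name: DOCKER_HOST          # talk to the daemon's TLS port
    value: "tcp://builder:2376"
  - name: DOCKER_TLS_VERIFY    # any non-empty value turns on TLS verification
    value: "1"
  - name: DOCKER_CERT_PATH     # client cert/key generated by the dind entrypoint
    value: "/certs/client"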
(Sharing in the spirit of Can I answer my own questions? because it took us some time to piece the parts together. Maybe by sharing this information together with the error message, things can be easier for other affected people to find.)

Redirect AWS sdks' default endpoint to mocked localstack endpoints

I have multiple Java spring boot services (around 20 of them) using Amazon SDKs for S3, SQS, DynamoDB, etc..
Currently, to use Amazon Web Services, I only need to specify my AWS key and secret:
ACCESS_AWS_KEY=<MY_KEY>
ACCESS_AWS_SECRET=<MY_SECRET>
However, I wanted to set up an offline dev environment, so I started dockerizing my services and building a single multi-container setup with all of them, where localstack should be used instead of the remote AWS services to allow fully offline development.
docker-compose.yml looks something like this:
version: '3'
services:
  service_1:
    build: ./repos/service_1
    links:
      - service_2
      - localstack
  service_2:
    build: ./repos/service_2
    links:
      - localstack
  service_3:
    build: ./repos/service_3
    links:
      - localstack
  localstack:
    image: localstack/localstack
The Amazon SDK provides an AWS_REGION env variable, but no endpoint env variable that I could easily set in all services.
I also don't want to make code changes in my services to accommodate the new non-default endpoint.
I want a generic solution to forward requests like this:
dynamodb.eu-west-1.amazonaws.com => localstack_1:4569
s3-eu-west-1.amazonaws.com => localstack_1:4572
where localstack_1 is the linked localstack container, reachable from the other containers.
I came across extra_hosts: in docker-compose, but it only maps names to fixed IPs and doesn't do hostname resolution.
Also, notice that I have dozens of ports exposed in localstack, from 4569 to 4582.
I thought about running a script on each machine to set up vhosts in some way, or forwarding all outgoing connections from all containers to a centralized request-forwarder service, but I have no clue where to start.
This will only be used as an offline dev environment and will not receive any real traffic.
OK, I was finally able to find a solution for this. I actually had to go through the localstack code base to find it.
A couple of quick things:
Localstack is not integrated with IAM, so it simply ignores the secret key or password.
If you're using IAM, you now need a flag to override the endpoint. You can probably use a flag that indicates localstack mode.
A couple of classes I found helpful if you're debugging issues:
https://github.com/atlassian/localstack/blob/master/localstack/ext/java/src/test/java/com/atlassian/localstack/SQSMessagingTest.java
https://github.com/atlassian/localstack/blob/master/localstack/ext/java/src/test/java/com/atlassian/localstack/TestUtils.java
Now for the solution:
// Point the SQS client at localstack's endpoint instead of the AWS default.
AwsClientBuilder.EndpointConfiguration endpoint =
        new AwsClientBuilder.EndpointConfiguration("http://localhost:4576/", awsRegion.getName());
AmazonSQSClient client = (AmazonSQSClient) AmazonSQSClientBuilder.standard()
        .withCredentials(configuration.getSqsConfiguration().getCredentialsProvider())
        .withEndpointConfiguration(endpoint)
        .build();
Here http://localhost:4576/ is where localstack runs SQS. Don't miss the trailing slash.
Using this in a Camel route is the same as using the real AWS resources. Hopefully this helps!
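If you'd rather not hardcode the endpoint, one option is to wire it through Compose. A minimal sketch, where AWS_ENDPOINT_OVERRIDE is a hypothetical variable your code would read when building the EndpointConfiguration:
services:
  service_1:
    build: ./repos/service_1
    environment:
      # Inside the Compose network, localstack is reachable by its service name:
      - AWS_ENDPOINT_OVERRIDE=http://localstack:4576/
  localstack:
    image: localstack/localstack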

How to configure dns entries for Docker Compose

I am setting up a Spring application to run using Compose. The application needs to establish a connection to ActiveMQ, either running locally for developers or existing instances for staging/production.
I set up the following, which is working great for local dev:
amq:
  image: rmohr/activemq:latest
  ports:
    - "61616:61616"
    - "8161:8161"
legacy-bridge:
  image: myco/myservice
  links:
    - amq
and in the application configuration I am declaring the AMQ connection as
broker-url=tcp://amq:61616
Running docker-compose up works great: ActiveMQ fires up locally, and my application container starts and connects to it.
Now I need to set this up for staging/production, where the ActiveMQ instances run on existing hardware within the infrastructure. My thought is to either use Spring profiles to handle different configurations, in which case broker-url=tcp://amq:61616 would become something like broker-url=tcp://some.host.here:61616, or to find some way to create a DNS entry within my production docker-compose.yml that points an amq entry at the associated staging or production queues.
What is the best approach here, and if it is DNS, how do I set that up in Compose?
Thanks!
Using the extra_hosts flag
First thing that comes to mind is using Compose's extra_hosts flag:
legacy-bridge:
  image: myco/myservice
  extra_hosts:
    - "amq:1.2.3.4"
This will not create a DNS record, but an entry in the container's /etc/hosts file, effectively allowing you to continue using tcp://amq:61616 as your broker URL in your application.
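Since you only want this in staging/production, you can keep the entry in a separate override file and apply it with Compose's standard multi-file mechanism (the file name below is arbitrary):
# docker-compose.production.yml
# Run with: docker-compose -f docker-compose.yml -f docker-compose.production.yml up
legacy-bridge:
  extra_hosts:
    - "amq:1.2.3.4"   # IP of the production broker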
Using an ambassador container
If you're not content with directly specifying the production broker's IP address and would like to leverage existing DNS records, you can use the ambassador pattern:
amq-ambassador:
  image: svendowideit/ambassador
  command: ["your-amq-dns-name", "61616"]
  ports:
    - 61616
legacy-bridge:
  image: myco/myservice
  links:
    - "amq-ambassador:amq"

How to setup hostnames using docker-compose?

I have set up a few Docker containers with docker-compose.
When I start them via docker-compose up, I can access them via their exposed ports, e.g. localhost:9080 and localhost:9180.
I really would like to access them via hostnames: localhost:9180 should be accessible on my localhost via api.local, and localhost:9080 via webservice.local.
How can I achieve that? Is that something docker-compose can do, or do I have to use a reverse proxy on my localhost?
Currently my docker-compose.yml looks like this:
api:
  build: .
  ports:
    - "9180:80"
    - "9543:443"
  external_links:
    - mysql_mysql_1:mysql
  links:
    - booking-api
webservice:
  ports:
    - "9080:80"
    - "9443:443"
  image: registry.foo.bar:5000/webservice:latest
  volumes:
    - ~/.docker-history:/.bash_history
    - ~/.docker-bashrc:/.bashrc
    - ./:/var/www/virtual/webservice/current
No, you can't do this with docker-compose alone.
The /etc/hosts file resolves hostnames only, not host:port combinations; it can map api.local to 127.0.0.1, but an entry like
api.local 127.0.0.1:9180
won't work.
The only thing you can do is set up a reverse proxy (like nginx) on your host that listens for api.local and forwards the requests to localhost:9180.
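A minimal nginx sketch of that idea, using the ports from the question (you'd also point api.local and webservice.local at 127.0.0.1 in /etc/hosts; the file name is hypothetical):
# /etc/nginx/conf.d/local-dev.conf
server {
    listen 80;
    server_name api.local;
    location / {
        proxy_pass http://127.0.0.1:9180;   # published port of the api container
    }
}
server {
    listen 80;
    server_name webservice.local;
    location / {
        proxy_pass http://127.0.0.1:9080;   # published port of the webservice container
    }
}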
You should check out the dory project. By adding a VIRTUAL_HOST environment variable, the container becomes accessible by domain name. For example, if you set VIRTUAL_HOST=web.docker, you can reach the container at http://web.docker.
The project home page has more info. It's a young project but under active development. Support for macOS is also planned now that Docker for Mac and dlite have emerged/matured.
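Applied to the compose file above, a minimal sketch (the .docker names follow dory's VIRTUAL_HOST convention; pick whatever names you like):
api:
  build: .
  environment:
    - VIRTUAL_HOST=api.docker        # reachable at http://api.docker
webservice:
  image: registry.foo.bar:5000/webservice:latest
  environment:
    - VIRTUAL_HOST=webservice.docker # reachable at http://webservice.docker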
