Cloudflare blocking docker push because of Log4j vulnerability

As part of a CI/CD pipeline, I'm trying to push a Docker image to a private registry that is behind Cloudflare WAF. Certain push attempts go fine, but for one specific image, I'm getting blocked by Cloudflare.
The details of the block are:
Rule ID: 100515B
Rule Message: Log4j Body
Method: PATCH
Path: /v2/teamname/imagename/blobs/uploads/8b9....
The command triggering this is: docker push myrepo.com/teamname/imagename --all-tags
As I understand it, Cloudflare decides that the body of the PATCH request Docker sends looks like an attempt to exploit the Log4j vulnerability. However, this is a legitimate call, and as I said, a previous docker push (for another image, but in the same CI/CD pipeline) didn't get blocked.
I first thought it might be the content of the image, but now I believe it must be something in the body of the HTTP request. I can't find much online about the combination of Docker, Cloudflare, and the Log4j vulnerability.
I'm wondering if anyone has seen this or knows how I can further troubleshoot this.
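One way to narrow this down (a troubleshooting sketch, not a confirmed fix): Cloudflare's Log4j rules look for JNDI-style lookup strings such as ${jndi: in request bodies, and the PATCH calls of a blob upload carry layer data, so either a file in the image really contains such a string, or the compressed bytes on the wire happen to match the signature. Exporting the image and scanning its layers rules the first case in or out; the image name below is the placeholder from the question:

docker save myrepo.com/teamname/imagename -o image.tar
mkdir layers && tar -xf image.tar -C layers
# Scan the extracted layer tarballs for literal JNDI lookup strings.
grep -rFl --binary-files=text '${jndi:' layers/ || echo "no jndi-style strings found"

If nothing turns up, the block is likely a false positive on binary data, and a Cloudflare exception that skips the Log4j managed rule for PATCH requests to your registry's /v2/ upload paths would be the usual workaround.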

Related

How to handle Keystores / Deployment with docker

I'm trying to use Ballerina to build a REST API that uses JWT authentication and is deployed with Docker.
I managed to build a simple service with a few endpoints and deploy a Docker image.
Now I want to add JWT authentication.
I tried using this example: https://ballerina.io/learn/by-example/secured-service-with-jwt-auth.html
(v1.2 and Swan Lake)
However, when I try to run the example I get:
"error: KeyStore File \bre\security\ballerinaKeystore.p12 not found" (I'm using Windows)
(I probably have to set my own keystore here for it to work, but the example does not say anything about that.)
EDIT: Nevermind... I'm an idiot. Forgot to pass --b7a.home=
But that still leaves my following questions regarding deployment with docker.
Also: I think I understand what a keystore is and why I need it. But how do I handle keystores during development or when deploying? It seems like a bad idea to push the keystore file to a repo. Where do I save it, and how do I deploy it? Or did I get something completely wrong here?
You could refer to the Sample with Docker and Sample with Kubernetes examples for how to deploy HTTPS services using the annotations.
To go without annotations, you will need to copy the keystores/truststores into the Docker image and point the configuration of the HTTP service's listener at that path. In production you will most probably have your own keystores and truststores, so it is always better to copy these into the image and run your services with them.
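As a rough illustration of that approach (a sketch only; every path, image name, and file name here is an assumption, not something from the Ballerina docs): keep the keystore out of the repository, have CI fetch it from a secret store into the build context, and copy it into the image at build time.

# Hypothetical Dockerfile; CI is assumed to have fetched the keystore
# from a secret store into ./secrets/ before the build, so the file
# never lives in source control.
FROM eclipse-temurin:17-jre
COPY secrets/ballerinaKeystore.p12 /home/app/security/ballerinaKeystore.p12
COPY target/bin/service.jar /home/app/service.jar
# The listener's keystore path in the Ballerina config must point at
# /home/app/security/ballerinaKeystore.p12.
CMD ["java", "-jar", "/home/app/service.jar"]

An alternative that avoids baking the secret into the image is mounting the keystore as a volume or as a Docker/Kubernetes secret at run time.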

Dynamically set proxy for docker pull

I'm trying to pull an image onto machines that sit behind different proxies.
Which proxy is the right one depends on the zone the machine doing the docker pull is in.
For the record, adding the one relevant proxy in /etc/systemd/system/docker.service.d/http-proxy.conf on the machine that pulls the image works fine.
But the image is supposed to be downloaded in multiple zones, which require different proxies depending on where the machine is.
I tried two things:
Passed the list of proxies in the http-proxy.conf, like this:
[Service]
Environment="HTTP_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="HTTPS_PROXY=http://proxy_1:port/,http://proxy_2:port/"
Environment="NO_PROXY=localhost"
Some machines require http://proxy_1:port/, and those work fine.
But on a machine that requires http://proxy_2:port/ to pull, it does not work: Docker does not fall back to another proxy (HTTP_PROXY/HTTPS_PROXY are each expected to hold a single proxy URL, not a comma-separated list). It returns this error:
Error response from daemon: Get HTTP:<ip>:<proxy_1> proxyconnect tcp: dial tcp <ip>:<proxy_1>: connect: no route to host
Of course, if I were to provide only the second, working proxy in the configuration, it would work.
Passing the proxy as a parameter to docker pull, as is possible with docker build/run, but that is not supported per the documentation.
I am looking for a way to set up proxies in such a way that either
Docker falls back to trying other provided alternate proxies
OR
I can provide proxy dynamically at the time of pull. (This will be part of an automated process which determines relevant proxy to pass.)
I do not want to constantly change the http-proxy file and restart docker for obvious reasons.
What are my options?
If you're using a sufficiently recent Docker client (17.07 or higher) you can have this configuration on the client side. Refer to the official documentation for details on the configuration.
You still need to have multiple configuration files for the various proxy configuration you need, but you can switch them without the need to restart the docker daemon.
In order to do something similar (not exactly proxy-related in my case) I use a shell script that wraps the invocation of the docker client, pointing it at a custom configuration file via the --config option.
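To make that concrete for the proxy case, a minimal sketch (the directory layout, zone names, and zone-detection command are all my assumptions): one client config directory per zone, selected at pull time.

# One config.json per zone, e.g. /opt/docker-configs/zone-a/config.json:
# {
#   "proxies": {
#     "default": {
#       "httpProxy":  "http://proxy_1:port/",
#       "httpsProxy": "http://proxy_1:port/",
#       "noProxy":    "localhost"
#     }
#   }
# }
# Wrapper: pick the config directory for the current zone and pull.
ZONE=$(cat /etc/zone-name)   # placeholder for your zone-detection logic
docker --config "/opt/docker-configs/$ZONE" pull "$1"

No daemon restart is involved, since only the client configuration changes per invocation.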

Docker Hub: Remote Build Trigger doesn't work

I am trying to trigger an image build via the remote build trigger URL.
I have followed the Docker Hub documentation, but the actual Docker Hub UI doesn't have the options described in the docs for remote build triggers.
Docker Hub interface as shown in the docs: (screenshot)
My Docker Hub interface: (screenshot)
I don't see a token option anywhere.
Also, I tried hitting the trigger URL directly in the browser, but that doesn't help either.
I guess I haven't understood this correctly, or there is some serious bug in Docker Hub's remote build triggers.
It seems you are following some unofficial documentation, which is outdated. Docker Hub redesigned this part some time ago. Now you don't need a token, because it's already included in the URL. But opening it in the browser is not enough; it must be a POST request, so you can try it from the command line, for example with curl:
curl -X POST "<the-trigger-url-here>"
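If you only want to trigger a specific branch or tag, the trigger endpoint also accepted a small JSON payload; the field names below are my recollection of the old automated-build docs, so treat them as an assumption and verify them for your account:

curl -X POST -H "Content-Type: application/json" \
  -d '{"source_type": "Branch", "source_name": "master"}' \
  "<the-trigger-url-here>"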

Graylog stream rule with application running on docker

I have an application running in a Docker container that logs to our Graylog server; however, the Graylog source field is actually the container ID:
source: 97c0212d3d75
Since the container ID changes frequently, I cannot apply the source to the stream rules.
I had a look at the message, and it seems there is not much I can rely on to create stream rules for this application.
Can someone please share some experience on this case? My problem here is that I cannot identify the application nor environment.
I am looking for ideas like:
Is there a way to make the container ID static? (Probably not.)
Is there a way to send more information to Graylog without making code changes, or without having to hard-code specific values?
Any better ideas?
I just realised I can set the hostname of my Docker containers, so just adding the following to my docker-compose file should work:
hostname: billing-rq-${ENV_NAME}
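In context, that looks roughly like this (the service name and image below are my placeholders, not from the original compose file); with the GELF logging driver, the container hostname is what ends up in Graylog's source field:

services:
  billing-rq:
    image: myteam/billing-rq:latest   # placeholder image
    hostname: billing-rq-${ENV_NAME}  # shows up as the Graylog "source"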

Is there any way to obtain detailed logging info when executing 'docker stack deploy'?

In Docker 17.03, when executing
docker stack deploy -c docker-compose.yml [stack-name]
the only info that is output is:
Creating network <stack-name>_myprivatenet
Creating service <stack-name>_mysql
Creating service <stack-name>_app
Is there a way to have Docker output more detailed info about what is happening during deployment?
For example, the following information would be extremely helpful:
which image (e.g. the 'mysql' image) is being downloaded from which registry (and the registry's info)
if, say, the 'app' image cannot be downloaded from its private registry, an error message should be output (e.g. due to incorrect or omitted credentials: registry login required)
Perhaps it could be provided in either of the following ways:
docker stack deploy --logs
docker stack log
Thanks!
docker stack logs is actually a requested feature in issue 31458
a request for a docker stack logs command which can show the logs for a docker stack, much like docker service logs works in 1.13.
docker-compose works similarly today, showing the interleaved logs for all containers deployed from a compose file.
This will be useful for troubleshooting any kind of errors that span across heterogeneous services.
This is still pending though, because, as Drew Erny (dperny) details:
there are some changes that have to be made to the API before we can pursue this, because right now we can only get the logs for 1 service at a time unless you make multiple calls (which is silly, because we can get the logs for multiple services in the same stream on swarmkit's side).
After I finish those API changes, this can be done entirely on the client side, and should be really straightforward. I don't know when the API changes will be in because I haven't started yet, but I can let you know as soon as I have them!
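Until that lands, a common client-side workaround (a sketch; the stack name is a placeholder, and it assumes a Docker version where docker service logs is available) is to inspect the stack's tasks for pull/auth errors and loop over its services for logs:

# Task states expose image-pull and credential errors per service.
docker stack ps --no-trunc mystack
# Interleave recent logs from every service in the stack.
for svc in $(docker stack services --format '{{.Name}}' mystack); do
  echo "=== $svc ==="
  docker service logs --tail 20 "$svc"
done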
