I'm having problems getting RabbitMQ to reload my SSL certs when they get renewed. The core of my setup is as follows:
a VPS running the RabbitMQ community edition Docker container
SSL certs provisioned with Let's Encrypt on the VPS, with the certs made available to the Docker container via a mounted folder
auto-renewal configured by mounting a folder that gets bound to /plugins/rabbitmq_management-{rmq-version}/priv/www in the Docker container, and specifying --webroot-path as that folder. This allows renewal to work without shutting down the server, because the /priv/www folder is where Cowboy serves static assets from
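For context, the renewal is driven by something roughly like the following; the webroot path and domain here are placeholders, not my real values:
certbot certonly --webroot -w /srv/rmq-webroot -d rabbitmq.example.com   # initial issuance against the mounted folder
certbot renew --webroot -w /srv/rmq-webroot                              # later renewals, no server restart needed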
The problem now is that even when the renewal succeeds, the server does not seem to pick up the changed SSL certs unless the Docker container is restarted. The closest thing I've found to a possible solution is this recommendation to clear the certs cache, but when I try that command the error message I get is this: unable to connect to epmd (port 4369) on {cluster-name}.localdomain: nxdomain (non-existing domain). I haven't found a solution to this yet. I've seen this suggestion to check the contents of the /etc/hosts file, and the contents of mine differ from the contents shown in that blog article. Specifically, mine simply shows:
127.0.0.1 localhost
...
without the localhost.localdomain parts shown in the article. I'm stumped beyond this point. Any help will be much appreciated. Thank you!
Try running this command:
rabbitmqctl -n rabbit@localhost eval 'ssl:clear_pem_cache().'
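Since RabbitMQ here runs inside a Docker container, the command needs to be executed in that container; a minimal sketch, assuming the container is named rabbitmq:
docker exec rabbitmq rabbitmqctl -n rabbit@localhost eval 'ssl:clear_pem_cache().'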
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I'm trying to deploy GridGain Web Console 2020.03.01 on RHEL7 x86_64 with Docker, following the documentation here.
However, there is a 404 Not Found error on accessing the http://localhost:3000/swagger-ui.html page, which is used as a healthcheck. Backend logs show no errors. The last version I'm able to get containers running with is 2019.12.02 (which in fact refuses to show a connected cluster, but that's another issue). Starting with 2020.01.00, all backend healthchecks fail. That looks suspicious, considering that the 2020.01.00 release notes include updates of io.springfox and swagger-ui-dist.
Besides that, the 2020.03.01 release notes say that the Console's default port has changed to 8008, but the server still starts on 3000.
Anyone had any luck deploying dockerized Web Console?
The Web Console consists of a backend and a frontend. The backend is started on port 3000, which is printed in the log, while the frontend is indeed started on port 8008, and that is most probably the one you want to use.
The docker-compose.yml given on the documentation site maps the container's port 8008 to the host's port 80; feel free to replace that with whatever port you want.
Regarding the healthcheck, the /health endpoint is now changed to this
Swagger was removed in 2020.01.00 due to security concerns (the same GG-26726 issue mentioned in the release notes). You are right to be suspicious; I'll ask the right people to update the release notes and the docs. Sorry about the confusion, and thanks for pointing the issue out. Swagger was supposed to be an internal feature for the Web Console (WC) developer team only.
As you pointed out, starting with 2020.01.00 the Swagger-based health check won't work. Internally, the WC team uses dockerize to wait for backend to start, here's an example from our E2E test suite compose:
entrypoint: dockerize -wait http://backend:3000/health -timeout 2m -wait-retry-interval 5s node ./index.js --target=${TARGET:-on-premise}
This might work for you too, with some adaptation. You will most likely also have to remove the "healthcheck" sections from docker-compose.yml, or modify them, assuming the http://backend:3000/health URL can indeed serve as a direct replacement for the old http://localhost:3000/swagger-ui.html URL, which I am not sure about.
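For example, a check against the new endpoint could look roughly like this (assuming curl is available and the probe runs from a container on the same compose network):
curl -fsS http://backend:3000/health || exit 1   # non-zero exit signals an unhealthy backend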
I have a dockerized Django app (cookiecutter) and I want to configure nginx inside a Docker container so I can deploy it to an EC2 instance. For that I need SSL certificates.
The process of getting an SSL certificate with Let's Encrypt, the way it is recommended everywhere, seems to be a complicated task when you use Docker, nginx and EC2. I tried it and can't get past the error linked below.
So I was wondering if there is a way to configure nginx with an AWS certificate. I read that AWS certificates are free but can't be downloaded (https://serverfault.com/questions/822035/). So my question is threefold:
a) Can I configure nginx without https, get a free certificate for my AWS EC2 instance and then run my app on that server with https?
b) If the answer is yes, how could I configure my nginx server to serve only http for that?
c) If I buy a certificate from a CA, can I use it to configure my nginx, and will it be portable if I move my app (to DigitalOcean or Azure or something else)?
I am by no means an expert in most of these technologies and am fighting my way through a jungle here. I'm very grateful for any help, hints, tips, suggestions and guidance. Thanks very much in advance. I'll happily provide more code if needed.
Tutorial I tried but can't solve my error:
https://medium.com/@pentacent/nginx-and-lets-encrypt-with-docker-in-less-than-5-minutes-b4b8a60d3a71
Tutorial for nginx with docker and let's encrypt I wanted to follow if there is no easier and quicker solution: https://www.humankode.com/ssl/how-to-set-up-free-ssl-certificates-from-lets-encrypt-using-docker-and-nginx
Error with Let's Encrypt:
Timeout during connect (likely firewall problem) To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.
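The two things that error message asks to verify can be checked with standard tools; example.com stands in here for the real domain:
dig +short example.com        # should print the EC2 instance's public IP (the A record)
curl -I http://example.com/   # should reach nginx on port 80; a timeout usually means the security group blocks it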
I have a web app deployed in the cloud with SSL (using freeencrypt with nginx).
The app is dockerized.
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Sure, that's entirely possible. There's nothing particularly different about running it locally vs running it remotely: in both cases, you're still interacting with your web app with a browser over a network connection.
The only tricky bit may be in ensuring that you can continue to use the appropriate hostname so that your SSL certificate will validate correctly. The easiest way to do this is probably to modify your /etc/hosts file to map the hostname to the IP address of your webapp container. This will override DNS. Just remember to remove the modification when you're done testing, otherwise you won't be able to reach the remote site!
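A minimal sketch, assuming the certificate was issued for myapp.example.com (a placeholder) and the container's port is published on the local machine:
echo "127.0.0.1 myapp.example.com" | sudo tee -a /etc/hosts   # make the cert's hostname resolve to the local container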
Issue
So my problem is that I can't get the Rancher server to find the Rancher agent. I've looked at the Rancher Troubleshooting FAQs, but that hasn't helped with my issue. I'm using one server for both the Rancher server and the agent, and I'm setting CATTLE_AGENT_IP to the IP of the physical server.
I'm running Ubuntu 16.04 and docker 1.12.3.
Iptables
At first I thought it might be a firewall issue, but I've tried disabling it with no luck.
Logs
Rancher agent error log message
time="2016-10-27T11:56:50Z" level="info" msg="Host not registered yet. Sleeping 1 second and trying again." Attempt=5 reportedUuid="492dc65c-6359-4a40-b6e3-89c6da704ffb"
I feel like I've tried everything without any result. Anyone have an idea what could be wrong or how I could continue to troubleshoot the problem?
Are you reusing the host from a previous Rancher install?
If so, there are sometimes old credentials that get tried instead of the new ones for the host. The files are in /var/lib/rancher (they are dotfiles, so you need ls -a to see them).
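For example, to check whether such leftover files exist on the host (this only inspects, nothing is deleted):
ls -a /var/lib/rancher    # dotfiles from a previous install will show up here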
If you are using a self-signed SSL cert, it will fail to register if you are not bind-mounting the CA root cert. See the last section, "Adding Hosts", of http://docs.rancher.com/rancher/v1.2/en/installing-rancher/installing-server/basic-ssl-config/ for more info.
I solved my issue. The problem was a faulty CATTLE_AGENT_IP. Apparently you cannot have http:// before the IP address.
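In other words, a quick illustration (203.0.113.10 standing in for the physical server's IP):
# wrong: including the scheme makes registration fail
CATTLE_AGENT_IP="http://203.0.113.10"
# right: a bare IP address
CATTLE_AGENT_IP="203.0.113.10"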
I am trying to resolve this network issue, which I am facing multiple times while performing any docker command like "docker search ubuntu".
I get an error saying:
"Error response from daemon: server misbehaving.
Can anyone help me on this?
For those who have this problem, it is typically related to your DNS being unable to resolve index.docker.io. I had this issue today working from home, where my internet connection has a default DNS server that is notoriously flaky.
My dev environment is OSX and I easily solved the issue by changing my DNS servers in network settings to Google's DNS servers (8.8.8.8 and 8.8.4.4) and then restarting my docker host through docker-machine restart MACHINENAME
Faster/easier solution: log in to the docker machine and fix the DNS.
Turns out you don't have to go to all the trouble and waiting associated with restarting docker-machine. Just log in to the docker machine (i.e. docker-machine ssh default) and edit /etc/resolv.conf, adding the DNS settings from your host machine at the top of resolv.conf.
This is more or less what happens when you restart docker-machine and explains why some repositories are unreachable sometimes after you switch networks.
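A rough sketch of those two steps (8.8.8.8 from the earlier answer is used as the example resolver; the machine name may differ):
docker-machine ssh default        # log in to the docker machine VM
sudo vi /etc/resolv.conf          # put e.g. "nameserver 8.8.8.8" above the existing entries, then save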
I also had the exact same problem. Then I stopped the docker machine and started it again, and it worked.
Make sure that you are connected to the internet when you run this, as Docker needs to be able to reach the registry.
My issue was not solved by the answer stated here.
It was a problem with resolving the host; I was getting random timeout and "server misbehaving" errors.
You need to enable this through the configuration property experimentalHostResolver in %APPDATA%\rancher-desktop\settings.json. By default this property is set to false, meaning that DNS in Rancher Desktop is handled through dnsmasq. However, if the property is set to true, DNS lookups switch to the host-resolver.
NOTE: This feature can currently only be enabled on Windows, and it is an experimental feature.
You can take a look at the example settings.json fragment below as a reference; the experimentalHostResolver line is the relevant setting:
"kubernetes": {
  "experimentalHostResolver": true
}
Reference