mailu docker - how to include container id to let's encrypt certificate?

I've been searching the internet for hours now but have found nothing that suits my case.
I have a Mailu Docker setup installed on my server and I want to send emails from my Meteor application through this container.
I set my MAIL_URL variable like process.env.MAIL_URL = 'smtps://USERNAME:PASSWORD@DOCKER-IP:465'; and this works when I also use the global variable NODE_TLS_REJECT_UNAUTHORIZED = 0, but I don't want to use that, for security reasons.
When I send emails from my Meteor app on my laptop and use my email server mail.foo.com instead of the Docker IP, like smtps://USERNAME:PASSWORD@mail.foo.com:465, then it also works. So from outside I have no problem, but when I'm on the server I can't use localhost like smtps://USERNAME:PASSWORD@localhost:465 or smtps://USERNAME:PASSWORD@mail.foo.com:465.
As @natevw said in Node.js Hostname/IP doesn't match certificate's altnames:
It would be better to first diagnose why the certificate is not authorizing and see if that could be fixed instead.
I would say my problem is that the internal Docker IP address is not included in the certificate.
So the way I see it, I have two options:
I could somehow add the IP address to the certificate
I could somehow use localhost or the domain name instead of the internal container IP
But sadly I don't know how to achieve either of them.
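A minimal diagnostic sketch of the second option, assuming Node.js and that the Let's Encrypt certificate was issued for mail.foo.com: connect to the internal container IP, but tell TLS to verify the certificate against the real host name (172.17.0.2 below is only a placeholder for the container IP):

const tls = require('tls');

// Connect to the Mailu container by IP, but check the certificate against mail.foo.com.
const socket = tls.connect({
  host: '172.17.0.2',          // placeholder: internal Docker IP of the Mailu front container
  port: 465,
  servername: 'mail.foo.com'   // host name the certificate was issued for (also used for SNI)
}, () => {
  console.log('authorized:', socket.authorized, socket.authorizationError || '');
  socket.end();
});

If this prints authorized: true, the certificate itself is fine and the mismatch only comes from connecting by IP rather than by name.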
If you need some configs or something like that please comment and I will edit this post.
Thanks in advance,
Michael

Related

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which is run on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not try to call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: You can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine. This would work, but is pretty cumbersome. The other option is to directly address the appropriate container that is responsible for blob storage, by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. This is because the blob storage service is internally running on port 3000, externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this you should not get any more errors like ECONNREFUSED, since now the service can be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate doesn't match any more, since it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way to get around this is to disable any SSL checks (albeit this is also the most insecure way to handle this!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
Of course, what you should actually do is either use a custom certificate that also includes this domain name, or write a function that handles the certificate check and accepts the presented certificate in this specific case.
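As a rough sketch of such a call from the backend, assuming the request module, an app named myapp, and a placeholder upload path (this is not the documented depot API, just an illustration of the host, port, and strictSSL settings described above):

const fs = require('fs');
const request = require('request');

// POST a file to the blob storage container, addressed by its internal name and internal port.
fs.createReadStream('avatar.png').pipe(request.post({
  url: 'https://myapp-depot-file.wolkenkit.io:3000/api/v1/add-blob', // placeholder path
  strictSSL: false // insecure: skips the certificate check described above
}, (err, res, body) => {
  if (err) { return console.error(err); }
  console.log(res.statusCode, body);
}));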
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

I want to access Jira (Docker on Synology DS716+II) from LAN not only via IP_OF_SYNOLOGY:PORT but for example jira.synology.local

I am working with a Synology NAS, type DS716+II, DSM 6.1.4-15217 Update 2, on which runs Docker with a Jira container.
So what I want to do, and what I'm assigned to get working, is to access Jira's web interface with, let's say, jira.synology.local, with synology being the server name.
I read a lot about nginx and how it's built in since DSM 6.X, but I don't seem to get it to work properly at all.
I can access Jira's web interface from another machine within the LAN via IP_OF_SYNOLOGY:PORT, so when setting up a reverse proxy on the server it should be pointing to LOCALHOST:PORT, right? I have also tried using the actual IP instead of LOCALHOST, but without success.
I can access the interface of Synology itself not only via IP_OF_SYNOLOGY:PORT but also via DOMAINNAME.LOCAL if I set the domain name.
I really don't know what I'm missing and I tried everything I could think of. Does someone have experience with this?
If some information is missing, I'll gladly provide it. I'm fairly new to Synology, I have to admit. Thanks in advance!
So this has gotten zero responses, but I figured someone will probably have a similar "problem" in the future, so I will answer anyway.
I solved everything when I set up Active Directory. When installing AD, the DNS Server will automatically be installed too.
So we have JIRA running in a Docker container (on port, let's say, 12345) and I want to access it via the LAN on jira.domainname.
To do so we need to have DSM 6.X or higher installed (for nginx) and the DNS Server. That's it.
In the DNS-Server you will have to create a new master zone
and apply the following settings, where you can freely choose the domain name; the master DNS server must be the IP of your Synology station, since it functions as the DNS server
Then you want to edit the Resource Record
There you want to add an A Record Resource
and a CNAME Record Resource
So your Resource Records will look like this
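The original screenshot is not reproduced here; as a rough sketch (jira.domainname and 192.168.0.200 are the placeholders used below, and synology.domainname is an assumed host name for the A record), the records could look like this:

synology.domainname      A        192.168.0.200
jira.domainname          CNAME    synology.domainname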
Now the last step for setting up the DNS server is to tell it what to do if there is no specific record for a query. For example, if you want to open jira.domainname in your browser, there is a specific record for that and the DNS server knows how to direct it. But if you want to open, for example, google.com, the DNS server has no information on that and does not know what to do. So what we do now is tell the DNS server to forward the request if it has no records for it. To do so, enable the forwarders and put in the IP of your gateway/managed switch as primary and some public DNS server (8.8.8.8, one of Google's DNS servers) as secondary.
Please remember that jira.domainname shall always be the domainname you choose and 192.168.0.200 shall always be the IP of your synology station.
So now the DNS server is completely set up. Now we want to take advantage of the built-in reverse proxy (which runs on nginx in the background). To do so we navigate as seen here
and create a new reverse proxy rule
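Again the screenshot is missing here; as a sketch, and assuming Jira listens on port 12345 as above, the rule could look roughly like this:

Source:       HTTP   jira.domainname   port 80
Destination:  HTTP   localhost         port 12345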
So now the URLs can point to the same destination (your Synology, 192.168.0.200) but on different ports. That comes in very handy for some applications running in Docker.
So if you are running this in a home setup or a small office, you are probably working with a standard consumer router such as, for example, a FritzBox by AVM. Those are pretty good, but beware that some prohibit so-called DNS rebinding, which means that DNS responses pointing to a local IP are not allowed. Since in this setup the DNS server (your Synology) and the destination JIRA (also your Synology) are in the same LAN, we have to create an exception. Other routers probably don't suppress those requests, but if they do, exceptions are necessary.
So the next step is to tell your gateway or managed switch to use the newly set up DNS server as the primary DNS server. For FritzBoxes you can do so here
put in the IP of your DNS server and a secondary DNS server. This is important as a fallback in case your DNS server stops working at some point.
Now that everything is set up, I would recommend restarting the router/managed switch, the Synology, and the workstation you are working on, to flush all caches. After that you can simply open your browser and type in jira.domainname and JIRA should open up. You can also open a terminal/cmd and type in nslookup jira.domainname to see if it is being resolved correctly.
I really hope this will help someone at some point, and if there are any additional questions, please feel free to comment on this or write to me directly!

Error: invalid_request device_id and device_name are required for private IP

I was developing with the Google Drive API using [localhost:8080]. Then I wanted to test it in my local deployment sandbox, which has the IP address [192.168.1.1:8080]. Accordingly, I changed the client callback URL credential in the developer console. I am using OAuth2WebServerFlow to get the refresh token via user consent. Later I use the refresh token and OAuth2WebServerFlow to authenticate the user. But I was surprised to get the error:
That’s an error.
Error: invalid_request
device_id and device_name are required for private IP:
I don't know what is happening or how I can fix it. I don't understand what is going on.
An alternative to editing a hosts file is to use the "Magic DNS" service http://xip.io/ or http://nip.io/ (see edit)
xip.io is a magic domain name that provides wildcard DNS for any IP address. Say your LAN IP address is 10.0.0.1. Using xip.io,
10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1
With this service, you can specify a public-looking domain that resolves to a private address.
In the Console, if your Redirect URI was (what you wish you had anyways):
http://192.168.1.1:8080/auth/google_oath2/callback
Replace it with:
http://192.168.1.1.xip.io:8080/auth/google_oath2/callback
"Redirect URIs" does not seem to accept wildcards, so the entire private ip-xip.io needs to be specified in the console.
I have no affiliation with xip.io; I'm just a satisfied user.
2016 Edit: I've heard reports of instability with the xip.io DNS servers. There is a copy-cat service nip.io that behaves exactly the same as xip.io, but during July 2016, nip.io had a 100% response rate while xip.io did not.
Google will not accept a local (private) IP address when doing Oauth calls. My workaround was to add an entry in my Windows hosts file for the local IP:
\Windows\System32\drivers\etc\hosts
192.168.1.2 fakedomain.com
then register fakedomain.com with Google in their dev console. That appears as a "real" domain to them, but will still resolve in your browser to the local IP. I'm sure a similar approach on Mac or Linux would also work.
Edit: Only relevant when developing locally.
OK, I was having the same problem on my Mac. The following steps resolved the issue:
Go to your Google development console https://console.developers.google.com/project, choose credentials, and change the callback IP to a domain like http://myflask.com:5000/oauth2callback.
In my case I am using a Flask application, so the 5000 port is necessary.
Next, add to your /private/etc/hosts file a new entry mapping the above hostname to your IP, like so:
# (example IP)
172.1.1.1 myflask.com
Give Google a minute to update your credentials, and visit your site at http://myflask.com:5000
I got the same error until I changed it from an IP address to a domain name (192.168.1.113 to localhost in my case), so it looks like Google won't accept bare IP addresses.
Use a domain name for your sandbox, or set up a local domain server if you don't have one.
Worth noting that on a Mac you can do the same thing by editing as root:
/private/etc/hosts
Add a similar line as mentioned above
192.168.60.10 fakedomain.com
Modify your hosts file at \Windows\System32\drivers\etc\hosts
add "192.168.1.2 fakedomain.com" to the hosts file
restart Windows
In the Google console, update 192.168.1.2 to fakedomain.com

1and1.com to Heroku

Edit: Just to be clear, I actually only want to use a different web server, not a different mail server. The mail server has to stay the same.
I've got a 1and1 plan including webspace and email service. As 1and1 doesn't support Ruby, I created my website at Heroku.
I'm now trying to point my www.example.de url to my example.herokuapp.com domain. As this is the very first time I'm doing this, please alert me if I'm missing something important.
The website uses the 1and1 mail server. I found out that, due to that fact, I can't go the CNAME way but need an A record (or ANAME record?). I already pointed Heroku to the right URLs and now need to set up 1and1 and a DNS service.
I registered at dnsmadeeasy.com and created an ANAME Record:
Name: [left blank]
FQDN or IP: myapp.herokuapp.com
TTL: 1800 (I've got no idea what that is)
After creating that, dnsmadeeasy.com shows me several "System NS Records" which look like nameservers?
ns0.dnsmadeeasy.com
up to
ns4.dnsmadeeasy.com
As far as I understand, I now have to enter the nameservers above at 1and1.
My question now is: does this disable the mail server connection that works by default? If yes, how would I point back to it from dnsmadeeasy.com? Or is there a better way to do this?
It seems the easy way to do it now is with Point DNS. They have a free developer plan: https://devcenter.heroku.com/articles/pointdns
Enter the Point DNS servers into the 1and1 Domain Manager.
The problem was solved by just adding an A record at 1and1 pointing at Heroku. No nameservers or any extra service needed.
Here is a more detailed update:
First you have to go to the Heroku app settings. There you can add domains for your app, like www.mydomain.com.
Then you use a service to get the IP address of the server where your Heroku app instance is running. (Just get the IP address of your instance at yourapp.herokuapp.com.) Then you add this IP address as an A record to the 1and1 domain settings. Now you have to wait for the settings to propagate across the DNS servers.
Basically, you point your 1and1 domain to the server on which your Heroku app runs, and you inform your Heroku app that requests for a specific address, like www.mydomain.com, should reach it. That's how they find each other.
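As a small sketch of that lookup step in Node.js (yourapp is a placeholder; any DNS lookup tool works just as well):

const dns = require('dns');

// Resolve the current IP of the Heroku app; add the printed address as the A record target at 1and1.
dns.lookup('yourapp.herokuapp.com', (err, address) => {
  if (err) { return console.error(err); }
  console.log(address);
});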

How can I get Yahoo OAuth to work when I develop locally when my local domain is not registered with Yahoo?

I'm working on an app that uses Yahoo OAuth. The OAuth had been working fine but I just registered my domain with Yahoo and now it will not let me use the OAuth when I develop locally because
"Custom port is not allowed or the host is not registered with this consumer key."
The issue is because my call back URL is to a domain that is not registered with Yahoo (http://localhost:8080/welcome).
I'm not sure what to do. I'm also new to development so if you could be specific with suggestions that would be awesome! Any help is greatly appreciated.
Hi... Yahoo works on localhost :) What you have to do: while registering for a Yahoo consumer key and secret key, the registration page asks what type of application yours is. I guess it gives you two options, website and the other one as a stand-alone app. Choose stand-alone app, as in your case. Then it will give you a pair of keys, and it will work on localhost :) Enjoy!
It looks like Yahoo! doesn't want you to do this. Some answers from similar questions might be helpful (or not):
How do I develop against OAuth locally?
401 Unauthorized using Yahoo OAuth
Yahoo OAuth question
EDIT: more evidence Yahoo! doesn't support this: http://developer.yahoo.net/forum/?showtopic=6496&cookiecheckonly=1
I found the simplest solution was just to register for a separate key for my development environment. As long as you don't verify the domain for that key, you shouldn't hit any issues.
After many attempts, I too came to the conclusion that Yahoo's redirect_uri does not seem to work with ports other than 80.
The one solution that worked for me:
Download ngrok
Run the app and input ngrok http xxxx in the console - where xxxx is the port you are trying to access
The command will generate a http://xxxxxx.ngrok.io forwarding link that can be used for Yahoo's needs
Create a new Installed Application at https://developer.yahoo.com/apps/create/ and input http://xxxxxx.ngrok.io in the Callback Domain field.
Links should now work with this redirect_uri
Addressing Muhammad's comment on Vignes's answer here because I can't comment. You should be able to use a callback with a stand-alone app if you specify 127.0.0.1 as the callback domain. You may also need to change the port that your local server is listening on, because you cannot request that Yahoo use e.g. port 8000. Make sure your local server is listening on port 80.
As of writing, setting the Application Type to Installed Application and then leaving the Callback Domain blank will give you errors.
What works is configuring 127.0.0.1 as the Callback Domain for the app. This works regardless of whether you choose Web Application or Installed Application as the Application Type. However, Yahoo! does not accept callback URLs with ports in them, so you have to make sure your app listens on port 80 (or 443 if https) when running locally.
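As a minimal sketch, assuming a Node.js app and a hypothetical callback path /auth/yahoo/callback:

const http = require('http');

// Listen on port 80 so the callback URL registered with Yahoo needs no explicit port.
// (Binding to port 80 may require elevated privileges.)
http.createServer((req, res) => {
  res.end('callback received: ' + req.url);
}).listen(80, '127.0.0.1');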
Another less ideal option would be using some random non-existent domain like local.dev.env.com as Callback Domain and then editing your hosts file by adding this:
127.0.0.1 local.dev.env.com
This will resolve all requests for local.dev.env.com to 127.0.0.1.
