I want to set up a local IMAP server within my home network for archiving emails. The server does not need to be accessible via the internet, so I can forgo secured access via SSL (if that makes things easier). I want to integrate the server into my current Docker setup, so it has to run within a Docker container.
I already tried the following containers:
https://hub.docker.com/r/blackflysolutions/dovecot
https://hub.docker.com/r/dovecot/dovecot
https://hub.docker.com/r/mailu/dovecot
https://hub.docker.com/r/mailcow/dovecot
https://hub.docker.com/r/eilandert/dovecot
But I could not get any of them to run, and none of them has a forum or similar place where I can ask a question. Two of them (mailu/dovecot and mailcow/dovecot) are part of a bigger mail server package, which I do not need; I only want an IMAP server to store some email locally. But I tried them anyway.
Does anyone know how to get any of those to run? Or can you suggest another stable Docker container solution?
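For reference, this is roughly the kind of invocation I have been trying with the official image; the config mount point and mail volume are guesses on my part from the image documentation, so please correct me if they are wrong:

# Attempted sketch: LAN-only IMAP (no SSL) with the official dovecot/dovecot image.
# The /etc/dovecot/dovecot.conf path and the mail location are assumptions taken
# from the image README; adjust them to whatever the image actually expects.
docker run -d --name imap-archive \
  -p 143:143 \
  -v "$PWD/dovecot.conf:/etc/dovecot/dovecot.conf:ro" \
  -v imap-mail:/srv/mail \
  dovecot/dovecot:latest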
Related
Looking for a little help here. I am trying to bootstrap a small side business, and I have never been the DevOps guy. I use the web-hosted version of GitLab to store my codebase, but I am unable to use it as a registry for the Docker images I am building from that code: the images I generate are quite large, and pushes outlast the token expiration when I push back to the registry from the group gitlab-runner installed on my personal machine.

I have an extra machine sitting around, so I installed gitlab-ee on it and exposed it through a dynamic DNS service (NoIP). I then mirrored the repositories that I want to build images for on my locally hosted GitLab instance. At first I tried to use a runner on the same machine as my GitLab instance, but jobs always failed because all available memory was consumed and the machine locked up; all in all, the GitLab docs pretty much say not to run the runner and the instance on the same machine. So I went back to the runner I originally used for the web-hosted instance, but I am having issues pushing to my local instance. When pushing to my repository (through the DDNS URL), I end up getting a lot of this:
e4fdbd3bf512: Retrying in X seconds
And it eventually times out due to the job time limit or the token time limit. I am guessing this is because my connectivity is not great. What I would like to do is have the runner (installed on a local machine) push to the local IP on my network, but I am unsure how to do this with the SSL setup. When trying to log in and push in my pipeline, I get the following error:
Error response from daemon: Get "https://xxx.xxx.xxx.xxx:xxxx/v2/": x509: cannot validate certificate for xxx.xxx.xxx.xxx because it doesn't contain any IP SANs
How do I correct this without affecting the https:// SSL that is already set up for accessing the instance through the DDNS name? I appreciate any help you can give me.
I abandoned my attempts at getting this to work. I ran through a bunch of scenarios of creating my own CA and trying to create certificates for the IP address and share them with the other machine; ultimately, GitLab obscures some things with Let's Encrypt. Funnily enough, it turned out to be just a connectivity issue where I was getting timeouts. I ended up hard-wiring both machines and got better throughput, and I am now able to push ~6 GB Docker images up through the URL.
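For anyone who still wants to try the certificate route I mention above, this is roughly what I was attempting (an untested sketch; the IP, port, and filenames are placeholders for my setup):

# Self-signed certificate that includes the registry's IP address as a SAN
# (requires OpenSSL 1.1.1+ for -addext; all values below are placeholders).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=gitlab.local" \
  -addext "subjectAltName=IP:192.168.1.50,DNS:gitlab.local"

# The Docker client on the runner machine only trusts the registry once the
# certificate is placed under /etc/docker/certs.d/<registry-host>:<port>/ca.crt
sudo mkdir -p /etc/docker/certs.d/192.168.1.50:5050
sudo cp registry.crt /etc/docker/certs.d/192.168.1.50:5050/ca.crt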
I have a Java application packaged as a JAR. It runs fine on my machine, meaning it can send and receive HTTP requests to and from an API endpoint (let's call this endpoint example.com/api/).
I then built a Docker image of this JAR application and tried to run the image as a container from Docker Desktop, but I got this error:
[screenshot of the error I got]
It seems like my application can't reach the URL from inside the Docker container. I tried to set the proxy in Settings -> Resources -> Proxies -> Manual proxy configuration and entered my company proxy, since I'm inside my company network, but it still doesn't work.
I tried to google this problem, but almost nothing shows up (and what does show up has little correlation with my problem). Does anyone know what the problem might be? What should I do?
First, check whether your container is able to communicate with the endpoint: ping or curl it from the container shell. If you use a proxy, set the environment variables in the container:
export http_proxy=http://server-ip:port
export https_proxy=https://server-ip:port
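Those exports only last for the current shell session, so for a containerized app it is usually easier to pass them when the container starts, something like this (the proxy address and image name are placeholders):

# Pass the company proxy into the container at startup.
docker run -d \
  -e http_proxy=http://server-ip:port \
  -e https_proxy=https://server-ip:port \
  -e no_proxy=localhost,127.0.0.1 \
  your-java-app-image

Also note that, depending on the HTTP client, a Java application may ignore these environment variables and need the proxy passed as JVM system properties instead (-Dhttp.proxyHost / -Dhttp.proxyPort and their https equivalents).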
I am currently running the latest versions of NiFi and PostgreSQL via Docker Compose.
As of the 1.14 version update of NiFi, when you access the UI on the web it connects via HTTPS and asks you for an ID and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for the generated credentials every time I access the UI. I know you can change the setting that keeps HTTPS as the default access method, but I am not sure how to do that in a Docker container. Can anyone help me with this?
You could use one of the environment variables described in the documentation, such as AUTH.
You can find the full explanation here.
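For example, something along these lines sets fixed login credentials instead of the generated ones; the variable names are the ones I believe the apache/nifi image documents for its single-user provider, so double-check them against the image README:

# Hedged sketch: set the single-user credentials via environment variables so you
# don't have to fish the generated ones out of nifi-app.log on every start.
# NiFi's single-user provider requires a password of at least 12 characters.
docker run -d --name nifi \
  -p 8443:8443 \
  -e SINGLE_USER_CREDENTIALS_USERNAME=admin \
  -e SINGLE_USER_CREDENTIALS_PASSWORD=changeMe123456 \
  apache/nifi:latest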
I am a beginner with ArangoDB and I am trying to deploy my first project using it. The website is PHP based. What I did is create an ArangoDB Docker container on DigitalOcean so that I can access it from the browser via the IPv4 address provided. Public access to port 8529 is enabled. Locally, I am able to modify the .config file to point to the corresponding IP, and I can painlessly retrieve data.
As a hosting provider I am using one.com. When I upload the same files that run fine locally to my own domain, I get the following error:
["_database":"ArangoDBClient\Connection":private]=> string(7) "_system" } ArangoDBClient\ConnectException: cannot connect to endpoint 'tcp://xxx.xx.xxx.xxx:8529/': Connection timed out
I want to mention that I have also tried ArangoDB Oasis, with no luck; I get the same error. I have been at it for quite a few weeks and would very much appreciate some guidance, even just on what to try next, as I am out of ideas and documentation to read.
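One thing that might help narrow it down is checking whether the endpoint is reachable at all from the hosting environment (if one.com gives you any way to run a command or a small script there). For example, from any shell, with the same placeholder IP as above:

# Reachability check against ArangoDB's HTTP API: any HTTP response (even 401)
# means port 8529 is reachable, while a timeout points to outbound connections
# on that port being blocked by the hosting provider.
curl -v --connect-timeout 10 http://xxx.xx.xxx.xxx:8529/_api/version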
I have a web app deployed in the cloud with SSL (using Let's Encrypt with nginx).
The app is dockerized.
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Is it possible for me to run it on localhost just by copying it and running docker-compose up?
Sure, that's entirely possible. There's nothing particularly different about running it locally vs running it remotely: in both cases, you're still interacting with your web app with a browser over a network connection.
The only tricky bit may be ensuring that you can continue to use the appropriate hostname so that your SSL certificate will validate correctly. The easiest way to do this is probably to modify your /etc/hosts file to map the hostname to the IP address of your webapp container. This will override DNS. Just remember to remove the modification when you're done testing, otherwise you won't be able to reach the remote site!
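A minimal example of that override, using a placeholder hostname and assuming the container publishes HTTPS on the local machine:

# Point the site's hostname at the local machine so the certificate's hostname
# still matches; remove this line from /etc/hosts again after testing.
echo "127.0.0.1  myapp.example.com" | sudo tee -a /etc/hosts

# Then verify that the certificate validates against the local container:
curl -v https://myapp.example.com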