I want to set up a local IMAP server within my home network for archiving emails. The server does not need to be accessible via the internet, so I can do without secured access via SSL (if that makes it easier). I want to integrate the server into my current Docker setup, so the server has to run within a Docker container.
I already tried the following containers:
https://hub.docker.com/r/blackflysolutions/dovecot
https://hub.docker.com/r/dovecot/dovecot
https://hub.docker.com/r/mailu/dovecot
https://hub.docker.com/r/mailcow/dovecot
https://hub.docker.com/r/eilandert/dovecot
But I could not get any of them to run, and none of them has a forum or anything where I can ask a question. Two of them (mailu/dovecot and mailcow/dovecot) are part of a bigger mail server package, which I do not need; I only want an IMAP server to store some email locally. But I tried them anyway.
Does anyone know how to get any of those to run? Or can you suggest another stable Docker container solution?
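For reference, this is roughly the kind of setup I am aiming for, shown as a minimal docker-compose sketch for the official dovecot/dovecot image (the config mount point and the mail volume path are guesses on my side, not verified):

services:
  imap:
    image: dovecot/dovecot:latest
    ports:
      - "143:143"   # plain IMAP is fine here, the server is LAN-only
    volumes:
      - ./dovecot.conf:/etc/dovecot/dovecot.conf   # assumed location for a custom config
      - maildata:/srv/mail                         # assumed mail storage path
volumes:
  maildata: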
I have a Java application as a JAR. This JAR runs fine from my machine, meaning it can send HTTP requests to and receive responses from an API endpoint (let's call this endpoint example.com/api/).
I then built a Docker image of this JAR application and tried to run the image as a container from Docker Desktop, but then I got this error:
[screenshot of the error omitted]
It seems like my application can't reach the URL from inside the Docker container. I tried to set the proxy in Settings -> Resources -> Proxies -> Manual proxy configuration and entered my company proxy, since I'm inside my company network, but it still doesn't work.
I tried to google this problem, but almost nothing shows up (and what does show up has little to do with my problem). Does anyone know what the problem is? What should I do?
First, check whether your container can communicate with the endpoint at all: ping or curl it from the container shell. If you use a proxy, set the environment variables in the container:
export http_proxy=http://server-ip:port
export https_proxy=https://server-ip:port
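Note that exporting these in an interactive shell only affects that shell session. To make the settings visible to your application's process, pass them when the container starts (the host, port and image name below are placeholders for your own values):

docker run -e http_proxy=http://proxy.example.com:8080 \
           -e https_proxy=http://proxy.example.com:8080 \
           your-image

Also keep in mind that the JVM does not read these environment variables by default; a Java application usually needs the standard proxy system properties instead, for example:

java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 \
     -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 \
     -jar app.jar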
I am currently running the latest versions of NiFi and PostgreSQL via docker compose.
As of the 1.14 update of NiFi, when you access the UI in a web browser it connects via HTTPS and asks for an ID and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for the generated credentials every time I access the UI. I know you can change the setting that keeps HTTPS as the default method, but I am not sure how to do that in a Docker container. Can anyone help me with this?
You could use an environment variable like AUTH, as described in the documentation.
You can find the full explanation here.
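For example, with the apache/nifi image you can fix the single-user credentials via environment variables instead of reading them out of nifi-app.log each time (a sketch; NiFi rejects passwords shorter than 12 characters):

services:
  nifi:
    image: apache/nifi:latest
    ports:
      - "8443:8443"
    environment:
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: ctsBtRBKHRAx69EqUghvvgEvjnaLjFEB   # must be >= 12 characters

If you would rather drop HTTPS entirely on a private network, the image also understands NIFI_WEB_HTTP_PORT, which should switch the UI back to plain, unauthenticated HTTP.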
We are using a Python-based solution that loads and stores files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3), and one containing the Python-based solution that exposes a REST API and uses those backend services.
When our "front-end" docker-compose group interacts with restheart this works fine (using the name of the restheart container as server host in http calls). When we are doing the same with scality/s3 server this does not work.
The interesting part is that we also have a Python test suite running on the host (Windows 10) that uses the scality/s3 server through the ports forwarded by Vagrant into its Docker container within the docker-compose group. There we used localhost as the endpoint_url and it works perfectly.
In the failing case (when the front-end web service wants to write to S3), the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling Scality with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # the exception is raised here
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. That means the host name your front end uses (here, s3server) should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it is just where your buckets will be created if you don't specify otherwise.
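For example, if the compose service is reachable as s3server (as in the endpoint_url above), the restEndpoints block of config.json would need an entry along these lines (us-east-1 is the usual out-of-the-box default; use whatever region name your deployment expects):

"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}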
In the future, I'd recommend opening an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure
I am currently trying to set up WSO2 API Manager on OpenShift. The problem I am running into is that when I try to browse the URL created by the OpenShift route, the application redirects me to the internally created IP address of the publisher app. However, when I launch the container without OpenShift, the application directs me to its intended API login page, which is the management console URL.
I suspect this has to do with how the embedded HAProxy load balancer is behaving. I was able to hack around it by changing the default ports to 443, but that created a new set of issues, because changing the ports also required hard-coding container host names in carbon.xml (see the sketch below). Hard-coding settings in the configuration files prevents me from scaling up the containers.
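For context, the hard-coding in question looks roughly like this in repository/conf/carbon.xml (the host names below are placeholders for the ones I had to bake in):

<!-- fixed host names baked into the image, which is what blocks scaling -->
<HostName>apim.internal.example</HostName>
<MgtHostName>apim-mgt.internal.example</MgtHostName>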
Any assistance on this will be much appreciated.