Gitlab SSL Configuration for both Internal and External Access - docker

Looking for a little help here. I'm trying to bootstrap a small side business, and I have never been the DevOps guy. I use the web-hosted version of GitLab to store my codebase, but I am unable to use it as a registry for the Docker images I build from that code: the images are quite large, and pushes exceed the token expiration when the group gitlab-runner installed on my personal machine pushes back to the registry.

I have an extra machine sitting around, so I installed gitlab-ee and exposed it through a dynamic DNS service (No-IP), then mirrored the repositories I want to build images for on my locally hosted GitLab instance. At first I tried to use a runner on the same machine as my GitLab instance, but the jobs always failed because all available memory was consumed and the machine locked up; the GitLab docs pretty much say not to run the runner and the instance on the same machine. So I went back to the runner I originally used for the web-hosted instance, but I am having issues pushing to my local instance. When trying to push to my registry (through the DDNS URL), I end up getting a lot of this:
e4fdbd3bf512: Retrying in X seconds
And it eventually times out due to the job time limit or the token time limit. I am guessing this is due to my connectivity not being great. What I would like to do is have the runner (installed on a local machine) push to the local IP on my network, but I am unsure how to do this with the SSL setup. When trying to log in and push in my pipeline, I get the following error:
Error response from daemon: Get "https://xxx.xxx.xxx.xxx:xxxx/v2/": x509: cannot validate certificate for xxx.xxx.xxx.xxx because it doesn't contain any IP SANs
How do I correct this without affecting the https:// SSL that is already set up for accessing the instance from the DDNS name? Appreciate any help you can give me.
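For anyone hitting the same wall: the x509 error means the certificate only carries the DDNS hostname, so the Docker daemon refuses it when the login URL is a bare IP. Two common ways around it are (a) issuing a certificate whose subjectAltName list also contains the LAN IP, or (b) making the Docker daemon on the runner machine explicitly trust the existing certificate for that IP:port. A rough sketch of both, with placeholder values (gitlab.example.ddns.net, 192.168.1.50, port 5050) that you would swap for your own - how to wire a custom certificate into the omnibus configuration without disturbing the Let's Encrypt setup is a separate question:

# (a) Issue a certificate whose SANs cover both the DDNS name and the LAN IP
#     (the -addext flag needs OpenSSL 1.1.1 or newer).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout registry.key -out registry.crt \
  -subj "/CN=gitlab.example.ddns.net" \
  -addext "subjectAltName=DNS:gitlab.example.ddns.net,IP:192.168.1.50"

# (b) On the runner machine, trust that certificate for the registry endpoint;
#     the directory name must match the IP:port used in docker login.
sudo mkdir -p /etc/docker/certs.d/192.168.1.50:5050
sudo cp registry.crt /etc/docker/certs.d/192.168.1.50:5050/ca.crt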

I abandoned attempts at getting this to work. I ran through a bunch of scenarios of creating my own CA and trying to create certificates for the IP address and share them with the other machine; ultimately, GitLab obscures some things with Let's Encrypt. Funnily enough, it turned out to be just a connectivity issue where I was getting timeouts. I ended up hard-wiring both machines, got better throughput, and am now able to push ~6 GB Docker images up through the URL.

Related

Local IMAP server on docker

I want to set up a local IMAP server within my home network for archiving emails. The server does not need to be accessible via the internet, so I can do without secured access via SSL (if that makes it easier). I want to integrate the server into my current Docker setup, so it has to run within a Docker container.
I already tried the following containers:
https://hub.docker.com/r/blackflysolutions/dovecot
https://hub.docker.com/r/dovecot/dovecot
https://hub.docker.com/r/mailu/dovecot
https://hub.docker.com/r/mailcow/dovecot
https://hub.docker.com/r/eilandert/dovecot
But I could not get any of them to run. At the same time, none of them have a forum or anything where I can post a question. Two of them (mailu/dovecot and mailcow/dovecot) are part of a bigger mail server package, which I do not need; I only want an IMAP server to store some email locally. But I tried them anyway.
Does anyone know how to get any of those to run? Or can you suggest another stable Docker container solution?
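In case it is useful, here is a minimal sketch of how one might try the official dovecot/dovecot image from the command line. Port 143 and the /etc/dovecot/dovecot.conf path are the usual Dovecot defaults, and the host-side paths are made-up placeholders - check the image's README on Docker Hub for the exact volume locations it expects:

# Run the official image with a custom config and a host directory for mail storage.
docker run -d --name imap \
  -p 143:143 \
  -v /srv/dovecot/dovecot.conf:/etc/dovecot/dovecot.conf:ro \
  -v /srv/dovecot/mail:/srv/mail \
  dovecot/dovecot

# Show the effective configuration and check that the IMAP port answers.
docker exec imap doveconf -n
nc -v 127.0.0.1 143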

Azure Cloud Service microservice to K8 Migration

I am in the process of evaluating moving a very large Azure Cloud Service (Web Role) microservice architecture to AKS and have been working through the necessary code and build changes to support it.
In order to replicate the production environment locally for the developers, we run nginx on the host with SSL offloading and DNS (hosted in Azure) A records pointing to 127.0.0.1. When running in the Azure Emulator, the net effect is that a developer can both visit the various web front ends in their browser (i.e. https://myapp.mydomain.dev) and hit the various APIs in the solution (Web API 2) from Postman/cURL, etc.
Additionally, due to how the networking of the Azure Emulator works, the apps themselves can resolve each other through nginx on the host (i.e. the MVC app at https://myapp.mydomain.dev can obtain a token from the IdP web API at https://identity.mydomain.dev and then use that token at the API at https://api.mydomain.dev). This is the critical piece and the source of my question.
All attempts at getting the containers themselves to resolve each other the same way the host OS can (browser/Postman, SSL offloading via nginx) have failed. Many of the instructions out there are understandably for Linux containers, but adapting the various docker-compose networking settings to their Windows container equivalents has not yet yielded any success. In order to keep the development environments aligned with the real-world systems, which are tenantized and make use of the default mapping in nginx to catch all incoming traffic and route it to a specific user-facing app/container, it is not as simple as determining a "static" method of addressing these on startup; that is why the effort was put in to produce the development environments we have today.
Right now, when one service (container) attempts to communicate with another, it ultimately results in a resolution error, as all requests resolve to https://127.0.0.1 due to the DNS A records hosted in Azure for the domain. Since this migration will be a longer-term project, the environments need to co-exist, so changing the way DNS is resolved (real DNS A records pointing to 127.0.0.1, with the host running nginx and handling SSL offloading to the various web roles normally running in the Azure Emulator) is not an option.
Is there a way (with Windows containers) to either:
Allow the container to utilize nginx on the host OS transparently (app must still call the API at https://api.mydomain.dev), which will cause the traffic to be routed properly to the correct container/port defined in the docker-compose file?
OR
Run nginx in each container, allowing each container to resolve and route appropriately without knowing the IP of the other container, possibly through an alias added to the container's nginx.conf before the service starts?
The platform utilizes OAuth2/OIDC, and it is critical to maintain the full URL to the other services from the application's perspective. Beyond mirroring the production and sandbox environments, these URLs are used for redirect URL and post-logout redirect URL validation, among other things, so using "https://myContainerNameForOtherContainerAlias" is not a workable solution.
Will I have the same problem when setting up the AKS environment as well?
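One thing worth trying (posting it as a sketch rather than a confirmed fix): override name resolution per container so the dev hostnames point at the address where the host's nginx is reachable, instead of following the public A records to 127.0.0.1. The in-container calls then keep their full https:// URLs, and nginx on the host still does the SSL offload and routing. The image name and 10.0.75.1 below are placeholders for your own values (the host address visible from a Windows container depends on the network mode), and this assumes your Docker version honours --add-host for Windows containers; docker-compose has the equivalent extra_hosts setting:

# Map each dev hostname to the host machine's reachable address inside the container.
docker run -d --name myapp \
  --add-host myapp.mydomain.dev:10.0.75.1 \
  --add-host identity.mydomain.dev:10.0.75.1 \
  --add-host api.mydomain.dev:10.0.75.1 \
  myapp-image:dev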

Connecting to a remote ArangoDB dockerized server

I am a beginner with ArangoDB and I am trying to deploy my first project using it. The website is PHP-based. What I did is create an ArangoDB Docker container on DigitalOcean so that I can access it from the browser via the IPv4 address provided. Public access to port 8529 is enabled. Locally, I am able to modify the .config file to point to the corresponding IP, and I can painlessly retrieve data.
As a hosting provider I am using one.com. When uploading the same files that I am able to run locally on my own domain I get the following error:
["_database":"ArangoDBClient\Connection":private]=> string(7) "_system" } ArangoDBClient\ConnectException: cannot connect to endpoint 'tcp://xxx.xx.xxx.xxx:8529/': Connection timed out
I want to mention that I have also tried out ArangoOasis. No luck with it - I get the same error. I've been at it for quite a few weeks and could very much use some guidance - even a pointer on what to try next, as I am out of ideas and documentation to read.
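A quick way to narrow this down: a timeout (as opposed to "connection refused") usually means the traffic never reaches the droplet at all, which on shared hosting often comes down to outbound connections on non-standard ports such as 8529 being blocked. Testing the ArangoDB HTTP API directly from each environment should tell you which side is at fault; the address below is the same placeholder as in the error, and root/yourpassword is an assumed credential pair:

# Works from your local machine but times out from the hosting side?
# Then the provider is most likely blocking outbound traffic on 8529,
# and putting ArangoDB behind a proxy on 80/443 may be the way forward.
curl -u root:yourpassword http://xxx.xx.xxx.xxx:8529/_api/version

# Raw TCP reachability check with a 10-second timeout.
curl -m 10 -v telnet://xxx.xx.xxx.xxx:8529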

After upgrading from micro to small Amazon EC2 instance, I cannot deploy any new code

I upgraded from a micro instance to a small instance on Amazon EC2.
When I wanted to deploy new code, the deployment failed with:
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: ELASTIC_IP (Errno::ETIMEDOUT: Operation timed out - connect(2))
connection failed for: ELASTIC_IP (Errno::ETIMEDOUT: Operation timed out - connect(2))
So it looks like the upgrade ignored the old Elastic IP. Thus, I created a new Elastic IP, assigned it to the new instance, and this error was gone.
But when I access www.my_project.com, or 11.22.33.44 (the Elastic IP), or the public DNS (ec2-11-222-333-444.compute-1.amazonaws.com), I still get an empty page instead of my application.
The code is deployed via Capistrano without any error. On the old micro instance I used nginx - is that nginx also available on the new instance, or do I need to set it up/install it again?
How to make my app accessible?
Thank you
If I had to guess, it's that the SSH host key (not the EC2 key pair, but the key presented by the machine itself) has changed, and by default SSH on your local machine will block the connection for security reasons.
If you have a Mac/Linux machine you're using, you can look inside ~/.ssh/known_hosts and remove the entry for your Elastic IP, save the changes, and try to SSH into the machine again to confirm the connection.
Not sure of the right path in Windows, but you'd make the same changes.
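The same cleanup can be done in one step with ssh-keygen; ELASTIC_IP and the user name below are placeholders:

# Remove the stale host key for the Elastic IP, then reconnect to accept the new one.
ssh-keygen -R ELASTIC_IP
ssh user@ELASTIC_IP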
AWS needs some manual investigation when you end up with issues like this.
While you were upgrading your instance, which approach did you take?
Either you
created an AMI of the instance and its volumes and then launched the AMI as a fresh small instance, or
detached the EBS volume, attached it to the new small instance, and made the required configuration changes.
SSH into the instance and check the following (a few concrete commands are sketched below):
Whether you can manually deploy the code. If it's a git repo, you can pull and push changes directly.
Whether all the processes related to nginx, the database, etc. are running.
Where the default home page for the instance lands, e.g. the DocumentRoot in apache.conf.
I cannot rule out the possibility of a key mismatch, though the error doesn't point to that.
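To make those checks concrete, something along these lines once you are on the instance (service names and paths are the common defaults and may differ on your setup):

# Is nginx installed, and is its configuration valid?
which nginx && sudo nginx -t

# Are the web server, app server, and database processes running?
ps aux | grep -E 'nginx|unicorn|passenger|mysql' | grep -v grep

# Where does the default site point? Look for root/server_name in the enabled sites.
grep -RnE 'root|server_name' /etc/nginx/sites-enabled/ /etc/nginx/conf.d/ 2>/dev/null

# Does the app answer locally, bypassing DNS and the Elastic IP entirely?
curl -I http://127.0.0.1/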

scp files through gateway to remote machine

I can't figure out how to scp a file to another machine if there is a gateway connecting my client machine to the remote server. From my client machine I can connect to both the gateway and subsequently to the remote server using SSH without any problems.
When I try to scp my directory dir to the remote server, I have no clue how to move past the gateway, because my SSH connection is actually a two-step approach. Scp'ing dir to the gateway first fails with "Permission denied".
Something like
~$: scp -r /var/www/dir username@remotesrv.com:/var/www/dircp
doesn't work, and the only approach I found so far involves public/private keys. Is it only possible to copy files through a gateway with keys? And if so, can somebody tell me how to overcome the problem with copy-and-pasting into the terminal, which sometimes just won't work (I'm using Ubuntu 11.10)? I already installed AutoKey hoping to work around the buggy Ubuntu shortcuts by changing them to another hotkey, but the program crashes all the time.
I would appreciate your help in one way or another!
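For the record, OpenSSH can hop through the gateway in a single scp invocation, and keys are not strictly required - password authentication also works, you are just prompted for each hop. Recent clients support -J (ProxyJump); older ones, such as the OpenSSH shipped with Ubuntu 11.10, can use ProxyCommand with ssh -W. The gateway hostname and user below are placeholders:

# Newer OpenSSH: jump through the gateway with -J.
scp -r -J gatewayuser@gateway.example.com /var/www/dir username@remotesrv.com:/var/www/dircp

# Older OpenSSH: the same idea via ProxyCommand.
scp -r -o 'ProxyCommand ssh gatewayuser@gateway.example.com -W %h:%p' \
  /var/www/dir username@remotesrv.com:/var/www/dircp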
