I installed an instance of Nexus Repository Manager 3 in Rancher and I'm trying to use an HTTPS port for a Docker hosted repository. This means I need to create a self-signed certificate to make it work. After a lot of research I ran into a problem: I can't find jetty-https.xml in /etc. The question is: does this file exist, or do I need to create it?
Sources:
https://support.sonatype.com/hc/en-us/articles/217542177?_ga=2.62350444.1144825414.1623920039-1845083682.1622816513
https://help.sonatype.com/repomanager3/system-configuration/configuring-ssl#ConfiguringSSL-HowtoEnabletheHTTPSConnector
After modifying the nexus.properties file in /nexus-data/etc/, uncommenting the nexus-args line, and restarting the container, jetty-https.xml appeared in $install-dir/etc/jetty/. If you check the logs, you can see the exact location of the Jetty config folder.
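For reference, a minimal sketch of that change, assuming the documented defaults from the linked Sonatype articles (port 8443 and the container name "nexus" are assumptions, not values from this setup):

```sh
# Append the HTTPS connector settings to nexus.properties in the data volume,
# then restart so Jetty picks them up and writes its config under etc/jetty/.
cat >> /nexus-data/etc/nexus.properties <<'EOF'
application-port-ssl=8443
nexus-args=${jetty.etc}/jetty.xml,${jetty.etc}/jetty-http.xml,${jetty.etc}/jetty-requestlog.xml,${jetty.etc}/jetty-https.xml
EOF
docker restart nexus
```

Remember to also publish the SSL port (e.g. -p 8443:8443) so the Docker repository is reachable over HTTPS.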
I'm using confluentinc/cp-server-connect (https://hub.docker.com/r/confluentinc/cp-server-connect) with the Elasticsearch sink connector (https://www.confluent.io/hub/confluentinc/kafka-connect-elasticsearch), which I added through the Dockerfile by rebuilding the image, and it works just fine. I'm configuring the connector using HTTP requests, as done in this tutorial: https://www.confluent.io/blog/kafka-elasticsearch-connector-tutorial/.
My problem is that I couldn't find a way to keep the connector configuration I set when stopping and removing the Docker container running this image.
I couldn't find any mention of persisting configuration in the Docker image's documentation on Docker Hub or by googling. I also tried manually searching the image for where this configuration might be stored, but had no luck. Where should I point a Docker volume to save this configuration? Or is the configuration kept somewhere else, like in a specific Kafka topic?
Yes, the configurations are kept in a Kafka topic. The Connect container doesn't store them.
Therefore, don't restart the Kafka (or Zookeeper) container(s), and your configs will be maintained.
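In distributed mode those topics are set via the image's environment variables. A minimal sketch of the relevant settings, assuming the usual Confluent variable names (the topic names below are illustrative, and other required settings such as converters are omitted):

```sh
# Hedged sketch: Connect persists connector configs, offsets, and status
# in these Kafka topics, so the Connect container itself is stateless.
docker run -d --name connect \
  -e CONNECT_BOOTSTRAP_SERVERS=kafka:9092 \
  -e CONNECT_GROUP_ID=connect-cluster \
  -e CONNECT_CONFIG_STORAGE_TOPIC=connect-configs \
  -e CONNECT_OFFSET_STORAGE_TOPIC=connect-offsets \
  -e CONNECT_STATUS_STORAGE_TOPIC=connect-status \
  confluentinc/cp-server-connect
```

As long as the broker's data survives (e.g. its volume persists), you can remove and recreate the Connect container freely and the connector configuration will still be there.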
I'm trying to use Ballerina to build a REST API that uses JWT authentication and is deployed with Docker.
I managed building a simple service with a few endpoints and deploying a Docker image.
Now I want to add JWT authentication.
I tried using this example: https://ballerina.io/learn/by-example/secured-service-with-jwt-auth.html
(v1.2 and Swan Lake)
However, when I try to run the example I get:
"error: KeyStore File \bre\security\ballerinaKeystore.p12 not found" (I'm using Windows)
(I probably have to set my own keystore here for it to work, but the example does not say anything about that.)
EDIT: Nevermind... I'm an idiot. Forgot to pass --b7a.home=
But that still leaves the following questions regarding deployment with Docker.
Also: (I think) I understand what a keystore is and why I need it. But how do I handle keystores during development or when deploying? It seems like a bad idea to push the keystore file to a repo. Where do I save it, and how do I deploy it? Or did I get something completely wrong here?
You could refer to the Sample with Docker and Sample with Kubernetes examples for how to deploy HTTPS services using the annotations.
To go without annotations, you will need to copy the keystores/truststores into the Docker image and point the HTTP service listener's configuration at that path. In production you will most probably have your own keystores and truststores, so it is better to copy those into the image and run your services with them; a sketch follows below.
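A rough sketch of that approach — the base image tag, file names, and paths are all assumptions for illustration, not values from the Ballerina docs:

```sh
# Hypothetical Dockerfile: bake the keystore into the image and point the
# listener config at its in-container path (all names/paths are assumptions).
cat > Dockerfile <<'EOF'
FROM ballerina/ballerina:1.2.0
COPY security/ballerinaKeystore.p12 /home/ballerina/security/ballerinaKeystore.p12
COPY secured_service.bal /home/ballerina/
CMD ["ballerina", "run", "secured_service.bal"]
EOF
docker build -t secured-service .
```

To keep the keystore itself out of the repo, one option is to store it in a secured location (a secrets manager or a restricted build context) and copy it in only at build time; another is to skip baking it in entirely and mount it at runtime, e.g. docker run -v "$PWD"/security:/home/ballerina/security secured-service.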
I've cloned a new project on OSX, inside which I have a composer.json with a reference to a private repository.
I want to use the official Docker Composer image to install all dependencies. Everything works, but the problem occurs with the private repository because, of course, the Composer container doesn't have an SSH key installed in it. Reasonable.
Could somebody explain to me what would be the 'correct' way to install the PHP dependency from my private repo?
I've read on the official docs (https://docs.docker.com/samples/library/composer/), where they say:
When you need to access private repositories, you will either need to share your configured credentials, or mount your ssh-agent socket inside the running container:
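On Linux, those two suggestions look roughly like this (a sketch; the token is a placeholder):

```sh
# Option 1 (from the docs): forward the host's ssh-agent socket into the container.
docker run --rm -it \
  -v "$PWD":/app \
  -v "$SSH_AUTH_SOCK":/ssh-auth.sock \
  -e SSH_AUTH_SOCK=/ssh-auth.sock \
  composer install

# Option 2: share credentials instead, via Composer's COMPOSER_AUTH variable.
docker run --rm -it -v "$PWD":/app \
  -e COMPOSER_AUTH='{"github-oauth":{"github.com":"<token>"}}' \
  composer install
```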
I'm on OSX, so the mounting part won't work, as I found out during my research.
I also read that the 'Docker' way is to not have the SSH part on the composer image. In other words, only one process per container.
So another way I found is to run a separate SSH server, but I'm not sure how that actually works. Supposedly I would connect through it into the Composer container?
If anyone had some experience with this kind of problem, please share your thoughts.
I'm sorry if I left something out, if I did, please let me know.
Thank you!
I created an app which consists of many components, so I use docker-compose.
I published all my images to my private repository (but I also use public repos from other providers).
If I have many customers, how can they receive my full app?
I could send them my docker-compose.yml file by email, or if I have access to the servers, I could scp the .yml file.
But is there another solution to provide my full app without scp'ing a yml file?
Edit:
So I just read about docker-machine. This looks good, and I already linked it with an Azure subscription.
Now what's the easiest way to deploy a new VM with my Docker application? Do I still have to scp my .yml file, ssh into the machine, and start docker-compose? Or can I tell it to use a specific .yml during VM creation and run it automatically?
There is no official distribution system specifically for Compose files, but there are many options.
The easiest option would be to host the Compose file on a website. You could even use GitHub or GitHub Pages. Once it's hosted by an HTTP server, you can curl it to download it.
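For example (the URL is a placeholder):

```sh
# Download a hosted Compose file and bring the app up.
curl -fsSL https://example.com/myapp/docker-compose.yml -o docker-compose.yml
docker-compose up -d
```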
There is also:
composehub, a community project that acts as a package manager for Compose files
Some related issues: #1597, #3098, #1818
The experimental DAB feature in Docker
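As for the docker-machine edit: you don't have to scp the .yml or ssh into the VM. docker-machine can point your local Docker client at the remote daemon, and Compose then deploys your local .yml against it. A sketch, where the machine name is a placeholder and only the basic documented Azure driver flag is shown:

```sh
# Create the VM (fill in your own subscription ID; other driver flags omitted).
docker-machine create --driver azure --azure-subscription-id <sub-id> mymachine

# Point the local docker/docker-compose CLI at the remote daemon...
eval "$(docker-machine env mymachine)"

# ...then deploy the local docker-compose.yml to the VM, no scp or ssh needed.
docker-compose up -d
```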
I followed https://about.gitlab.com/aws/ and could not visit the "Public IP" of the AWS image; it said "This site can’t be reached". So I ssh'd into the instance and found there was no /etc/gitlab/gitlab.rb file, so I created one, pasted in the contents of https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template, and replaced external_url 'GENERATED_EXTERNAL_URL' with the public IP. It still doesn't work. Any tips?
Also on https://about.gitlab.com/aws/ it says you should use a c4.large instance but that sounds expensive -- can I just use a t2.micro?
I am used to using GitHub, so I was never worried about losing files, but now that I'm hosting it myself, what is the professional way to back up (in case the EC2 instance crashes)? Through S3, by following http://docs.gitlab.com/omnibus/settings/backups.html?
Finally, the reason I need to host my own GitLab is that I need to run pre-receive git hooks. Is there an easier way to run pre-receive hooks without subscribing to an expensive enterprise service?
I believe https://about.gitlab.com/aws/ is broken. It's better to set up the default Ubuntu instance provided by Amazon (you can pick t2.medium or c4.large) and then just follow the installation instructions on gitlab.com for that version of Ubuntu. It's just four steps (don't do the "from source" install).
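From memory, the four steps for Ubuntu look roughly like this — check gitlab.com's install page for the current commands:

```sh
# 1. Install dependencies.
sudo apt-get update && sudo apt-get install -y curl openssh-server ca-certificates
# 2. Add the GitLab CE package repository.
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
# 3. Install the package.
sudo apt-get install -y gitlab-ce
# 4. Set external_url in /etc/gitlab/gitlab.rb to your public IP/domain, then:
sudo gitlab-ctl reconfigure
```

For the backup question, the Omnibus package ships gitlab-rake gitlab:backup:create, and the linked backups doc shows how to point it at S3.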