I'm trying to get up and running with MinIO server. I've read their "server config guide" here, but there's one thing I don't get.
The guide says that previously you could put a config.json in the MinIO home dir you specify, but that this is now deprecated; you're instead supposed to use their client ('mc') to update configs via admin commands?
This seems very cumbersome to me, although I understand that you can pass in an entire config.json file via the mc client.
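For reference, the round trip I mean looks something like this (the alias name and credentials are placeholders, and the exact subcommands may differ between mc versions):

# point mc at the running server
mc alias set myminio http://localhost:9000 ACCESS_KEY SECRET_KEY
# export the current config, edit it locally, then load it back
mc admin config get myminio > config.json
mc admin config set myminio < config.json
# restart so the new config takes effect
mc admin service restart myminio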
However, what if you have a Docker container and want to start it with a custom config? I don't understand how you'd do that; their "docker run" example only covers starting the server with environment variables for a custom username/password.
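That documented invocation is roughly the following (credentials and data path are placeholders; newer releases use MINIO_ROOT_USER/MINIO_ROOT_PASSWORD instead):

docker run -p 9000:9000 \
  -e MINIO_ACCESS_KEY=ACCESS_KEY \
  -e MINIO_SECRET_KEY=SECRET_KEY \
  -v /mnt/data:/data \
  minio/minio server /data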
To me it makes more sense to still have a config.json in the MinIO home dir; I don't totally get why they removed it.
If someone could help me understand the config better, I'd be a happier MinIO camper.
I have a Docker stack started with docker stack deploy --compose-file ...
and later edited manually via the Portainer UI.
I'd like to write a script that updates the docker image tag of one of the services.
To do that I need to "download" the latest docker-compose stack definition; however, I cannot find an appropriate docker command for it.
I know the best practice would be to stop changing the stack manually and rely on its definition stored in Git, but unfortunately that is not up to me.
Please point me to the appropriate docker command or confirm that it is not available.
As far as I know, there is no out-of-the-box docker command to get the compose file back from a running stack directly. You could try to parse the relevant information out of docker inspect and a few other commands that list/inspect the relevant objects.
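For example (stack and service names are placeholders):

# list the services that belong to the stack
docker stack services mystack
# dump the full definition of one service as JSON -- this is the data
# a compose file would have to be reconstructed from
docker service inspect mystack_web
# extract just the image reference currently in use
docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' mystack_web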
I once came across a similar situation where we had a running container but no run/compose definition, and we needed to update it. At the time (roughly a year ago) I found and used docker-autocompose, which did a very good job. We only had to verify and adjust a few things manually, but it got all the difficult parts with the run parameters done for us.
It could help you automate this in your case, if your compose configs are simple enough.
But if you want to fully automate it to mimic CD, then I would not recommend the approach above. In that case I would check whether you can use the Portainer API, as #LinFelix recommended. Or store compose files somewhere (in Git or on the server), prepared with parameters ($IMAGE_TAG), so you can generate a temporary compose file with the full configuration and then remove the current one, as sketched below.
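A minimal sketch of that parameterized approach, assuming envsubst (from gettext) is available and using placeholder file/stack names:

# docker-compose.tmpl.yml contains e.g.:  image: myrepo/web:${IMAGE_TAG}
export IMAGE_TAG=1.2.3
envsubst < docker-compose.tmpl.yml > docker-compose.generated.yml
docker stack deploy --compose-file docker-compose.generated.yml mystack
rm docker-compose.generated.yml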
I'm trying to use Ballerina to build a REST API that uses JWT authentication and is deployed with Docker.
I managed to build a simple service with a few endpoints and to deploy a Docker image.
Now I want to add JWT authentication.
I tried using this example: https://ballerina.io/learn/by-example/secured-service-with-jwt-auth.html
(v1.2 and Swan Lake)
However, when I try to run the example I get:
"error: KeyStore File \bre\security\ballerinaKeystore.p12 not found" (I'm using Windows)
(I probably have to set my own keystore here for it to work, but the example does not say anything about that.)
EDIT: Never mind, I'm an idiot: I forgot to pass --b7a.home=
But that still leaves the following questions regarding deployment with Docker.
Also: I (think I) understand what a keystore is and why I need it. But how do I handle keystores during development or when deploying? It seems like a bad idea to push the keystore file to a repo. Where do I save it, and how do I deploy it? Or did I get something completely wrong here?
You could refer to "Sample with Docker" and "Sample with Kubernetes" for how to deploy HTTPS services using the annotations.
To deploy without annotations, you will need to copy the keystores/truststores into the Docker image and point the configuration of the HTTP service's listener at that path. In production you will most probably have your own keystores and truststores, so it is always better to copy those into the image your services run from.
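If you would rather keep the keystore out of the repo entirely, an alternative sketch is to mount it at run time instead of baking it into the image (host path, container path, and image name are placeholders):

# never commit the keystore
echo "*.p12" >> .gitignore
# bind-mount it read-only where the listener config expects it
docker run -v /secure/host/path/ballerinaKeystore.p12:/home/ballerina/ballerinaKeystore.p12:ro \
  my-ballerina-service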
I am trying to set up a small development environment using Docker. The PhpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the author of this post, say it is not recommended. Others say to have a dedicated SSH Docker container, which I don't see how to fit into this environment.
I am already creating a user docker-user (check the repo here) for certain tasks like running Composer without root permissions. That user could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public SSH key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
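Concretely, that setup looks something like this (using the docker-user name from the question; adjust paths for your image):

# as the SSH user inside the container/system
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# as root: allow passwordless sudo for that user
echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user
chmod 440 /etc/sudoers.d/docker-user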
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.
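As an untested sketch, that gateway could be as simple as a forced command in the host user's authorized_keys, so that logging in with a given key drops you straight into the container (the container name devbox is a placeholder):

# ~/.ssh/authorized_keys on the Docker host
command="docker exec -i devbox /bin/bash",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...yourkey... dev@laptop

Note that an IDE's remote interpreter usually needs more than a bare shell (SFTP, for example), so this would only get you part of the way.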
I followed https://about.gitlab.com/aws/ and I could not visit the "Public IP" of the AWS image; it said "This site can't be reached". So I SSH'd into the instance and found there was no /etc/gitlab/gitlab.rb file, so I created one, pasted in the contents of https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/files/gitlab-config-template/gitlab.rb.template, and replaced external_url 'GENERATED_EXTERNAL_URL' with the public IP. It still doesn't work. Any tips?
Also, https://about.gitlab.com/aws/ says you should use a c4.large instance, but that sounds expensive. Can I just use a t2.micro?
I am used to GitHub, so I never worried about losing files, but now that I'm hosting it myself, what is the professional way to back up (say the EC2 instance crashes): through S3, by following http://docs.gitlab.com/omnibus/settings/backups.html?
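(For what it's worth, the command that page documents for Omnibus installs is the following; gitlab.rb also has backup_upload_connection settings for shipping the archives to S3:)

sudo gitlab-rake gitlab:backup:create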
Finally, the reason I need to host my own GitLab is that I need to run pre-receive Git hooks. Is there any easier way to run pre-receive hooks without subscribing to an expensive enterprise service?
I believe https://about.gitlab.com/aws/ is broken. It's better to set up a default Ubuntu instance from Amazon (you can pick t2.medium or c4.large) and then just follow the installation instructions on gitlab.com for that version of Ubuntu. It's just four steps (don't do the "from source" install).
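Roughly, those steps look like this; treat it as a sketch and check gitlab.com for the current commands (YOUR_PUBLIC_IP is a placeholder):

sudo apt-get update
sudo apt-get install -y curl openssh-server ca-certificates
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install -y gitlab-ce
# set: external_url 'http://YOUR_PUBLIC_IP' in /etc/gitlab/gitlab.rb, then
# apply it -- reconfigure must be rerun after every gitlab.rb edit
sudo gitlab-ctl reconfigure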
I'm new to RabbitMQ and, by association, new to Erlang. I'm running into a problem where I cannot start RabbitMQ because the 'home' location for .erlang.cookie has been changed. I've run the command
init:get_argument(home).
which returns
{ok,[["H:\\"]]}
This is an issue, as H: is a network drive I do not always have access to. I need to be able to change the 'home' directory to something local.
When I run
rabbitmqctl status
it gives me the following error:
{error_logger,{{2013,7,5},{14,47,10}},"Failed to create cookie file 'h:/.erlang.cookie': enoent",[]}
which again leads me to believe that there is an issue with the home argument. I need to be able to change this location to something local.
Versions:
Erlang R16B01 (32-bit)
RabbitMQ 3.1.3
Running on Windows 7
I have uninstalled and reinstalled multiple times hoping to resolve this. I am looking for a way to change the 'home' location in Erlang so RabbitMQ can start properly.
The solution I came up with was to not bother with the installed service. I use rabbitmq-server.bat to start the server, with SET HOMEDRIVE=C: added at the start of the file. I'm planning to run this from a parent service so that I can install it on servers.
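For anyone doing the same, the top of the edited file ends up looking roughly like this (the HOMEPATH value is an example; the rest of the stock script follows unchanged):

SET HOMEDRIVE=C:
SET HOMEPATH=\Users\rabbit
REM ...original contents of rabbitmq-server.bat continue here...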
A final note to the Erlang and RabbitMQ developers: using pre-existing environment variables for your own purposes is just wrong. You should create your own, or better yet put this stuff in a configuration file. Telling people to talk to their system administrators to change the HOMEDRIVE and APPDATA variables is arrogant, to say the least.
You need to set correct values for the HOMEDRIVE and HOMEPATH variables. These links should help:
Permanently Change Environment Variables in Windows
Overriding HOMEDRIVE and HOMEPATH as a Windows 7 user
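For example, from a command prompt (values are examples; as the articles above explain, Windows may re-derive these variables at logon, so verify in a fresh console afterwards):

setx HOMEDRIVE C:
setx HOMEPATH \Users\myuser
REM in a new console, check what Erlang will see:
echo %HOMEDRIVE%%HOMEPATH%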