I need help. How do I run Prometheus with additional arguments? I am on Debian 8, and I run, for example:
/etc/init.d/prometheus start - ok
/etc/init.d/prometheus --storage.local.memory-chunks=336342 start - doesn't work.
I don't know of any other way to solve this.
Thank you.
/etc/init.d/prometheus is a service init script rather than the Prometheus binary itself; typically you'd use it like:
sudo service prometheus start/stop/status/restart
To pass additional arguments to the daemon you're going to start, you can configure them in /etc/default/prometheus, or you can read the init script to see whether it honours any environment variables.
On Debian-based systems, you can usually add arguments in the service's file under /etc/default.
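For example, on a typical Debian packaging the defaults file is sourced by the init script and might look like this (the variable name ARGS is a common convention, but check your own init script to see which variable it actually reads):

# /etc/default/prometheus -- sourced by the init script on most Debian setups.
# The variable name (ARGS here) depends on your particular init script.
ARGS="--storage.local.memory-chunks=336342"

After editing the file, restart the service with sudo service prometheus restart so the new arguments take effect.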
I have a service that I originally had configured in my environment. I felt that the configuration was not very well documented and the service not easily deployable, so I decided to adopt Docker to solve that.
The service uses a Python script that has its own dependencies and can be called with a number of different arguments. Originally, the script's dependencies were installed system-wide, so I just hard-coded the path to the script in the service's code and it worked.
However, now that I'm trying to move to Docker I'm not sure how to deal with that. Some ideas:
Bind-mount the script directory, but then how do I make sure all its dependencies are available within the container environment?
Dockerize the Python script and add it as a service to the Docker Compose YAML. I'm really unsure about this one, as it's not really a service, just a utility script that exits as soon as it's done processing.
The Python script is also called with a combination of arguments. Not sure if it's relevant, but I noticed that when I create a new container from an image, I can't start the container again with different arguments; I have to re-create the container with different arguments. I really don't understand the idea behind this behaviour and would appreciate it if somebody could explain the logic behind it.
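To illustrate that last point (the image name and arguments below are just placeholders): the arguments become part of the container's definition when it is created, and docker start simply reuses them.

# Arguments are fixed when the container is created...
docker run --name job-a my-script-image --input /data/a.csv
# ...and docker start re-runs the container with the SAME arguments:
docker start -a job-a
# Different arguments require creating a new container:
docker rm job-a
docker run --name job-b my-script-image --input /data/b.csv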
I have a Node.js-based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.js module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in moving the hard-coded variables into the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
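For reference, a minimal sketch of what such a build script could look like, assuming the FOO/BAR variables and npm scripts from the question (this is not the author's actual file):

#!/bin/sh
# Sketch only: rebuild with paths taken from the pod's environment
# variables, then start serving. FOO/BAR and the npm script names are
# assumptions carried over from the question.
set -e
: "${FOO:?FOO must be set in the pod spec}"
: "${BAR:?BAR must be set in the pod spec}"
npm run-script build   # envify bakes FOO/BAR into the browserified bundle
exec npm start         # or however the application is normally served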
I'm working on some Ansible stuff that we have set up in a Docker container. When run from a Linux system it works great. When run from a Windows system I get the following error:
ERROR! Problem running vault password script /etc/ansible-deployment/secrets/vault-dev.txt ([Errno 8] Exec format error). If this is not a script, remove the executable bit from the file.
Basically, what this is saying is that the file is marked as executable. What I've noticed (and it hasn't been a huge problem until now) is that all files mounted into a Linux container from Windows are ALWAYS tagged with the executable attribute.
Is there any way to control/prevent this?
Did you try adding :ro at the end of the mounted path?
Something like this:
HOST:CONTAINER:ro
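For example (the paths and image name below are placeholders; :ro only makes the mount read-only, so test whether it actually affects the executable bit in your setup):

docker run -v //c/Users/you/ansible-deployment/secrets:/etc/ansible-deployment/secrets:ro my-ansible-image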
This is a limitation of the SMB-based approach that Docker for Windows uses for making host-mounted volumes work; see here.
To solve the executable bit error, I ended up passing Ansible a Python script as the --vault-password-file argument as a workaround; see here.
#!/usr/bin/env python
# Read the real vault password file and print it for Ansible to consume.
with open('PATH_TO_YOUR_VAULT_PASSWORD_FILE', 'r') as vault_password:
    print(vault_password.read())
Since the python script is executed in the container, the vault password file path needs to be accessible in the container - I'm mounting it as a volume, but you can also build it into your image. The latter is a security risk and is not recommended.
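Putting it together, the invocation might look something like this (the playbook name, paths, and image name are placeholders, not the author's actual setup):

# Mount the secrets directory so both the helper script and the real
# password file are reachable inside the container (placeholder paths).
docker run --rm \
  -v //c/Users/you/secrets:/etc/ansible-deployment/secrets \
  my-ansible-image \
  ansible-playbook site.yml \
    --vault-password-file /etc/ansible-deployment/secrets/get-vault-pass.py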
I'm trying to create a Docker container based on CentOS 7 that will host R, shiny-server, and rstudio-server, but I need to have systemd in order for the services to start. I can use the systemd-enabled CentOS image as a basis, but then I need to run the container in privileged mode and allow access to /sys/fs/cgroup on the host. I might be able to tolerate the less secure situation, but then I'm not able to share the container with users running Docker on Windows or Mac.
I found this question but it is 2 years old and doesn't seem to have any resolution.
Any tips or alternatives are appreciated.
UPDATE: SUCCESS!
Here's what I found: For shiny-server, I only needed to execute shiny-server with the appropriate parameters from the command line. I captured the appropriate call into a script file and call that using the final CMD line in my Dockerfile.
rstudio-server was more tricky. First, I needed to install initscripts to get the dependencies in place so that some of the rstudio scripts would work. After this, executing rstudio-server start would essentially do nothing and provide no error. I traced the call through the various links and found myself in /usr/lib/rstudio-server/bin/rstudio-server. The daemonCmd() function tests cat /proc/1/comm to determine how to start the server. For some reason it was failing, but looking at the script, it seems clear that it needs to execute /etc/init.d/rstudio-server start. If I do that manually or in a Docker CMD line, it seems to work.
I've taken those two CMD-line requirements and put them into a shell script that gets called from the CMD line in the Dockerfile.
A bit of a hack, but not bad. I'm happy to hear any other suggestions.
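For reference, a rough sketch of what that script could look like (the shiny-server parameters the author mentions are not shown in the answer, so they are omitted here):

#!/bin/sh
# Sketch only: start rstudio-server through its init script, then run
# shiny-server in the foreground so the container keeps running.
/etc/init.d/rstudio-server start
exec shiny-server    # add the appropriate shiny-server parameters here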
You don't necessarily need to use an init system like systemd.
Essentially, you need to start multiple services, and there are existing patterns for this. Check out this page about how to use supervisord to achieve the same thing: https://docs.docker.com/engine/admin/using_supervisord/
I often find myself needing to re-create a container with minor modifications to the arguments originally used with docker run (things like changing published ports, the network, or the memory limit).
Now I am making images and running them in place of old containers.
This works fine, but I don't always have the original docker run params saved, and sometimes (especially when there are a lot of things to define) it becomes a pain to recover them.
Is there any way to recover docker run arguments from existing container?
Sorry for being a couple of years late, but I had a similar question with no satisfying answer, so I still needed to find my own way out.
I've found two sources addressing the issue:
A gist
To run, save this to a file, e.g. run.tpl and do docker inspect --format "$(<run.tpl)" name_or_id_of_running_container
A docker image
Quick run:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
Both solutions are quite simple to use, but the second one failed to generate the command for an Nginx container because it did not manage to quote the arguments like this: "nginx" "-g" "daemon off;"
So I focused on the first solution, which is a Go template intended to feed the --format parameter of docker inspect. I liked it because it was simple, elegant, and needed no other tools.
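To illustrate the mechanism (this is a deliberately trimmed-down template, not the full gist, which also handles ports, volumes, restart policy, and so on):

# Prints a bare-bones docker run command with only the environment and image.
docker inspect --format 'docker run {{range .Config.Env}}-e "{{.}}" {{end}}{{.Config.Image}}' name_or_id_of_running_container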
I've made some improvements in my forked gist and notified the original author about it.
A couple of answers to this. First, run your containers using docker-compose; then you can just re-run the compose files and retain all your configuration. Obviously Compose is designed for multi-container applications, but it is massively underrated for single-container use cases with complex run arguments.
The second is to put your run command into a LABEL on the image. Take a look at Label Schema's docker.cmd etc. Then you can easily retrieve it from the image (or from your Dockerfile).
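For example, once an image carries Label Schema's org.label-schema.docker.cmd label, you can read the documented run command back out of it (the image name below is a placeholder):

docker inspect --format '{{ index .Config.Labels "org.label-schema.docker.cmd" }}' my-image:latest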
The best way to do this is not to type the commands manually. Put them into a shell script: a .sh file on Linux/Mac, or a .cmd file on Windows. Then you just run the script to create your container, you never have to worry about re-typing the commands and options, and you'll never get them wrong.
Personally, I write my scripts as "npm scripts" in my package.json file, but the same thing can be done with any tool that can run a command-line program with arguments.
I do this along with a few other tricks to make sure I never fail to build my images or run my containers. It makes life with Docker so much easier. :)
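For example, a hypothetical run script might look like this (image name, container name, and options are placeholders):

#!/bin/sh
# Placeholder names and options; keep this script in version control
# next to the Dockerfile so the run arguments are never lost.
docker rm -f my-app 2>/dev/null || true
docker run -d \
  --name my-app \
  --network my-network \
  -p 8080:8080 \
  -m 512m \
  my-image:latest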
You can use docker inspect to get the container's configuration. Reconstructing the docker run command from that can be somewhat tedious though.
Another option is to search your shell history using either history | grep "docker run" or ctrl+r (if you use bash). That way, you don't need to go out of your way to save the commands but can still recover them quickly.