How to access JIRA Software files in a docker image? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
I deployed Jira in a Docker container.
docker run --detach --publish 8080:8080 cptactionhank/atlassian-jira-software:latest
I am accessing the files using:
docker exec -t -i containerid /bin/bash
But I am not able to find the files that need to be edited, supposedly for creating a maintenance splash page.
Ref : https://confluence.atlassian.com/confkb/how-to-create-a-maintenance-splash-page-290751207.html

According to the document you linked and the installation directory location you mentioned, you need to edit the /opt/atlassian/jira/conf/server.xml file to change the context section, and then edit the /opt/atlassian/jira/conf/web.xml file to add the new error page.
Please note that you have to access those files via /bin/bash from Docker, as root:
sudo docker exec -i -t --user root containerid /bin/bash
This also has some good information.
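For reference, the web.xml part of that guide amounts to adding an error-page mapping; a rough sketch (the error code and location here are examples, so adjust them to match the linked Atlassian instructions):

```xml
<!-- /opt/atlassian/jira/conf/web.xml, inside the <web-app> element -->
<error-page>
    <error-code>503</error-code>
    <location>/maintenance.html</location>
</error-page>
```

After editing, restart the container (or Tomcat inside it) so the change takes effect.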

Related

Why do I need to specify `/bin/bash -c` when I already have SHELL ["bash", "-c"] in the Dockerfile? [closed]

Closed 1 year ago.
According to here, in order to run multiple commands with docker run, I need:
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
However, I can specify the default shell in the Dockerfile with the SHELL instruction. So given that I can already specify the default shell of a container, why is it necessary to specify it again with /bin/bash -c when running the container? What was the rationale behind not automatically using the shell specified by the SHELL instruction?
The SHELL instruction in the Dockerfile is only used for the Dockerfile's RUN instructions (which are executed during image creation).
docker run is a completely different command: it creates a container and runs a command in it. The command you specify after the image name depends on the way the image was built. Some images allow running an arbitrary command, including /bin/bash (if it is installed).
The default command is specified in your Dockerfile with the CMD instruction; it defaults to empty.
You can also specify an ENTRYPOINT instruction, in which case the CMD is passed to it as arguments. A shell-form CMD is run via /bin/sh -c.
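To make the split concrete, here is a minimal Dockerfile sketch (the base image and commands are arbitrary examples):

```dockerfile
FROM ubuntu:22.04
# SHELL only changes how the RUN lines below are executed during the build
SHELL ["bash", "-c"]
RUN echo "this line is run as: bash -c '...'"
# CMD is what `docker run <image>` executes by default; passing
# `/bin/bash -c "..."` on the command line simply replaces this CMD
CMD ["python", "a.py"]
```

So at run time, SHELL plays no role: whatever you pass after the image name overrides CMD verbatim.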

Minikube and Docker - conflict? [closed]

Closed 3 years ago.
I've been using Docker for quite a while for some development. Now, I am trying to learn some more advanced stuff using Kubernetes.
In some course I found information that I should run
eval $(minikube docker-env)
That would register a few environment variables: DOCKER_TLS_VERIFY, DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_API_VERSION. What would that do? Wouldn't it break my day-to-day work with the default values on my host?
Also, is it possible to switch context/config for my local Docker somehow similar to kubectl config use-context?
That command points Docker’s environment variables at a Docker daemon hosted inside the Minikube VM, instead of the one running locally on your host or in a Docker Desktop-managed VM. This means that you won’t be able to see or run any images or Docker-local volumes you had before you switched (it’s a separate VM). In the same way that you can eval $(minikube docker-env) to “switch to” the Minikube VM’s Docker, you can eval $(minikube docker-env -u) to “switch back”.
Using this mostly makes sense only if you’re on a non-Linux host and get Docker via a VM; it lets you share the one Minikube VM instead of launching two separate VMs, one for Docker and one for Kubernetes.
If you’re going to use Minikube, you should use it the same way you’d use a real, remote Kubernetes cluster: set up a Docker registry, docker build && docker push images there, and reference it in your Deployment specs. The convolutions to get things like live code reloading in Kubernetes are tricky, don’t work on any other Kubernetes setup, and aren’t what you’d run in production.
The said command will only affect the current shell. Opening a new one will let you continue with your normal workflow, as the docker CLI will, by default, connect to the daemon socket at /var/run/docker.sock.
I don't know of a tool that lets you switch those settings with a single command based on a context name, the way kubectl allows you to. You could, however, write an alias. For bash you could, for example, just execute:
$ echo 'alias docker-context-a="eval \$(minikube docker-env)"' >> ~/.bashrc
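Following the same idea, a pair of hypothetical bash helper functions (the names are made up) can make switching feel a bit more like kubectl contexts:

```shell
# Point the docker CLI at the daemon inside the minikube VM
docker-minikube() { eval "$(minikube docker-env)"; }

# Point the docker CLI back at the local daemon
docker-local() { eval "$(minikube docker-env -u)"; }
```

Put these in ~/.bashrc and they behave like a tiny two-context switcher.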

Does "docker start" completely resume all running services started by "docker run"? [closed]

Closed 7 years ago.
The command docker run is used to create a container from an image and start it running. When calling docker run I can pass a CMD to tell Docker to run some service at startup.
But when I call docker stop to stop the running container and then call docker start, does it behave the same as the docker run above? For example, does it start all the same services?
The docker client is a convenience wrapper for many calls to the Docker API.
docker run will:
Attempt to create the container
If the Docker image isn't found locally, attempt to pull it
If the image is pulled successfully, create the container
Once the container is created, call docker start on the new container
The short answer to your question: docker stop is the opposite of the docker start command. docker run calls docker start at the end, but it also does a bunch of other things.
docker run will always try to create a new container, and will throw an error if the container name already exists. docker start can be used to manually start an existing container. (You could also look into the docker restart command, which calls docker stop and then docker start.)
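As a sketch of the difference (the container and image names are examples; this assumes a running Docker daemon):

```shell
docker run --name web -d nginx   # pulls the image if needed, creates AND starts "web"
docker stop web                  # stops it; the container itself still exists
docker start web                 # restarts the SAME container with its original command
docker run --name web -d nginx   # fails: the name "web" is already in use
```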
Hope this helps!

Can docker-machine only mount /c/Users on Windows? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
But I want to mount /d because I like to put my projects on /d.
docker-machine uses a boot2docker.iso VM image, based on TinyCore.
The original boot2docker project mentioned that you can mount other folders with, at runtime:
mount -t vboxsf -o uid=1000,gid=50 your-other-share-name /some/mount/location
Issue 1814 of docker-machine suggests that, and it seems to work.

How can I make /etc/hosts writable by root in a Docker Container? [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 25 days ago.
I'm new to using docker and am configuring a container.
I am unable to edit /etc/hosts (but need to for some software I'm developing). An automated edit (via sudo or root) of the file says it's on a read-only file system. A manual (vim) edit of the file says it's read-only, and I'm unable to save changes as root (file permissions are rw for the owner, root).
I can however modify other files and add files in /etc.
Is there some reason for this?
Can I change the Docker configuration to allow edit of /etc/hosts?
thanks
UPDATE 2014-09
See Thomas' answer below:
/etc/hosts is now writable as of Docker 1.2.
Original answer
In the meantime, you can use this hack:
https://unix.stackexchange.com/questions/57459/how-can-i-override-the-etc-hosts-file-at-user-level
In your Dockerfile:
ADD your_hosts_file /tmp/hosts
RUN mkdir -p -- /lib-override && cp /lib/x86_64-linux-gnu/libnss_files.so.2 /lib-override
# Patch the NSS library so name lookups read /tmp/hosts instead of /etc/hosts
# (this binary substitution works because both paths have the same length)
RUN perl -pi -e 's:/etc/hosts:/tmp/hosts:g' /lib-override/libnss_files.so.2
ENV LD_LIBRARY_PATH /lib-override
/etc/hosts is now writable as of Docker 1.2.
From Docker's blog:
Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only “stick” in a running container.
This is currently a technical limitation of Docker, and is discussed further at https://github.com/dotcloud/docker/issues/2267.
It will eventually be lifted.
For now, you need to work around it, e.g. by using a custom dnsmasq server.
I recently stumbled upon the need to add an entry to the /etc/hosts file as well (in order to make sendmail work).
I ended up making it part of the Dockerfile's CMD declaration, like this:
CMD echo "127.0.0.1 noreply.example.com $(hostname)" >> /etc/hosts \
&& sendmailconfig \
&& cron -f
So it is effectively not part of the image, but it is applied every time a container starts from the image.
You can do that without changing your /etc/hosts file. Just add extra_hosts to your docker-compose.yml, as in the example below:
myapp:
  image: docker.io/bitnami/laravel:9
  ports:
    - 80:8080
  extra_hosts:
    - "myapp.test:0.0.0.0"
    - "subdomain.myapp.test:0.0.0.0"
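If you are not using Compose, the plain docker run equivalent is the --add-host flag (the image name here is a placeholder):

```shell
docker run --add-host "myapp.test:0.0.0.0" --add-host "subdomain.myapp.test:0.0.0.0" -d my-image
```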
References:
How can I add hostnames to a container on the same docker network?
