Running official Nexus3 Image where script running is enabled - docker

I'm trying to start a container of Nexus 3.22.0 where the script running option is automatically enabled.
From this link (https://support.sonatype.com/hc/en-us/articles/360045220393) I can see that it is possible to enable it after I create the image and restart the container, but I'm interested in creating it with that option enabled right away. Is this possible?
Possibly a flag or an argument to add to the "docker run...sonatype/nexus3:3.22.0" command.
Somewhere to add the "nexus.scripts.allowCreation=true" option to the command, or add it to the end of the /nexus-data/etc/nexus.properties file as the documentation says.
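Two approaches should work here. The sonatype/nexus3 image documents an INSTALL4J_ADD_VM_PARAMS environment variable for passing extra JVM arguments, and its README shows -D system properties (e.g. -Dnexus.context-path) passed that way, so -Dnexus.scripts.allowCreation=true should be picked up too; note that overriding the variable replaces the default memory flags, so they are repeated below (values are the image defaults around 3.22, check the README for your version). Alternatively, pre-create nexus.properties on a host volume before the first start. A minimal sketch:

# Option 1: set the property as a JVM system property at run time
docker run -d -p 8081:8081 --name nexus \
  -e INSTALL4J_ADD_VM_PARAMS="-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=3530m -Dnexus.scripts.allowCreation=true" \
  sonatype/nexus3:3.22.0

# Option 2: pre-create /nexus-data/etc/nexus.properties on a host volume
# (the directory must be writable by the container's nexus user, UID 200)
mkdir -p ./nexus-data/etc
echo "nexus.scripts.allowCreation=true" > ./nexus-data/etc/nexus.properties
sudo chown -R 200 ./nexus-data
docker run -d -p 8081:8081 --name nexus \
  -v $(pwd)/nexus-data:/nexus-data sonatype/nexus3:3.22.0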

Related

VScode remote containers - how to view the dockerised service console output?

This is a follow-on from this question (none of the current answers seem to hit the nail on the head).
VScode's default behaviour for starting a remote vscode session (using VScode Remote-Containers) seems to be:
Run the project's docker-compose file
If the user selects show log (a UI popup) during the build, open a VScode terminal session called Dev Containers reflecting Docker's build logging, supplemented with VSCode Remote-Containers logging. This output ends once the build completes.
If the user didn't select show log and later opens a VScode terminal after the build completes, just start a new bash session within the container. No other VScode terminal sessions exist.
Launch a VScode session from inside the now-running container
From the user's perspective, the container is running, but the output that is happening inside the container seems inaccessible (even if the docker-compose command did not use daemon mode).
So, how can the user now view the console output that is happening inside the container?
If I am reading correctly, the VScode Remote-Containers documentation seems to suggest overriding the default behaviour, i.e.:
suppress the docker-compose command that would otherwise have started the service, and instead apply some dummy command to persist the container upon creation, then
manually start the service (using debug mode, or via a VScode terminal) from inside the remote session. This reveals the output, but within an accessible VSCode terminal session (a sketch of this override follows below).
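A minimal sketch of that override, assuming a docker-compose-based setup (the service name web and the file layout are illustrative, not from the question):

# docker-compose.devcontainer.yml: an override file used only by the
# dev container; it replaces the service's command so the container
# stays up without starting the server
services:
  web:
    command: sleep infinity

// .devcontainer/devcontainer.json: point Remote-Containers at both files
{
  "dockerComposeFile": ["../docker-compose.yml", "../docker-compose.devcontainer.yml"],
  "service": "web",
  "workspaceFolder": "/workspace"
}

The service is then started by hand from a VSCode terminal inside the container, so its output lands in an ordinary, scrollable terminal session.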
Is there no way to:
A) Start the services via the system terminal (e.g. docker-compose up), and then start a VSCode remote session in this already-running container*, or
B) Access the service's output without having to override as above (the override seems hacky)?
*This would be ideal. The Remote-Containers "Attach to Running Container..." command sounds close to this. But it seems to instantiate itself in a directory I don't recognise, and doesn't seem to be the container.
Option A seems to be achievable by:
Starting the service in a terminal (docker-compose up)
In vscode, using the remote-containers "remote explorer" UI (not the cmd+P "Attach to container" commands) to select the running container's working directory. Right click > "Open in container". This doesn't actually open a new container; it opens the directory from within the running container.
OR (thanks #cybercoder):
Letting vscode start the services
In a separate terminal: docker logs -f container_name OR docker-compose logs -f

How to make VSCode run custom script when attaching to a running remote container

I have a running Docker container and would like to use the VSCode remote container plugin to attach to it.
Is it possible to have VSCode run a script when it attaches? Some custom actions are required to setup the container. These actions cannot be baked into the Dockerfile/Image.
Is it possible to configure the docker exec arguments when attaching to a running container? (This is possible for docker run using .devcontainer when creating new containers, but I haven't found anything about docker exec regarding already running containers.)
There is a "postAttachCommand" that lets you execute a custom command after vscode attaches to the running container.
However, my preference would be to use a login shell; for that there is an undocumented property called
"userEnvProbe": "loginInteractiveShell"
The GitHub issue below explains this parameter (it's also where I learned about it):
https://github.com/microsoft/vscode-remote-release/issues/3585
The userEnvProbe and postAttachCommand are per docker container; you have to add them to the "Container Configuration File". Hover your mouse over the tip of the red arrow and you will see a settings icon; pressing it gives you access to the "Container Configuration File".
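A sketch of what that configuration file could contain (both properties are real attached-container settings; the setup script path is a hypothetical placeholder):

// attached-container configuration file
{
  // resolve the environment through a login + interactive shell
  "userEnvProbe": "loginInteractiveShell",
  // run once vscode has attached; the path is a placeholder for
  // whatever setup the container needs
  "postAttachCommand": "/workspace/scripts/setup.sh",
  "workspaceFolder": "/workspace"
}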
For further customization there is a great GitHub issue that explains what else you can do to customize the way you execute docker commands:
https://github.com/microsoft/vscode-docker/issues/1596

How to view docker logs from vscode remote container?

I'm currently using vscode's remote containers extension with a .devcontainer.json file that points to my docker-compose.yml file.
Everything works fine and my docker-compose start command gets run (which launches a web server), but I haven't found a way to quickly see the logs from the web server. Has anyone found a way to view the docker log output automatically once vscode connects to the remote container?
I know as an alternative I could remove my container's start command and, after vscode connects, manually open a terminal and start the web server, but I'm hoping there's an easier way.
Thanks in advance!
I'm not using remote containers, just local ones, so I'm not sure if this applies, but for locally running containers you can go to the "Docker" tab (you need to install the official Microsoft Docker VS Code plugin), where you can see your running containers. Just right-click on the container you want to see the logs for and select "View Logs":
You'll see a new "Task" appear in the Terminal pane that will show all your docker logs:
This question is really old and I'm not sure if this option was available at the time, but just open the Command Palette (F1) and select/find "Remote-Containers: Show Log".
You will now see the log of your container in the terminal.
You can open the command palette and search for "Remote Explorer: Focus on Containers View". You should see a sidebar of containers; if you right-click your container you can view its logs.
I use VS Code's builtin terminal to see the live logs of the docker container that is connected with VS Code.
When VS Code is connected to the docker container, you can open the builtin terminal using the View > Terminal menu option. You should see an existing terminal labeled Dev Containers.
Maybe this is too late, but for others, this is how I do it.
First, instead of logging to stdout, I redirect all of the output into a single file, and then use the tail command to stream that output to the terminal instead.
For example, in Go:
import (
    "log"
    "os"
    "github.com/sirupsen/logrus"
)

// point logrus at a file instead of stdout
logFile, err := os.OpenFile(logFileName, os.O_WRONLY|os.O_CREATE, 0755)
if err != nil {
    log.Fatal("Fail to open the log file")
}
logrus.SetOutput(logFile)
Once that's done, I open up my terminal and run the following command:
$ tail -f {logFileName}
That's one way to do it I guess, but I sure hope VSCode can come up with a better solution.
In the Remote Explorer tab you can see all your docker containers. Under "Dev Containers" is the container for the service specified in devcontainer.json; the rest are in "Other Containers." Simply right click on the container you're interested in and click "Show Container Log." You'll see the full output of the command for that service, just like in an interactive terminal - not a docker build log!
Note I am using a local development container and did not test with remote containers but I'm guessing it's the same.

Setting up a container from a users github source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on github somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this (https://github.com/pboehm/ddns/tree/docker_and_rework). First I should clone this git repository to my working directory? Let's say /home for example. I will run the following command;
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory;
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually deploying or building the whole container/image or magical thing, right? From here I go ahead and do the following;
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point (here). After a short while that completes, and we can see the following (docker images running). Which is correct, I see all of the dependencies in there, but things like;
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go' which is inside /home/ddns; the execution command is;
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front end web page is not showing? There should be a page like this;
http://ddns.pboehm.org/
But again, I believe there is some more to do; I just do not know what.
docker-compose build will only build the images.
You need to run this. It will build and run them.
docker-compose up -d
The -d option runs containers in the background
To check if it's running after docker-compose up
docker-compose ps
It will show what is running and what ports are exposed from the container.
Usually you can access services from your localhost
If you want to have a look inside the container
docker-compose exec SERVICE /bin/bash
Where SERVICE is the name of the service in docker-compose.yml
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template for creating instances. Every time you docker run, you create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output the result, then die. Long-running processes, like the one the docker_ddns-web image probably starts, will keep running until something kills them. The reason you can't see the web page is probably because you haven't run docker-compose up yet, which will create linked instances of all of the docker images specified in the docker-compose.yml file. Hope this helps.
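Put together as a sequence (run from /home/ddns/docker; the docker_ddns image name comes from the answer above and may differ on your machine, so check docker images):

cd /home/ddns/docker
# build the images described by docker-compose.yml
docker-compose build
# create and start containers for all services, in the background
docker-compose up -d
# confirm they are running and see the exposed ports
docker-compose ps
# run a one-off command in a fresh container created from a built image
docker run --rm docker_ddns go version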

Docker: Run command while another command is running

I need to configure a program running in a docker container. To achieve that, the program must be running (and provide an open port) so that the administration program can connect to the running process. Unfortunately there is no simple editable config file, so this is the only way. The RUN command is obviously not the right one, because it does not provide a running instance after Docker moves on to the next command. The best way would be to do this while building the docker image, but doing it during container start would be OK as well. But there is (as far as I know) also no easy way to run multiple commands on startup. Does anyone have an idea how to do that?
To make it a bit more clear, here is a simple example from my Dockerfile:
# this command should start the application which has to be configured
RUN /usr/local/server/server.sh
# I tried this command alternatively because the shell script is blocking
RUN nohup /usr/local/server/server.sh &
# this is the command which starts an administration program which connects to the running instance started above
RUN /usr/local/administration/adm [some configuration parameters...]
# afterwards the server process can be stopped
Downloading the complete program directory containing the correct state could be a solution too, but then the configuration could not be changed easily in the Dockerfile, which would have been great.
A Dockerfile is supposed to be a sequential list of instructions to produce an image. The image should contain your application's code, and all of its installable dependencies.
Each RUN instruction gets executed as its own container. Once the command that you run completes, any changed files get committed as a new image layer.
Trying to run a process in the background will cause the command you are running to return immediately. Once that happens, the container is considered stopped, and the Dockerfile's next instruction will be executed in a new separate container.
If you really need two processes running, you will need to produce a command that you can pass to a single RUN instruction.
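A rough sketch of what that single instruction could look like, reusing the paths from the question (the sleep is a crude stand-in for a real readiness check, e.g. polling the server's port):

# start the server in the background, give it time to open its port,
# run the administration tool against it, then stop the server so the
# configured state is committed into the image layer
RUN /usr/local/server/server.sh & \
    SERVER_PID=$! && \
    sleep 10 && \
    /usr/local/administration/adm [some configuration parameters...] && \
    kill $SERVER_PID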
