JBoss CLI connect to Docker

I have transferred my files to the host machine and already brought up the JBoss Docker container. How can I deploy the WAR file from my host machine to the container via the CLI? Please advise. I am not planning to use a custom image.

You need to publish the management port when starting the container: -p 9990:9990.
Also, I'm not sure what address the management interface is bound to. You may have to pass -Djboss.bind.address.management=0.0.0.0 on the server command line so it listens on all interfaces.
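For example, a minimal sketch assuming the jboss/wildfly image and an already-created management user; the image name, credentials, and WAR path are placeholders to adjust:

# Start WildFly/JBoss with the application and management ports published,
# binding the management interface to all addresses
docker run -d --name jboss -p 8080:8080 -p 9990:9990 jboss/wildfly \
  /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0

# From the host, connect with the JBoss CLI and deploy the WAR
jboss-cli.sh --connect --controller=localhost:9990 \
  --user=admin --password=secret \
  --command="deploy /path/to/myapp.war"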

Related

Gitlab Runner, docker executor, deploy from linux container to CIFS share

I have a Gitlab runner that runs all kind of jobs using Docker executors (host is Ubuntu 20, guests are various Linux images). The runner runs containers as unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
Gitlab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside the container unless running privileged, which I would like to avoid.
I also tried the suggestion to use --cap-add as described at Mount SMB/CIFS share within a Docker container but they do not seem to be enough for my host, there are probably other required capabilities and I have no idea how to identify them. Also, this looks just slightly less ugly than running privileged.
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command for example I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy a small executable written in Go (which you can build anywhere else) to the Windows machine: it listens on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can then use curl to send a file to the remote Windows machine on port 8080.
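A hedged sketch of the upload step from the runner job; the URL path and form field name depend on how the linked file-receive.go is written, so adjust them to match the receiver you deploy:

# Hypothetical endpoint and field name; change to whatever file-receive.go expects
curl -X POST -F "file=@build/artifact.zip" http://myserver:8080/upload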

Docker desktop networking Windows and Linux nodes

I have a Windows service within a Docker container that needs to access a MySQL database in a Linux container on the same machine (dev machine currently).
I thought of creating an overlay network across the two "nodes" on the same machine, but this isn't possible because creating the swarm worker fails on Windows after creating the swarm master on Linux.
Is this possible, if not what is the easiest way of doing this? The purpose of the windows container is simply to deploy to a test environment to gather data. Do I need to deploy the linux to the cloud or another machine maybe, so the windows container can communicate?
You can simply use Docker Compose; it will create the network automatically. Replace the MySQL host with the MySQL service name you defined in the compose YAML file. For detailed information, please refer to the docker-compose documentation.
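A minimal sketch of that setup; the application image name and credentials below are placeholders:

# Write a compose file with a MySQL service and your application service
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
  app:
    image: my-service-image        # placeholder for your own image
    environment:
      DB_HOST: db                  # use the service name, not localhost or an IP
    depends_on:
      - db
EOF

docker compose up -d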

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a Docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host Docker daemon over the REST API by using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use network mode "host" to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the docker daemon of the host is using the docker python sdk and making api calls to docker over tcp without TLS (this is used for development only).
Update w/ Solution Detail
For a little more background, there's a web application (aspnet core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for ssl termination and load balancing. The web application pulls out the docker image, then using the docker daemon's http api, loads the image, re-tags the image, then pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue where a separate python application uses the python docker library to deploy the docker image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the Docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5, which is that there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy http traffic to a unix socket. I setup all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network to give them all access to each other and allowing me to hit an http endpoint to control the host machine's docker/swarm daemon over http.
The last hurdle I ran into was that nginx couldn't write to the mapped in /var/run/docker.sock file so I modified nginx.conf to allow it to run as root within the container.
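A sketch of that reverse-proxy piece; it assumes the nginx container has /var/run/docker.sock mounted in and permission to access it, and the listen port and conf file path are placeholders:

# Expose the mounted Docker unix socket as an HTTP endpoint on port 2375
cat > /etc/nginx/conf.d/docker-api.conf <<'EOF'
server {
    listen 2375;
    location / {
        proxy_pass http://unix:/var/run/docker.sock:/;
    }
}
EOF
nginx -s reload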
As far as I can tell, the Docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), and with Windows 10 Pro running Docker for Windows (2.2.0) using WSL2 (Ubuntu and Alpine), the Windows cmd (CLI), and PowerShell. From memory, it works with OS X too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock
Mounts the socket into an identical path within the container. This means that you can access the socket using the standard Docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
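For example, a quick sketch assuming the official Docker CLI image (docker:cli); the docker ps inside the container talks to the host daemon through the mounted socket:

docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps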

NiFi install using Docker - Can't access the webserver

I'm new to both Docker and NiFi. I found a command that installs NiFi via Docker and ran it on a virtual machine I have in GCP, but I would like to access this container via its web server. In docker ps this appears to me:
What command do I need to execute to gain access to the tool via port 8080?
The container has already exposed port 8080 on the host, as evidenced by the output 0.0.0.0:8080->8080/tcp. You read that as {HOST_INTERFACE}:{HOST_PORT}->{CONTAINER_PORT}/{PROTOCOL}.
Navigate to http://SERVER_ADDRESS:8080/ (or maybe http://SERVER_ADDRESS:8080/nifi) using your web browser. You may need to modify the firewall rules applied to your VM to ensure that you can access that port from your local machine.
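For reference, a hedged sketch of running NiFi with that port mapping and opening the port on GCP; the image tag and firewall rule name are assumptions (recent apache/nifi releases default to HTTPS on 8443 rather than HTTP on 8080):

# Publish NiFi's HTTP port 8080 on the host
docker run -d --name nifi -p 8080:8080 apache/nifi:1.12.1

# GCP: allow inbound traffic to 8080 from your own IP only
gcloud compute firewall-rules create allow-nifi-8080 \
  --allow=tcp:8080 --source-ranges=YOUR_IP/32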

How can I connect from an external source to a Logstash service hosted in a docker / rancher container

Need some help with a Rancher / Docker issue encountered while trying to set up Logstash to parse our IIS logs.
There's a container with one service, Logstash (not a web service but a system service, part of the ELK stack), that we want to use to ingest files from given input(s) and parse them into fields before sending them to the configured output(s) – in this case, Elasticsearch.
We need to have the service accessible from an outside system (namely our web server which is going to send the IIS logs over for processing).
The problem is that we can’t get the endpoint configuration.
There is a load balancer host running on Rancher with two open ports that are supposed to channel all requests to the inner service containers via path name and target, but we can't get a path configured to the Logstash service.
I have been digging into the logstash configs and there is a setting for node.name in the logstash.conf file but … I haven’t managed to do anything with it yet.
Hoping someone who is more familiar with this stuff can offer some insight.
Basically I can get the Logstash service on Rancher to connect to the AWS Elasticsearch but I cannot get our web box (with the IIS logs) to connect with the Logstash service on its input port.
The solution was not to use the standard image but to customize it. The steps involved:
Create a local repo with the folder structure we need to emulate; only the folders we are going to replace are needed.
Add a Dockerfile which will be used to build the customized image.
In the Dockerfile, reference the ready-made / standard image as the base in the first line, i.e. 'FROM <standard image>'.
In the Dockerfile's RUN command, remove the directories and files that need to be customized. In this case it was the logstash/pipeline directory and the logstash/config directory.
Use ADD commands to replace the removed directories with our customized versions.
Use the EXPOSE command to expose the port the service is listening on.
Build the image with docker build, then start the container with docker run and the -p flag to publish the ports we want open, mapping them to ports on the host (see the sketch below).
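A minimal sketch of such an image; the base image tag, directory names, and port are assumptions to adjust for your Logstash version and pipeline:

# Write the Dockerfile for the customized Logstash image
cat > Dockerfile <<'EOF'
FROM docker.elastic.co/logstash/logstash:7.9.3
# remove the stock config and pipeline so our versions fully replace them
RUN rm -rf /usr/share/logstash/pipeline /usr/share/logstash/config
ADD config /usr/share/logstash/config
ADD pipeline /usr/share/logstash/pipeline
# port the Logstash input (e.g. beats) listens on
EXPOSE 5044
EOF

docker build -t custom-logstash .
docker run -d -p 5044:5044 custom-logstash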
