My goal is to develop an application that will run on WebSphere Liberty hosted in Docker and eventually run on Bluemix. During development, I installed Docker on my local Linux PC and then downloaded IBM's base Docker image containing a configured Liberty. This image is called:
registry.ng.bluemix.net/ibmliberty
I now start this image in Docker locally on my PC and attach a shell so that I can see what is going on. I find that there is a Liberty server located at
/opt/ibm/wlp/usr/servers/defaultServer
Now here comes the puzzle.
On the Liberty servers I am used to working with, the messages produced by a server are written into a "logs/messages.log" file relative to the server. This means I would have expected to find the Liberty messages file here:
/opt/ibm/wlp/usr/servers/defaultServer/logs/messages.log
However, when I start my server, there is nothing there.
How can I access the logs of my Liberty server, obtained from the Bluemix base image (registry.ng.bluemix.net/ibmliberty) and running under Docker natively in my local Linux environment?
If we examine this IBM Liberty/Bluemix documentation page:
https://console.bluemix.net/docs/images/docker_image_ibmliberty/ibmliberty_starter.html
We will find a section that reads:
Note: All ibmliberty images are configured to write Liberty log files to the directory /logs inside the container. All other files that are written by the Liberty server, are created in the directory /opt/ibm/wlp/output/defaultServer. You can access these files by using the shortcut /output.
And this is the key. The Liberty server log files can be found in /logs (that is, the directory called logs directly under the root of the container's file system).
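For example (the container name, port mapping, and host directory below are placeholders), you can either attach to the running container and follow the log, or start the container with /logs bind-mounted to the host:
# follow the Liberty messages log inside the running container
docker exec -it <container-name> tail -f /logs/messages.log
# or run the container with /logs mapped to a directory on the host
docker run -d -p 9080:9080 -v $PWD/liberty-logs:/logs registry.ng.bluemix.net/ibmliberty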
I have inherited a Microsoft Visual Studio MVC project for modification and I now need to allow users to upload files to the web server. I use Windows 11 and IIS Express from within MS Visual Studio for development, and there is no issue with IIS. However, the app runs in production on an Ubuntu-based server running NGINX as the web server.
PROBLEM: When I attempt to upload a file larger than 1 MB from my browser to the production server, I receive the error message "413 Request Entity Too Large." After scouring the web I have discovered: (a) NGINX enforces a 1 MB limit by default; and (b) it is necessary to modify the nginx.conf file by adding the NGINX client_max_body_size directive.
I have located the nginx.conf file on the production server and have browsed it with Nano. However, I have stopped short of attempting to modify and save the file due to the presence of Docker. Admittedly, I know virtually nothing of Docker and, unfortunately, the principals who set up this server have long since departed the Company. Furthermore, it is unclear to me whether simply modifying the nginx.conf file and restarting NGINX on the production server will do the trick given what I presume to be the necessity to involve Docker.
As an aside, my customer uses Azure DevOps to facilitate collaboration. I regularly stage changes to my project using Git from within MS Visual Studio and subsequently use an update_app.sh script on the Ubuntu server to push the changes into production. I had previously attempted to modify the nginx.conf file included with the local copy of my MVC project. The change was pushed via Azure DevOps to the production server, but the modified nginx.conf never took effect in production, presumably due to the presence of Docker.
I would appreciate someone providing me with an explanation of the interaction between Docker and NGINX. I would further appreciate any tip on how to get the modified nginx.conf file pushed into production.
Thanks in advance for your assistance.
Thankfully this is pretty straightforward, and NGINX has good documentation on this topic (see the "Copying Content and Configuration Files from the Docker Host" section of Deploying NGINX and NGINX Plus on Docker).
Find the Dockerfile for the deployed solution (typically in the project or solution root directory).
Copy the contents of the current NGINX configuration file (either by browsing the container image or by finding the configuration files within source).
Create a directory named conf within your source.
Copy the current configuration into a file within conf and apply any changes to the configuration file.
Update the Dockerfile to copy the files within conf into the container where NGINX loads configuration files at startup. Add the following line to the Dockerfile after the line FROM nginx (note: the line may vary, but it should be obvious it's using nginx as the base image): COPY conf {INSERT RELATIVE PATH TO CONF PARENT FOLDER}
This will copy your local configuration file into the image when it is built, and NGINX will load it at startup.
Note: There are other solutions that support configuration changes without rebuilding the container.
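As a rough sketch (the target path, size limit, and base image tag here are assumptions; adjust them to match your actual Dockerfile and layout): in conf/nginx.conf, inside the http { } block, raise the default 1 MB limit with
client_max_body_size 20M;
and then in the Dockerfile, right after the FROM line, copy that file over the default configuration:
COPY conf/nginx.conf /etc/nginx/nginx.conf
After rebuilding the image and redeploying the container, NGINX will start with the larger request body limit.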
I have Ansible scripts that install a Docker container running NiFi. I've run these scripts on our dev box without issue. However, when I run them on our int box I see the following error in nifi-bootstrap.log, which causes NiFi to die immediately at startup:
java.io.FileNotFoundException: /data/nifi/work/snappy-1.0.5-libsnappyjava.so (No such file or directory)
I checked the dev server where this is running and there is no /data/nifi/work directory and libsnappyjava doesn't exist anywhere on that server according to mlocate.
The flow file is exactly the same between the two versions; I've done an md5sum to ensure that. The only difference in the nifi.properties file is that they each have their own VM's hostname injected into the appropriate fields by Ansible. The NiFi installation is part of a parent Docker image that hasn't been touched, so it should also be identical across images.
I'm using a nifi tarball created by my company, containing some company specific jars etc, but it should be built on top of the latest nifi version.
The only difference between the functional dev and the non-functional int that I can tell is that I originally installed a Docker image running an older NiFi version before upgrading NiFi to get the more recent NiFi API. I don't know whether running the old NiFi before upgrading could somehow have changed our /data directory in a way that makes the upgraded NiFi fail.
So why is my int looking for snappyJava when dev seems fine without it?
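For reference, these are roughly the checks I've been running to compare the two environments (the container name and config path are placeholders for our setup):
# does the directory snappy is trying to extract into actually exist in the container?
docker exec -it <nifi-container> ls -ld /data/nifi/work
# compare the NiFi bootstrap/working-directory configuration between dev and int
docker exec -it <nifi-container> cat /path/to/nifi/conf/bootstrap.conf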
When my Docker containers start, I receive the following notification:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or Docker Compose setup if needed, but I simply cannot find anything, either here on SO or via a Google search, that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile, since that is where we are running COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This warning means that accessing files on the Windows host file system from a Linux container is somewhat slower than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container performs like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this by storing your source files in a WSL2 distro's file system (which you can bind mount to the container) or building your container image to include all the files needed rather than storing your files in the Windows file system.
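For the second approach, a minimal Dockerfile sketch that bakes the files into the image (the base image and paths are just placeholders) could be:
# placeholder base image; use whatever your app actually needs
FROM alpine:3.12
WORKDIR /app
# copy the project files into the image at build time instead of bind-mounting them from Windows
COPY . /app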
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
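Concretely, that might look like this (the distro, user name, and project path are just examples):
# inside your WSL 2 distro (e.g. Ubuntu), copy the project out of the Windows filesystem
cp -r /mnt/c/Users/Simon/my-project ~/my-project
# then bind mount the Linux copy rather than the /mnt/c path
docker run -v ~/my-project:/sources <my-image>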
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
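For reference, that last step is roughly a command like this (the image tag and paths are illustrative rather than my exact setup):
docker run --rm -v "C:\src\MyProject:/src" -w /src mcr.microsoft.com/dotnet/sdk:3.1 dotnet test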
This setup may sound like heresy to those that believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and then I use Linux containers for application testing and deployment. So the Windows/WSL2 file system performance hit is a minimal impact to me.
We have to deploy a dockerized web app into a closed external system (our client's server).
(our image is made up of gunicorn, nginx, and a Django/Python web app)
There are a few options I have already considered:
option-1) using Docker registries: push the image into a registry, pull it from the client's system, and run docker-compose up with the pulled image
option-2) docker save/load .tar files: docker save the image in the local dev environment, move the .tar file onto the client's system, and run docker load on it there
A rough sketch of both options is below.
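Roughly, with placeholder image names, tags, and registry address:
# option-1: push to a registry the client can reach, then pull and run on their side
docker push registry.example.com/ourapp:1.2.0
# on the client's server:
docker pull registry.example.com/ourapp:1.2.0
docker-compose up -d
# option-2: export the image to a tarball, carry it over, and load it
docker save -o ourapp-1.2.0.tar ourapp:1.2.0
# on the client's server:
docker load -i ourapp-1.2.0.tar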
Our current approach:
we want to move the source code inside a Docker image (if possible).
we can't make our private Docker registry public -- yet -- (so option-1 is gone)
the client's servers are only accessible from their internal local network (they have no connection to any other external network)
we don't want to copy all the files every time we make an update to our app; what we want to do is somehow detect the diff or changes in the Docker image and copy/move/update only the changed parts of the app onto the client's server (so option-2 is gone too)
Is there any better way to deploy to client's server with the approach explained above?
PS: I'm currently checking "docker commit": what we could do is docker load our base image onto the client's server, start a container with that image, and when we have an update just copy our changed files into that container's file system and then docker commit (in order to keep the changed version of the container). But the thing I don't like about that option is that we would need to keep track of the changes ourselves and then move the changed files (like updated .py or .html files) to the client's server.
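Spelled out (the image, container, and file names here are placeholders), that flow would be something like:
docker load -i ourapp-base.tar              # load the base image on the client's server
docker run -d --name ourapp ourapp-base     # start a container from it
docker cp ./changed_files/. ourapp:/app/    # copy only the changed files into the container
docker commit ourapp ourapp:updated         # save the changed container as a new image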
Thanks in advance
Suppose I create a Docker image for one of my applications and publish it on Docker Hub.
This image is downloaded by many users who run the application in their containers, and the application generates logs in a folder.
Now, as the developer, how can I see those application logs from my machine when the container is on a remote computer that I don't have access to?
If it were a virtual machine, I could ssh to that machine, go to that folder and see the logs for that particular application; so how is this possible with Docker?
I am not talking about Docker event logs, but the logs generated by my Python application with the logging module. Could you please help me with how to handle this case with Docker?
I don't have any experience working with Docker.
docker exec can be used to run commands in a Docker container. But in your case the containers are running on a remote machine, not on your local machine. So, in that case, you have 2 options.
1. ssh into the remote machine and then use docker exec command to check the logs.
2. Directly SSH into the Docker container (this only works if the container itself runs an SSH server).
But, in both scenarios, you will need the end users to give you SSH access to their remote machines.
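For example (the host, container name, and log path are placeholders):
# option 1: ssh to the remote machine, then inspect the logs inside the container
ssh user@remote-host
docker exec -it my-app-container tail -f /app/logs/app.log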
I hope this helps.
If your application writes log files to the container filesystem, this is one of a couple of good uses for Docker bind mounts. If the operator (the person running the container; not you, the original software author) starts the container with
docker run -v $PWD/logs:/app/logs ... you/yourimage
then they will be able to read the log files directly on their host system.
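They could then follow the logs from the host with something like (the log file name is just an example):
tail -f logs/app.log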
As the original application developer, you have no access to these logs. This is the same as every other (non-SaaS) application: the end user installs software on their system and runs it, but it's on a system you can't log into, so you can't directly see things like log files. The techniques for dealing with this are the same as anything else: when a user files a bug report make sure they provide a sufficient reproduction, log files, and relevant configuration, and reproduce the issue yourself locally.