I have a WebLogic 12.1.2 image onto which I want to auto-deploy an application ear.
I can do this when building the image via the Dockerfile, e.g.:
COPY some.ear /u01/domains/mydomain/autodeploy
A container run from this image works fine, and I can reach my application's REST interface with Postman.
However, if I restart that container, the autodeployed application no longer appears in the WebLogic console, so the Postman calls fail.
The ear file is still in the autodeploy directory on restart.
Has anyone seen this behaviour before?
I've found a little hack around this.
My container has the WebLogic autodeploy directory mounted to a host directory containing the ear. If I rename the ear (e.g. some.ear to some.ear.1 and then back to some.ear), or cut and paste the ear to another directory and back, WebLogic picks it up again. It's not ideal, but it works, in case anyone else comes across the same issue.
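For reference, a minimal sketch of that rename trick as shell commands, run on the host in the directory that is mounted as the autodeploy folder (the sleep is only a guess at how long the autodeploy poller needs):

mv some.ear some.ear.1    # WebLogic notices the application has gone
sleep 5                   # give the autodeploy poller a moment to react
mv some.ear.1 some.ear    # WebLogic treats it as a new file and redeploys it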
I have inherited a Microsoft Visual Studio MVC project for modification, and I now need to allow users to upload files to the web server. For development I use Windows 11 and IIS Express from within MS Visual Studio, and there is no issue with IIS. However, the app runs in production on an Ubuntu-based server running NGINX as the web server.
PROBLEM: When I attempt to upload a file larger than 1 MB from my browser to the production server, I receive the error message "413 Request Entity Too Large." After scouring the web I have discovered: (a) NGINX enforces a 1 MB request body limit by default; and (b) it is necessary to modify the nginx.conf file by adding the NGINX client_max_body_size directive.
I have located the nginx.conf file on the production server and have browsed it with Nano. However, I have stopped short of attempting to modify and save the file due to the presence of Docker. Admittedly, I know virtually nothing of Docker and, unfortunately, the principals who set up this server have long since departed the Company. Furthermore, it is unclear to me whether simply modifying the nginx.conf file and restarting NGINX on the production server will do the trick given what I presume to be the necessity to involve Docker.
As an aside, my customer uses Azure DevOps to facilitate collaboration. I regularly stage changes to my project using Git from within MS Visual Studio and subsequently use an update_app.sh script on the Ubuntu server to push the changes into production. I had previously attempted to modify the nginx.conf file included with the local copy of my MVC project. The change was pushed via Azure DevOps, but the modified nginx.conf never took effect on the production server, presumably due to the presence of Docker.
I would appreciate someone providing me with an explanation of the interaction between Docker and NGINX. I would further appreciate any tip on how to get the modified nginx.conf file pushed into production.
Thanks in advance for your assistance.
Thankfully this is pretty straightforward, and NGINX has good documentation on this topic (see the "Copying Content and Configuration Files from the Docker Host" section of Deploying NGINX and NGINX Plus on Docker).
Find the Dockerfile for the deployed solution (typically in the project or solution root directory).
Copy the contents of the current NGINX configuration file (either by browsing the container image or by finding the configuration files within source).
Create a directory named conf within your source.
Copy the current configuration into a file within conf and apply your changes to that configuration file.
Update the Dockerfile to copy the files within conf into the location where NGINX loads configuration files on startup. Add the following line to the Dockerfile after the line FROM nginx (note: the line may vary, but it should be obvious that it is using nginx as the base image): COPY conf {INSERT RELATIVE PATH TO CONF PARENT FOLDER}
This will copy your local configuration file into the container when the image is built, and NGINX will load it on startup.
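As a rough sketch (the file name, upstream and target path below are assumptions; match them to whatever your image and Dockerfile actually use), the configuration file placed in conf might look something like:

server {
    listen 80;
    client_max_body_size 50M;        # raise NGINX's default 1 MB request body limit
    location / {
        proxy_pass http://app:5000;  # hypothetical upstream for the MVC app
    }
}

and the corresponding Dockerfile line could then be:

COPY conf /etc/nginx/conf.d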
Note: There are other solutions that support configuration changes without rebuilding the container.
I'm using Visual Studio Code on a Windows machine. I'm trying to set up a Python Dev Container using a directory that contains a large set of CSV files (about 200 GB). When I launch the remote container from Visual Studio Code, the application hangs, saying "Starting Dev Container (show log): Building image."
I've been looking through the docs and, having read Advanced Container Configuration, I've tried modifying the devcontainer.json file by adding workspaceMount and workspaceFolder entries:
"workspaceMount" : "source=//c/path/to/folder,target=/workspace,type=bind,consistency=delegated"
"workspaceFolder" : "/workspace"
But to no avail. Is there a solution to launching Dev Containers on Windows using folders which contain large files?
I had a slightly different problem, but the solution might help you or someone else. I was trying to run docker-compose inside a docker-in-docker image (provided by vscode). In my case, my container was able to start, but nothing inside the container was able to run.
To solve my issue, I updated vscode, and now there is a new option Remote-Containers: Clone Repository in Container Volume.... If your code is a git repo, you can do this:
Steps #1, #2, #3 and onwards were shown as screenshots in the original answer (not reproduced here).
Follow the steps vscode gives you and you should have your repository in the container as a volume. It reduced my build times from about 30 minutes to 3 minutes, because I brought my data into the container after it was up and running.
Assuming the 200 GB is ignored by your .gitignore, what you could try is this: once the container has started, copy the 200 GB of CSV files into the container. I thought this would help because I did a similar thing, bringing in all my node_modules after running the container.
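A minimal sketch of that copy step, assuming made-up container and path names (docker cp works against a running container):

docker ps                                                            # find the dev container's name or ID
docker cp "C:\data\csv-files" my_dev_container:/workspace/csv-data   # copy the host data into the container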
I have an image based on opencpu/base. It starts an Apache-based server and then invokes R scripts every time somebody calls an API endpoint.
One of those scripts tries to write a file to a location in the container. When I mount a folder into that location, it works on my Windows machine, but not on Ubuntu.
I've tried using named volumes on Ubuntu, but it does not work either. When I run bash inside the container interactively on Ubuntu, I can write and read the mounted volume just fine. But the apache process cannot.
Does anybody have some hints what could be going on here?
When you log in interactively to the container, you will have root permissions.
Apache usually runs as another user (www-data), and that user must have read permissions on the folder that you want it to read.
Make sure that the permissions on the folder match the user that will read it.
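As a minimal sketch, assuming the volume is mounted at /data inside the container and that Apache runs as www-data (worth verifying for the opencpu/base image):

# inside the container, e.g. via: docker exec -it <container> bash
ps aux | grep apache                 # confirm which user the Apache workers actually run as
chown -R www-data:www-data /data     # give that user ownership of the mounted folder
# or, less strictly, open the folder up to everyone:
chmod -R a+rwX /data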
We have a Tomcat application running in a Docker container.
The Tomcat application used to be deployed on a virtual machine and used a Context path to provide a facility for downloading files:
in web.xml
<Context docBase="/var/files/logs" path="/path_for_logs" />
Where the file to download was accessible by a link as such:
https://my.domain.com/path_for_logs/file_to_download
This worked very well.
But now that we have deployed in a Docker container, the Context path no longer works. The file is in the expected location, but the download fails with
Failed - No file
The file is in that location in the container, so the Tomcat application can read/write to that location within the container. But the Context path no longer appears to work for linking to a file.
Any guidance on how to remedy is greatly appreciated!
I am a newbie to Azure Batch as well as Docker. The problem I am facing is this: I have created an image based on another custom image, in which some files and folders are created at the root level of the container, and everything works fine. But when the same image runs in an Azure Batch task, I don't know where these files and folders are being created, because the wd (working directory) folder is empty. Any suggestions, please? Thank you. I know Azure Batch does something with the directory structure, but I am not clear about it.
As you're no doubt aware, Batch maps directories into the container (from the docs):
All directories recursively below the AZ_BATCH_NODE_ROOT_DIR (the root of Azure Batch directories on the node) are mapped into the container
So, for example, if you have a resource file on the task, it ends up in the working directory within the container. However, this doesn't mean the container is allowed to write back to the same location on the host; the writes stay within the container. I would suggest that you take whatever results/output you have generated and upload them to Azure Blob Storage via a Shared Access Signature; this is the usual way to get results from a Batch job, even without using Docker.
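As a rough sketch of that upload step (the storage account, container name and SAS token below are placeholders, and azcopy would need to be available inside your image), run at the end of the task:

azcopy copy "./results" "https://<storageaccount>.blob.core.windows.net/<container>?<sas-token>" --recursive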