Azure Batch working directory issue - Docker

I am a newbie to Azure Batch as well as Docker. I have created an image based on another custom image, in which some files and folders are created at the root directory of the container. Everything works fine locally, but when the same image runs in an Azure Batch task I don't know where these files and folders are being created, because the wd (working directory) folder is empty. I know Azure Batch does something with the directory structure, but I am not clear on what. Any suggestions? Thank you.

As you're no doubt aware, Batch maps directories into the container (from the docs):
All directories recursively below the AZ_BATCH_NODE_ROOT_DIR (the root of Azure Batch directories on the node) are mapped into the container
so, for example, if you have a resource file on the task, it ends up in the task working directory within the container. However, this doesn't mean the container is allowed to write back to the same location on the host; its writes are only visible inside the container. I would suggest that you take whatever results/output you have generated and upload them to Azure Blob Storage via a Shared Access Signature (SAS) - this is the usual way to get results out of a Batch job, even without Docker.
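As a minimal sketch of that upload step in Python, assuming the results sit in the task working directory and the task is given a container-level SAS URL (the CONTAINER_SAS_URL variable and the output/ subfolder are illustrative, not part of Batch):

    import os
    from pathlib import Path

    # pip install azure-storage-blob
    from azure.storage.blob import ContainerClient

    # AZ_BATCH_TASK_WORKING_DIR is set by Batch; CONTAINER_SAS_URL is assumed
    # to be passed to the task as an environment setting.
    work_dir = Path(os.environ["AZ_BATCH_TASK_WORKING_DIR"])
    container = ContainerClient.from_container_url(os.environ["CONTAINER_SAS_URL"])

    for result in (work_dir / "output").glob("*"):  # illustrative output location
        with open(result, "rb") as data:
            container.upload_blob(name=result.name, data=data, overwrite=True)

Batch can also upload task outputs declaratively via the task's output files feature (again using a container SAS), but an explicit upload like this keeps it obvious where the data ends up.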

Related

Copy many large files from the host to a docker container and back

I am a beginner with Docker. I have been searching for two days now and I do not understand which would be the better solution.
I have a Docker container on an Ubuntu server. I need to copy many large video files to the Ubuntu host via FTP. The container, triggered by cron, will process the videos using ffmpeg and save the result back to the Ubuntu host somehow, so the files are accessible via FTP.
What is the best solution:
create a bind mount - I understand the host may change files in the bind mount
create a volume - but I do not understand how I may add files to the volume
create a folder on the Ubuntu host and have a cron job that copies files in using the "docker cp" command, and copies each video back to the host after it has been processed?
Thank you in advance.
Bind-mounting a host directory for this is probably the best approach, for exactly the reasons you lay out: both the host and the container can directly read and write to it, whereas the host can't easily write into a named volume. docker cp is trickier: as you note, there's the problem of knowing when processing is complete, and anyone who can run any docker command at all can pretty trivially root the host, so you don't want to give that permission to anything network-facing.
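If it helps to see the bind-mount wiring, here is a rough sketch using the Docker SDK for Python; the image name, host path and ffmpeg arguments are placeholders, not taken from your setup:

    # pip install docker
    import docker

    client = docker.from_env()

    # Bind-mount the host folder the FTP server writes into. Both the host and
    # the container see the same files, and anything written to /videos/out
    # inside the container appears directly on the host.
    client.containers.run(
        "jrottenberg/ffmpeg",  # placeholder image
        ["-i", "/videos/in/input.mp4", "/videos/out/output.mp4"],
        volumes={"/srv/videos": {"bind": "/videos", "mode": "rw"}},
        remove=True,
    )

The plain CLI equivalent is a docker run with a -v /srv/videos:/videos mapping; either way, the cron job on the host only ever touches /srv/videos.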
If you're designing a larger-scale system, you might also consider an approach where no files are shared at all. The upload server sends the files (maybe via HTTP POST) to an internal storage service, then posts a message to a message queue (maybe RabbitMQ). A processing worker then retrieves the files from the storage service, does its work, uploads the result, and posts a response message. The big advantages of this approach are that you can run it across multiple systems, scale the individual components easily, and not worry about filesystem permissions. But it's a much more involved design.
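As a rough illustration of that queue-driven variant (the queue name, host name and message contents here are made up), a worker consuming from RabbitMQ via pika might look like:

    # pip install pika
    import pika

    def handle_job(ch, method, properties, body):
        # body would carry a reference to the uploaded file in the storage
        # service (e.g. an object key): fetch it, run ffmpeg, upload the
        # result, post a response message, then acknowledge.
        print("processing", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="videos", durable=True)
    channel.basic_qos(prefetch_count=1)  # process one video at a time per worker
    channel.basic_consume(queue="videos", on_message_callback=handle_job)
    channel.start_consuming()

Because the worker only ever talks to the storage service and the queue, you can run as many copies of it as you like, on any machine.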

How to configure a writable folder inside an application published to Azure App Service using Docker for Windows

I'm working on an application that obtains some data from a web service, creates a text file in the local filesystem, sends a command to a command line application, obtains the result, and then sends the results back via the web service.
I need to be able to write to the local file system, read from it and then delete the temporary file. I was reading about bind mounts and volumes, but this folder can be deleted if a new version of the image is uploaded; it is just a staging area.
Any ideas how this can be done? Thanks.
When using containers in App Service, I believe you will have to link a storage account and mount file shares accordingly. Depending on the OS (Windows/Linux), the steps vary a bit.
If you are not using containers, then you should be able to access the temporary file locations for file-based requirements. Do note that the storage available this way is limited and not shared across site instances.
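The stage-a-file, run-the-tool, read-the-result, clean-up flow described in the question maps naturally onto such a temporary location. A language-agnostic sketch in Python, where "mytool" stands in for the real command line application:

    import subprocess
    import tempfile
    from pathlib import Path

    def run_tool_on_payload(payload: bytes) -> str:
        # Stage the data in a throwaway directory under the platform's temp
        # location; it is deleted automatically when the with-block exits.
        with tempfile.TemporaryDirectory() as workdir:
            input_path = Path(workdir) / "input.txt"
            input_path.write_bytes(payload)

            # "mytool" is a placeholder for the real command line application.
            result = subprocess.run(
                ["mytool", "--input", str(input_path)],
                capture_output=True, text=True, check=True,
            )
            return result.stdout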

WAS Liberty Docker Image Deployment Issue in Openshift

I am able to deploy the Liberty Docker image in a local Docker container and can access the Liberty server.
I pushed the Liberty image to Minishift installed on my system, but when I try to create the Docker container, I am facing the error below.
Has anyone tried this before? Please share your views.
Log Trace:
unable to write 'random state'
mkdir: cannot create directory '/config/configDropins': Permission denied
/opt/ibm/docker/docker-server: line 32: /config/configDropins/defaults/keystore.xml: No such file or directory
JVMSHRC155E Error copying username into cache name
JVMSHRC686I Failed to startup shared class cache. Continue without using it as -Xshareclasses:nonfatal is specified
CWWKE0005E: The runtime environment could not be launched.
CWWKE0044E: There is no write permission for server directory /opt/ibm/wlp/output/defaultServer
By default OpenShift will run images as an assigned user ID unique to a project. Many available images have been written so that they can only be run as root, even though they have no requirement to run as root.
If you try to run such an image as a non-root user ID, it will fail, because its directories and files have been set up so that they are only writable by the root user.
Best practice is to write images so that they can be run as an arbitrary user ID. Unfortunately very few people do this, with the result that their images cannot be used in more secure multi-tenant environments for deploying applications in containers.
The OpenShift documentation provides guidelines on how to implement images so that they can run in such more secure environments. See the section 'Support Arbitrary User IDs' in:
https://docs.openshift.org/latest/creating_images/guidelines.html
If the image is built by a third party and they show no interest in changing their image so that it works in secure multi-tenant environments, you have a few options.
The first is to create a derived image which, in the steps to build it, goes back and fixes permissions on the directories and files so they can be used. Note that you have to be careful what you change permissions on, as changing permissions on a file in a derived image causes a complete copy of the file to be made in the new layer. If the files are large, this will start to blow out your image size.
The second is if you are admin on the OpenShift cluster, you can relax security on the cluster for the service account the image is run as so that it is allowed to run the container as root. You should avoid doing this if possible, especially with third party images which you do not trust. For details on how to do this see:
https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
A final way, which can work with some images if the total size of what needs its permissions fixed is small, is to use an init container to copy the directories that need write access into an emptyDir volume, and then mount that emptyDir volume on top of the copied directory in the main container (see the sketch below). This avoids needing to modify the image or enable anyuid. The space available in emptyDir volumes may not be enough if you also have to copy application binaries, so this is probably only going to work where the application wants to update config files or create lock files. You wouldn't be able to use it if the same directory holds large amounts of transient file system data such as a cache, database or logs.
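A rough sketch of that last pattern, expressed with the Kubernetes Python client (the image name is a placeholder, and the /config path is taken from the error log above, not from your actual deployment):

    # pip install kubernetes
    from kubernetes import client

    # emptyDir volume shared between the init container and the main container.
    config_copy = client.V1Volume(
        name="config-copy",
        empty_dir=client.V1EmptyDirVolumeSource(),
    )

    # Init container: copy the image's /config content into the emptyDir volume.
    init = client.V1Container(
        name="copy-config",
        image="example/liberty-app:latest",  # placeholder image name
        command=["sh", "-c", "cp -a /config/. /config-copy/"],
        volume_mounts=[client.V1VolumeMount(name="config-copy", mount_path="/config-copy")],
    )

    # Main container: mount the writable copy over /config.
    main = client.V1Container(
        name="liberty",
        image="example/liberty-app:latest",
        volume_mounts=[client.V1VolumeMount(name="config-copy", mount_path="/config")],
    )

    pod_spec = client.V1PodSpec(
        init_containers=[init],
        containers=[main],
        volumes=[config_copy],
    )

The same idea can be written directly as YAML in a Deployment or DeploymentConfig; the point is only that the copy in the emptyDir volume ends up writable by the pod's assigned user ID while the image itself stays unchanged.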

How to track file changes within a Docker Container

Is there an easy way to track file changes (the files will be changed elsewhere) inside a Docker container?
I used COPY within the Dockerfile to test the functionality, but now I need to keep track of whether the copied files are changing in the background.
The changes are made by a different application (not a Docker container). This app fetches data and overwrites those files if something has changed; my container should then react to the changes and synchronize its files.
Is a simple mount enough to achieve that?
Regards
Check out one of the inotify Docker images:
https://github.com/pstauffer/docker-inotify
https://hub.docker.com/r/coppit/inotify-command/
https://hub.docker.com/r/coppit/inotify-command/~/dockerfile/
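Those images wrap inotify around a command. If the container that needs to react already runs Python, a similar effect can be had in-process with the watchdog library; the /data path below is just a stand-in for the bind-mounted directory:

    # pip install watchdog
    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    class SyncHandler(FileSystemEventHandler):
        def on_modified(self, event):
            # A file was rewritten by the external application; reload it or
            # trigger whatever synchronization step the container needs.
            if not event.is_directory:
                print("changed:", event.src_path)

    observer = Observer()
    observer.schedule(SyncHandler(), path="/data", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()

Note that inotify events only fire for changes made on the same host as the bind mount; if the other application writes over a network share, a polling watcher is the more reliable choice.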

How to persist data in a Docker .NET Core Web app?

I have trouble understanding how to work with data in a WebApi-app running with Docker.
In my app, a user can upload files which are stored like this:
~\App_Data\accounts\user123\files\<sha256>.bin
Without configuring any volumes, a Docker container with my app image seems to work fine and writes files without any problems.
Now I'd like to configure things so that the files end up somewhere I can specify explicitly, not inside the default Docker volumes folder.
I assume I need to create a volume mapping?
I have tried creating a folder and mapping it to "/App_Data". It still works as before, but I don't see any files in this folder.
Is it a problem with write access on this folder? If it doesn't have access, will Docker fall back and write to a default volume?
What user/group should have write access to this folder? Is there a "docker" user?
I'm running this on a Synology NAS so I'm only using the standard Docker UI with the "Add Folder" button.
Here are the folders I have tried:
Got it working now!
The problem was this line:
var appDataPath = env.ContentRootPath + @"\App_Data";
which translated to "/app\App_Data" when running in Docker.
First of all, I was using the Windows directory separator '\', which I don't think works on Linux. Also, I don't think the path should include "/app", since paths are resolved relative to that folder. When running this outside of Docker on Windows I got a rooted path, which worked better: @"c:\wwwroot\app\App_Data"
Anyway, by changing to this it started working as expected:
var appDataPath = @"/App_Data";
Update
Had a follow-up problem. I wanted the paths to work both in Docker on Linux and with normal Windows hosting, but I couldn't just use /App_Data as the path because that would translate to c:\App_Data on Windows. So I tried using ./App_Data instead, which worked fine on Windows, resulting in c:\wwwroot\app\App_Data, but it would still not work in Docker. I don't get why, though. Maybe Docker is really picky with the path matching and only accepts an exact match, i.e. /App_Data, because that's the path I have mapped in the container config.
Anyway, this was a real headache; I have spent 6 hours straight on this now. This is what I came up with that works both on Linux and Windows. It doesn't look terribly nice, but it works:
Path.Combine(Directory.GetCurrentDirectory().StartsWith("/") ? "/" : ".", "App_Data");
If you can come up with a better looking method, please feel free to let me know.
Update 2
OK, I think I get it now. I think. When running this in Docker, the data path has to be rooted with '/'; relative paths don't reach the mapped volume. My app files are copied to the container path '/app' and I have mapped my data to '/data'. The current dir is set to '/app', but to access the data I obviously have to point to '/data' and not '/app/data'. I was mistakenly believing that all paths were relative to '/app' rather than to '/'. The likely reason is that I keep my data files inside the app folder when running under standard Windows hosting (which is probably not a very good idea in any case), and this confused me into thinking the same applied in my Docker environment.
Now that I have realized this, it is a lot clearer. I have to use '/data' and not './data', '/app/data' or even 'data' (which is also relative), etc.
In standard Windows hosting, where relative paths are fine, I can still use './data' or any other relative path, which will be resolved relative to ContentRootPath / the current dir. However, an absolute rooted path like '/data' will not work because it resolves to 'c:\data' (the root of the current drive).
I suggest you map the volume; in the Synology Docker application this is done as follows. Check that it is not in read-only mode, because otherwise you will not see the files.
Volume: Since Transmission is a downloader, we need a way to access the downloaded files. Without mapping a physical shared folder on the Synology NAS, all downloaded files are stored inside the container and are difficult to retrieve. On Transmission's Dockerfile page we see two volumes: /config and /downloads. We will now map these two volumes to physical shared folders on the Synology NAS:
Uncheck the Read-Only option, as we need to grant Transmission permission to write data to the physical drives.
