I would also like to know why this would even happen in the first place. My assumption is that I somehow messed up symlinks after installing Docker on my local machine. Why do you think it happens?
I'm having issues with creating a usable Docker container for a ColdFusion 2021 app. I can create the container, but every time it is rebuilt I have to reinstall all of the modules (admin, search, etc.). This is an issue because the site the container will be hosted on will be rebuilding the container every day.
The container is being built with docker-compose. I have tried using the installModule and importModule environment variables, running the install command from the Dockerfile, building the container and creating a .car file to keep the settings, and disabling the secure mode using the environment variables.
I have looked at the logs, and all of the different methods used to install/import the modules do actually download and install them. However, when the container first starts to spin up there's a section where the selected modules are installed (and the modules that are not installed are listed). That section is followed by the message that the ColdFusion services are available; then it starts services, security, etc., uninstalls (and removes) the modules, says that no modules are going to be installed because they are not present, and gives the "services available" message again.
Somehow, it seems that one of the services is uninstalling and removing the module files, and none of the environment variables (or even the setup script) affect that process. I thought it might be an issue with the secure setup, but even with that disabled the problem persists. My main question is: what could be causing the modules to be uninstalled?
I was also looking for clarification on a couple of items:
a) All of the documentation I could find says that the .CAR file will be automatically loaded if it is in the /data folder (and in one spot it's referred to as the image's /data folder). That would be at the top level with /opt and /app, right? I couldn't find an existing data folder anywhere.
b) Several of the logs and help functions mention a /docs folder, but I can't find it in the file directory. Would anyone happen to know where I can find them? It seems like that would be helpful for solving this.
Thank you in advance for any help you can give!
I don't know if the Adobe images provide a mechanism to automatically install modules every time the container is rebuilt, but I recommend you look into the Ortus CommandBox-based images. They have an environment variable for the cfpm packages you want installed, as well as CFConfig, which is much more robust than CAR files.
https://hub.docker.com/r/ortussolutions/commandbox/
FYI, I work for Ortus Solutions.
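For reference, a rough docker-compose sketch of that approach. The environment variable names and module list below are placeholders and assumptions to verify against the ortussolutions/commandbox image documentation, not values taken from your setup:

# docker-compose.yml (sketch; variable names, tag, and module list are assumptions to check against the image docs)
services:
  cfml:
    image: ortussolutions/commandbox   # pick the Adobe ColdFusion 2021 engine/tag per the image docs
    ports:
      - "8080:8080"                    # host:container
    environment:
      # hypothetical variable listing the cfpm modules to install on every start
      CFPM_INSTALL: "administrator,search"
      # hypothetical path to a CFConfig JSON file holding the server settings
      CFCONFIG: /app/cfconfig.json
    volumes:
      - ./:/app

Because the modules and settings are declared in the compose file, they are reapplied every time the container is recreated, instead of living only inside a container that gets thrown away.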
I'm trying to create a container disk size limit in Docker. Specifically, I have a container that downloads data, and I want this data to stay under a limit that I can cap beforehand.
So far, what I've created works on the surface level (it prevents the file from actually being saved onto the computer). However, I can watch the container doing its work, and I can see the download complete to 100% before it says 'Download failed.' So it seems like it's downloading to a temporary directory and then checking the size of the file before passing it to the final location (or not).
This doesn't fully resolve the issue I was trying to fix, because the download obviously still consumes a lot of resources. I'm not sure what exactly I am missing here.
This is what creates the above behavior:
sudo zfs create new-pool/zfsvol1
sudo zfs set quota=1G new-pool/zfsvol1
docker run -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " -v /newpool/zfsvol1:/data containerName azureFileToDownload
I got the same behavior while running the container interactively without volumes and downloading into the container. I tried changing the storage driver (shown in docker info) from overlay to zfs and it didn't help. I looked into Docker plugins, but they didn't seem like they would resolve the issue.
This is all run inside an Ubuntu VM; I made a zfs pool to test all of this. I'm pretty sure this is not supposed to happen because it's not very useful. Would anyone have an idea why this is happening?
OK, so I actually figured out what was going on, and as @hmm suggested, the problem wasn't caused by Docker. The data was being buffered in memory before being written to disk, and that was the issue. It seems like azcopy (Azure's copy command) first downloads to memory before saving to disk, which is not great at all, but there is nothing to be done about it in this case. I think my approach itself works completely.
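For anyone hitting the same thing: the ZFS quota handles the disk side, and the remaining exposure is the RAM azcopy uses while buffering. If that also needs a cap, Docker's run-time memory limits can be added to the same command; this is only a sketch, and I have not verified how gracefully azcopy fails when it hits the limit.

# same command as above, with a hard memory cap added (sketch, untested with azcopy)
docker run --memory=1g --memory-swap=1g \
  -e "TASK=download" -e "AZURE_SAS_TOKEN= ... " \
  -v /newpool/zfsvol1:/data containerName azureFileToDownload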
I've recently delved into the wonder that is Docker and have set up my containers using docker-compose (an Express app and a MySQL DB).
It's been great for sharing the project with the team and pushing to a VPS, but one thing that's fast becoming tedious is the need to stop the running app, run docker-compose build, and then docker-compose up any time the code changes (which I believe is also creating numerous unnecessary images?).
I've searched around but haven't found a clear-cut way to get around this, barring ditching docker-compose locally and using docker run to run the Express app pointed at a local DB (which would do away with a lot of the easy-setup perks that come with Docker, such as building the DB from scratch).
Is there a Nodemon-style way of working with Docker (images/containers get updates automatically when code changes)? Is there something obvious I'm missing? Or is my approach the necessary "evil" that comes with working on a Dockerised app?
You can mount a volume to your source directory for local development. Any changes on the host will be reflected in the container. https://docs.docker.com/storage/volumes/
You might consider separate services for deployment/development. I usually have a separate service which mounts the source directory and installs test dependencies inside the container.
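For example, a compose override along these lines gives a nodemon-style loop. The service name, container paths, and entry file are assumptions about your project layout, and nodemon is assumed to be a dev dependency:

# docker-compose.override.yml (sketch; names and paths are assumptions)
services:
  app:
    volumes:
      - ./:/usr/src/app             # bind-mount the source so host edits show up in the container
      - /usr/src/app/node_modules   # keep the node_modules installed in the image
    command: npx nodemon server.js  # restart the Express app when files change

docker-compose up picks up docker-compose.override.yml automatically, so you only need to run docker-compose build again when package.json changes.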
I have trouble understanding how to work with data in a WebApi app running in Docker.
In my app, a user can upload files which are stored like this:
~\App_Data\accounts\user123\files\<sha256>.bin
Without configuring any volumes, a Docker container with my app image seems to work fine and writes files without any problems.
Now I'd like to configure things so that the files end up somewhere I can specify explicitly, and not inside the default Docker volumes folder.
I assume I need to create a volume mapping?
I have tried creating a folder and mapping it to "/App_Data". It still works as before, but I still don't see any files in this folder.
Is it a problem with write access on this folder? If not having access, will Docker fallback and write to a default volume?
What user/group should have write access to this folder? Is there a "docker" user?
I'm running this on a Synology NAS so I'm only using the standard Docker UI with the "Add Folder" button.
I have tried mapping several different folders.
Got it working now!
The problem was this line:
var appDataPath = env.ContentRootPath + @"\App_Data";
which translated to @"/app\App_Data" when running in Docker.
First of all, I was using the Windows directory separator '\', which I don't think works on Linux. Also, I don't think the path can include "/app", since it is relative to that folder. When running this outside of Docker on Windows I got a rooted path, which worked better: @"c:\wwwroot\app\App_Data".
Anyway, by changing to this it started working as expected:
var appDataPath = @"/App_Data";
Update
Had a follow-up problem. I wanted the paths to work both in Docker on Linux and with normal Windows hosting, but I couldn't just use /App_Data as the path because that would translate to c:\App_Data on Windows. So I tried using this path instead: ./App_Data, which worked fine in Windows, resulting in c:\wwwroot\app\App_Data. But this would still not work in Docker, unfortunately. I don't get why, though. Maybe Docker is really picky with the path matching and only accepts an exact match, i.e. /App_Data, because that's the path I have mapped in the container config.
Anyway, this was a real headache; I have spent 6 hours straight on it now. This is what I came up with that works both on Linux and Windows. It doesn't look terribly nice, but it works:
Path.Combine(Directory.GetCurrentDirectory().StartsWith("/") ? "/" : ".", "App_Data");
If you can come up with a better looking method, please feel free to let me know.
Update 2
OK, I think I get it now. I think. When running this in Docker, every path has to be rooted with '/'; relative paths are not allowed. My app files are copied to the container path '/app' and I have mapped my data to '/data'. The current dir is set to '/app', but to access the data I obviously have to point to '/data' and not '/app/data'. I mistakenly believed that all paths were relative to '/app', not to '/'. The likely reason is that I keep my data files inside the app folder when running under standard Windows hosting (which is probably not a very good idea in any case), and that confused me into thinking the same applied to my Docker environment.
Now that I have realized this, it is a lot clearer. I have to use '/data' and not './data', '/app/data', or even 'data' (which is also relative), etc.
In standard Windows hosting, where relative paths are fine, I can still use './data' or any other relative path, which will be resolved relative to ContentRootPath/CurrentDir. However, an absolute rooted path like '/data' will not work, because it will resolve to 'c:\data' (relative to the root of the current drive).
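To make it concrete, it's the volume mapping itself that defines the absolute path the code has to use inside the container. This is only the CLI equivalent of the Synology "Add Folder" mapping, and the host path and image name are just examples:

# host folder mapped to /data inside the container (host path and image name are illustrative)
docker run -d -v /volume1/docker/appdata:/data my-webapi-image
# inside the container the files then live under /data, regardless of the current dir
docker exec <container-id> ls /data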
I suggest you map the volume; in the application it is done as follows. Check that it is not mapped in read-only mode, because then you will not see the files.
Volume: Since Transmission is a downloader, we need a way to access the downloaded files. Without mapping a physical shared folder on the Synology NAS, all downloaded files will be stored inside the container and are difficult to retrieve. On Transmission's Dockerfile page we saw two volumes in Transmission: /config and /downloads. We will now perform the following to map these two volumes to physical shared folders on the Synology NAS:
Un-check the Read-Only option as we need to grant Transmission permission to write data into the physical drives.
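On the command line, the same mapping would look roughly like this; the image name and host shared-folder paths are illustrative, so adjust them to your own setup:

# CLI equivalent of the volume mapping done in the Synology Docker UI (paths and image are examples)
docker run -d --name transmission \
  -v /volume1/docker/transmission/config:/config \
  -v /volume1/downloads:/downloads \
  linuxserver/transmission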
I am venturing into using docker and trying to get a firm grasp of the product.
While I love everything it promises it is a big change from doing things manually.
Right now I understand how to build a container, attach your code, commit and push it to your repo.
But what I am really wondering is how I update my code once it's deployed. For example, I have some minor bug fixes but no changes to dependencies, and I also run a database in the same container.
Container:
Node & NPM
Nginx
MySQL
PHP
Right now the only way I understand you can do it is to remove the container, pull the new image, and run it again, but I am thinking you will lose the database data.
I have been reading into https://docs.docker.com/engine/tutorials/dockervolumes/
and I'm thinking maybe the container could mount a data volume that persists between containers.
What I am trying to do is run a web app/website with the above container layout and just change code with latest bugfixes/features.
You're quite correct. Docker images are something you should be rebuilding and discarding with each update; avoid docker commit wherever possible (outside your build scripts, anyway).
Persistent state should be managed via data containers that you then mount with your image. Thus your "data" is decoupled from that specific version and instance of the application.
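These days the same decoupling is usually done with named volumes rather than dedicated data containers. A compose sketch, with service and volume names chosen for illustration:

# docker-compose.yml (sketch; service and volume names are illustrative)
services:
  app:
    build: .
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql   # named volume: the data outlives any rebuild of the app or db containers
volumes:
  dbdata:

You can then rebuild and replace the application image as often as you like; only explicitly removing the named volume (docker volume rm) would drop the database files.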