Confused by Docker directory structure (file not found error)

I am getting a file not found error, and even though I manually create the file in the Docker container, it still reports as not found. Solving this is complicated by the fact that I am new to Docker and still learning how everything in the Docker world works.
I am using Docker Desktop with a .NET Core application.
In the .NET application I look for a file to use as an email template. All of this works when I run outside a Docker container, but inside Docker it fails with file not found.
public async Task SendEmailAsyncFromTemplate(...)
{
    // ...snipped for brevity
    string path = Path.Combine(Environment.CurrentDirectory, $@"Infrastructure\Email\{keyString}\{keyString}.cshtml");
    _logger.LogInformation("path: " + path);

    // I added this line because when I connect to the Docker container the root
    // appears to start with Infrastructure, so I chopped the /app part off
    var fileTemplatePath = path.Replace(@"/app/", "");
    _logger.LogInformation("filePath: " + fileTemplatePath);
The container log for the above is:
[12:40:09 INF] path: /app/Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] filePath: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
As mentioned in the code comments, I did this because when I connect to the container, the root shows Infrastructure as the first folder.
So naturally I browse into Infrastructure and the Email folder is missing. I have asked a separate SO question here about why my folders aren't copying.
OK, my Email files and folders under Infrastructure are missing. To test this out, I manually created the directory structure and created the .cshtml file using this command:
docker exec -i addaeda2130d sh -c "cat > Infrastructure/Email/ConfirmUser/ConfirmUser.cshtml" < ConfirmUser.cshtml
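To double-check what actually landed in the container, the tree can be listed from the host (container ID as above; /app as the working directory is an assumption based on the logs):
docker exec addaeda2130d ls -lR /app/Infrastructure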
I chmodded the file permissions to 777 just to make sure the application has access, and then added this debugging code:
_logger.LogInformation("ViewRender: " + filename);
try
{
_logger.LogInformation("Before FileOpen");
var fileExista = File.Exists(filename);
_logger.LogInformation("File exists: " + fileExista);
var x = File.OpenRead(filename);
_logger.LogInformation("After FileOpen:", x.Name);
As you can see from the logs it reports the file does NOT exist even though I just created it.
[12:40:09 INF] ViewRender: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] Before FileOpen
[12:40:09 INF] File exists: False
Well, the only logical conclusion is that I don't understand what is going on, which is why I am reaching out for help.
I have also noted that if I stop the container (not recreate, just stop) and then start it, all the directories and files I created are gone.
So... are these directories/files in memory and not on "disk", and do I need to commit the changes somehow?
That would seem to make sense: the application code is looking for the files on disk, and if they were only in memory they wouldn't be found. But in Googling, Pluralsight courses, etc. I can't find any mention of this.
Where can I start looking in order to figure this out?

A forward slash '/' in a path is different from a backslash '\'. Just change the direction of your slashes and it'll work.
I tried this program in my Docker container and it worked fine:
using System;
using System.IO;

// backslashes don't work on Linux
// string path = Path.Combine(Environment.CurrentDirectory, @"files\hello\hello.txt");
string path = Path.Combine(Environment.CurrentDirectory, @"files/hello/hello.txt");
Console.WriteLine($"Path: {path}");
string text = System.IO.File.ReadAllText(path);
Console.WriteLine(text);
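For what it's worth, passing each segment as a separate argument to Path.Combine lets the runtime pick the correct separator for the OS, so the same code runs on Windows and in a Linux container. A minimal sketch, reusing the hypothetical files/hello layout from the example above:

using System;
using System.IO;

// Path.Combine inserts the platform's directory separator between segments:
// "files/hello/hello.txt" on Linux, "files\hello\hello.txt" on Windows.
string path = Path.Combine(Environment.CurrentDirectory, "files", "hello", "hello.txt");
Console.WriteLine($"Path: {path}");
Console.WriteLine(File.ReadAllText(path));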

Related

Docker Desktop on Hyper-V - bind mounts do not propagate inotify on file copy

I have Docker Desktop installed on my dev machine, with WSL 2 disabled, and I have shared my entire C:/ drive.
Then I have a container with a .NET 6 application inside that uses a FileSystemWatcher to observe one directory and read any file that is pasted into it.
I read in several articles on the internet that WSL 2 does not propagate file notifications from the Windows file system to the underlying Linux distribution that Docker is running on, hence there is no way I can bind-mount the directory I have to watch into the app's container. So I switched to Docker's old Hyper-V backend.
I run the container with the following command:
docker run `
--name mlc-importer `
-v C:/temp/DZBank:/opt/docker/mlc_importer/dfs/DZBank `
-v C:\temp\appsettings.json:/app/appsettings.json `
-v C:\temp\log4net.config:/app/log4net.config `
mlc-importer
The container starts and begins watching for new files. The strange thing is that when I cut a file and paste it into the directory, the app in the container registers the new file and reads it, but when I copy the file and paste it into the directory, the app in the container does not register it.
Can someone help me? I can't figure out where the problem might come from.
Thanks in advance,
Julian
I managed to solve my problem, and I'll post the solution here in case somebody encounters the same thing.
The problem was in the file itself. I found this out when I started a new container with only Debian, installed inotify-tools, and bind-mounted the same path. When I copied the file and pasted it into the bound directory, the output was three MODIFY events. When I cut the file and pasted it in, the events were one CREATE followed by two MODIFY. So with copy: three MODIFY events; with cut: one CREATE and two MODIFY.
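For reference, this is roughly how to reproduce that check (Debian container with the bind mount from the question; the mount target /watched is a placeholder):

docker run -it --rm -v C:/temp/DZBank:/watched debian
# inside the container:
apt-get update && apt-get install -y inotify-tools
inotifywait -m /watched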
Then I inspected the copied file and saw this (screenshot of the file's Properties dialog omitted). When I checked the checkbox and hit OK, everything was fine. And since the app in the container (from the post) only hooks the "file created" callback, it does not trigger when the file is only modified.
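A minimal sketch of the workaround implied here, assuming the watcher path from the question: subscribe to Changed as well as Created, so files that only produce MODIFY events are still picked up:

using System;
using System.IO;

var watcher = new FileSystemWatcher("/opt/docker/mlc_importer/dfs/DZBank");
// Copied files may surface only as MODIFY events, so hook Changed as well as Created.
watcher.Created += (s, e) => Console.WriteLine($"Created: {e.FullPath}");
watcher.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
watcher.EnableRaisingEvents = true;
Console.ReadLine(); // keep the process alive while watching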
Hope this helps someone with a similar problem

How to access my file system from a dockerized pgadmin4

I tried to install pgadmin4 on my system in several ways, but each time I was defeated by the intricacies of the install. Luckily I discovered a Docker image (dpage/pgadmin4) that worked out of the box. In my docker-compose.yml I added a volume statement:
volumes:
- /var/lib/pgadmin4:/var/lib/pgadmin
in order to preserve the pgadmin data over successive runs. pgadmin4 is accessible at 0.0.0.0:5050 and all works fine.
However, I cannot access the files on my local file system with the query tool; they are all hidden in the Docker file system. Fortunately that lives under /var/lib/pgadmin4 on my local machine. In that directory there is a directory storage, which contains a directory named after the ID I use to log in: the ID x@y.z becomes directory x_y.z, and that contains the files and folders I had created from my browser as a test. I tried to change this in the pgadmin4 options to /home/user/development, but that path is not recognized because it is not under x_y.z.
Question: how can I change pgadmin4's path from /var/lib/pgadmin4/storage/x_y.z into /home/user/development?
Update
I tried to link a part of my home directory into /var/lib/pgadmin4/storage/x_y.z as a symbolic link:
sudo ln -s /home/user/Documents
After that command there exists a linked directory /var/lib/pgadmin4/storage/x_y.z/Documents with uid:gid root:root and 777 permissions. When I next start the query tool and click Open, the open dialog appears and I get 4 identical error messages:
Error: [Errno 2] No such file or directory: /var/lib/pgadmin4/storage/x_y.z/Documents
I have changed the owner:group to the relevant ones I could think of:
1000:1000 (me as user)
root:root
5050:5050 (pgadmin uid and gid)
In all three cases I got this error. What is wrong here?
You can override paths in config_local.py (create it if it does not exist already). Change
STORAGE_DIR = os.path.join(DATA_DIR, 'storage')
to
STORAGE_DIR = '/home/user/Documents'
Restart pgAdmin4.
Ref: https://www.pgadmin.org/docs/pgadmin4/4.22/config_py.html
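Note that pgAdmin runs inside the container, so whatever STORAGE_DIR points at must exist in the container's file system; that is also why the symlink above fails, since /home/user/Documents does not exist inside the container. A sketch of the extra bind mount that would make it visible (docker-compose.yml, host path from the question):

volumes:
  - /var/lib/pgadmin4:/var/lib/pgadmin
  # assumed extra mount so the host documents are visible at the same path inside the container
  - /home/user/Documents:/home/user/Documents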

docker-compose caches run results

I'm having an issue with docker-compose where I'm passing a file into the container when it's run. The issue is that it doesn't seem to recognize when the file has been changed and serves the saved result back indefinitely until I change the name of the file.
An example (modified names for brevity):
jono@macbook:~/myProj% docker-compose run vpn conf.opvn
Options error: Unrecognized option or missing parameter(s) in conf.opvn:71: AXswRE+
5aN64mYiPSatOACC6+bISv8RcDPX/lMYdLwe8zQY6qWtbrjFXrp2 (2.3.8)
Then I change the file, save it, and run the command again - exact same output.
Then without changing anything I do this:
jono@macbook:~/myProj% cp conf.opvn newconf.opvn
And when I run $ docker-compose run vpn newconf.opvn it works. Seems really silly.
I'm working with tmux on a Mac, if that somehow affects it. Is this the expected behaviour? I couldn't find anything documenting this on the docker-compose homepage.
EDIT:
Specifically I'm using this repo from the amazing Jess.
The image you are using mounts your current directory as a volume; basically, the file conf.opvn is mapped into the Docker container.
When you change the file, the container doesn't see that change, but it does pick up the rename (which the container sees as a new file). This is most probably due to the user rights of the file and of the folder in the Docker container where the file is mounted. Try changing the file's permissions to 777 before starting the process and check again.
You can find a discussion about this on the official Docker forum.
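One more check worth making (my assumption, not something the original answer covers): many editors save by writing a temporary file and renaming it over the original, which gives the file a new inode; a single-file bind mount keeps tracking the old inode, so the container never sees the edit. You can confirm whether a save replaced the inode:

ls -i conf.opvn   # note the inode number
# edit and save the file in your editor, then:
ls -i conf.opvn   # if the number changed, the container's mount is now stale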

Cayley docker isn't writing data

I'm using [or, trying to use] the docker cayley from here: https://github.com/saidimu/cayley-docker
I created a data dir at /var/lib/apps/cayley/data and dropped the .cfg file in there that I was instructed to make:
{
    "database": "myapp",
    "db_path": "/var/lib/apps/cayley/data",
    "listen_host": "0.0.0.0"
}
I ran docker cayley with:
docker run -d -p 64210:64210 -v /var/lib/apps/cayley/data/:/data saidimu/cayley:v0.4.0
and it runs fine; I'm looking at its UI in the browser.
And I add a triple or two, and I get success messages.
Then I go to the query interface and try to list any vertices:
> g.V
and there is nothing to be found (I think):
{
"result": null
}
and there is nothing written in the data directory I created.
Any ideas why data isn't being written?
Edit: just to be sure there wasn't something wrong with my data directory, I ran the volume-mounted Docker image for neo4j against the same directory and it saved data correctly. So that eliminates some possibilities.
I cannot comment yet, but I think that to obtain results from your query you need to use the All keyword:
g.V().All() // Print All the vertices
OR
g.V().Limit(20) // Limits the results to 20
If that was not your problem, I can edit and share my Dockerfile, which is derived from the same one you are using.
You may refer to the lib here to learn more about how to use Cayley's APIs, the data format in Cayley, and some related topics like N-Triples, N-Quads and RDF:
Cayley API usage examples (mocha test cases)
Clearly designed entry-level N-Quads data for getting started: in the project's test/data directory
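One more thing worth double-checking (my assumption from the docker run line in the question, not something this answer covered): the volume is mounted at /data inside the container, so if Cayley reads the .cfg inside the container, a db_path pointing at the host path lands outside the mounted volume. A config like this would keep the graph on the mount:

{
    "database": "myapp",
    "db_path": "/data",
    "listen_host": "0.0.0.0"
}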

Deploying Symfony project using rsync on Windows 7 - permission problem

I am desperately trying to deploy my Symfony app with rsync.
I installed cwRsync and it somewhat works; at least SSH does. My app is located in E:\xampp\htdocs\MyProject.
rsync actually does create one directory on my server, but other than that I only get permission errors.
Now, this seems to be a common problem; however, I have not been able to implement any of the solutions, such as this one:
cwRsync ignores "nontsec" on Windows 7
I installed cwRsync to the following directory: c:\cwrsync
My question: what does my fstab file need to look like, and where do I even have to put it? Are there any other solutions to this problem?
Thanks in advance!
I'd posted the question you referred to. Here's what I ended up doing to get symfony project:deploy to work from Windows 7 (it required hacking Symfony a bit, so it may not be the most optimal solution). With this solution you don't need full-blown Cygwin installed; you just need cwRsync.
In your fstab, add this line (fstab should be located under [cwrsync install dir]\etc):
C:/wamp/www /www ntfs binary,noacl 0 0
This essentially maps "C:\wamp\www" on your windows filesystem to "/www" for cygwin.
Modify symfony/lib/task/sfProjectDeployTask.class.php:
protected function execute($arguments = array(), $options = array())
{
    ...
    $dryRun = $options['go'] ? '' : '--dry-run';

    // -- start hack --
    if (isset($properties['src']))
        $src = $properties['src'];
    else
        $src = './';
    $command = "rsync $dryRun $parameters -e $ssh $src $user$host:$dir";
    // -- end hack --

    $this->getFilesystem()->execute($command, $options['trace'] ? array($this, 'logOutput') : null, array($this, 'logErrors'));
    $this->clearBuffers();
}
This allows you to specify an additional src field in properties.ini:
src=/www/myProject
Doing this makes the whole filesystem mapping between Windows and Cygwin much more clearly defined. Cygwin (and cwRsync) understands Unix paths much better than Windows paths (i.e. /www vs. C:/wamp/www), so doing this makes everything just work.
Run a Script
I think rsync always breaks your file permissions when syncing between Windows and Linux.
You can quite easily create a script that goes through your files after a sync and resets their permissions using chmod, though.
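For example, something like this run on the server after each deploy (the project path is a placeholder):

#!/bin/sh
# Reset permissions rsync may have mangled: directories traversable, files readable.
find /var/www/myProject -type d -exec chmod 755 {} +
find /var/www/myProject -type f -exec chmod 644 {} +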
