Can I transfer files from a local server to my Nextcloud server without using the internet and allow users to access them? (same computer) - docker

I used Docker to set up a Nextcloud server for myself and my family.
Can I transfer files from the local machine to my Nextcloud server without going over the internet, and still allow users to access them?
I ask because I have discovered two strange things:
1. Placing files directly under a specific user's files path on the server does not let that user access the files.
2. Even if I directly change the content of a file on the server, the user still reads the original content, as long as I don't delete the file the user added.
Or is the user files path I have in mind incorrect?
I think it's /var/www/html/data/"USERID"/files
I would like to know how to solve this, and also what causes the two problems above.
Thank you so much.
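For what it's worth, that path is the right one for the official Docker image, and both symptoms are consistent with how Nextcloud works: it tracks files, sizes, and ETags in its database (the oc_filecache table), not on disk, so files added or changed directly in the data directory are not noticed until a rescan. A minimal sketch, assuming the container is named nextcloud (adjust the container name and user ID):

```sh
# Rescan one user's files so Nextcloud's database picks up changes made on disk.
docker exec -u www-data nextcloud php occ files:scan USERID

# Or rescan every user:
docker exec -u www-data nextcloud php occ files:scan --all
```

This would also explain the second symptom: clients compare the ETag stored in the database, which does not change when you edit a file behind Nextcloud's back.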

Related

Access Split Database Problem with Simultaneous Users

I have a split database that was created in Access 2016 and has some users using Access 2016 and others using Access 365.
It works fine when only one person is using it. When two people access it at the same time, it sometimes generates a copy of the back-end file for one of the users, so that user's data is not saved to the networked back-end file. The problem usually hits the second user to open the front end, but not always.
It gives a message like 'unable to sync ***_be.accdb' and generates a copy. It doesn't matter whether the user is on 2016 or 365.
Another symptom: when this occurs, if the user whose file was copied opens one of the forms, they see the screen (form) of the person who is correctly linked to the networked back-end file.
When I monitor the networked back-end file, sometimes it picks up changes quickly and other times it takes a while. Basically, it's not consistent in how it allows access and transfers data to the back-end file from user to user.
I've tried one user on VPN and the other on the network, both users on the network, and both users on VPN, with no obvious difference.
Has anyone run into this?

Getting "ECONNREFUSED" error when trying to upload to Wolkenkit Blob Server

I'm currently developing a Wolkenkit application which is run on my local machine.
I want to upload a file from the Wolkenkit app to the blob server (as documented here).
When sending a POST request from the server to https://local.wolkenkit.io:3001/, Node.js gives me the error ECONNREFUSED.
I've tested the POST request with another program and it works there. Any idea why it doesn't work from the wolkenkit application itself?
Thanks!
The Storing files sample you linked to shows code that is to be run in the browser, not in the backend itself. Of course, both should work, but there are a few minor differences you need to watch out for.
Fixing the host name
First, I suppose that local.wolkenkit.io in your case maps to 127.0.0.1, which is the default for wolkenkit. That means that when you try to connect to this domain from within a Docker container, the container does not call out to the blob storage container, but stays within itself. So, the first thing that needs to be fixed is the host name.
Basically, there are two options for this: You can either set up local.wolkenkit.io so that it resolves to the external IP address of your machine. This would work, but it is pretty cumbersome. The other option is to directly address the container that is responsible for blob storage by its internal name. The internal name is <name-of-your-app>-depot-file. So you need to replace https://local.wolkenkit.io:3001/ with https://<...>-depot-file.wolkenkit.io:3001/.
Fixing the port
Second, the port is wrong. The blob storage service runs internally on port 3000; it is only exposed externally on 3001. So instead of https://<...>-depot-file.wolkenkit.io:3001/ you need to use https://<...>-depot-file.wolkenkit.io:3000/.
Once you have done this, you should not get errors like ECONNREFUSED any more, since the service can now be found.
Fixing SSL issues
Third, since you are now connecting to the blob storage service using a different domain name, the SSL certificate no longer matches: it was issued for local.wolkenkit.io. As a result, you will get SSL errors when trying to connect.
The simplest way around this is to disable all SSL checks (though this is also the most insecure way to handle it!). How to do this depends on the HTTP client module you are using. E.g., in request there is an option called strictSSL that you can set to false.
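For example, with request that could look roughly like this (a sketch only: the app name my-app, the file name, and the bare route are placeholder assumptions; disabling strictSSL is for local development only):

```typescript
import fs from 'fs';
import request from 'request';

// POST a file to the depot container, addressed by its internal name and
// internal port. "my-app" is a placeholder for your actual application name.
fs.createReadStream('./picture.png').pipe(
  request.post({
    url: 'https://my-app-depot-file.wolkenkit.io:3000/',
    strictSSL: false // skips certificate validation; insecure, local use only
  }, (err, response) => {
    if (err) {
      console.error(err);
      return;
    }
    console.log('Status:', response.statusCode);
  })
);
```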
Of course, what you actually should do is either use a custom certificate that also includes this domain name, or write a function that handles the certificate check and accepts the presented certificate in this specific case.
If you do all of this, things should work :-)
PS: I am one of the authors of wolkenkit. Thanks a lot for bringing up this issue, and we will take care of this in the future, to make storing blobs easier.

Edit system files from within Ruby on Rails app

I am putting together a web interface for an embedded hardware product (think of your router) that needs the ability to change system files that are owned by root. In particular, I need to change the network address and then restart the service.
What is the best way to handle this, both for editing the file and for securely handling the escalation (preferably outside of the webapp somehow)? My idea was a user that scripts can use, which can sudo with no password but is banned from SSH or terminal login. However, I am unsure whether this is the best thing to do security-wise, as it leaves that user open to attacks that can then escalate privileges.
I effectively want to read ifcfg-eth0, write the changes to a temporary file, double-check that those changes are valid, write the result back to the original ifcfg-eth0 file, and finally restart the network interface.
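One common way to contain the risk described above is to avoid blanket passwordless sudo and instead whitelist a single audited script in sudoers. A minimal sketch, with hypothetical names (a webapp user and an update-network.sh script that performs the read/validate/write/restart cycle):

```
# /etc/sudoers.d/webapp -- always edit with visudo
# Allow the webapp user to run exactly one script as root, and nothing else.
webapp ALL=(root) NOPASSWD: /usr/local/sbin/update-network.sh
```

The Rails process then never touches ifcfg-eth0 directly, and the webapp account can additionally have its shell set to /sbin/nologin to block interactive logins.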

File access control in Rails

I have a web application which allows users to upload files and share them with other people across the internet. Anyone who has been given access can download a file, but if the uploader doesn't specifically share the file with someone, that person can't download it.
Since the user permissions are controlled by Rails, each time someone downloads a file it is sent to them from a Rails process. This is a serious bottleneck: Rails is needed for the file upload and the permissions, but it shouldn't be in the way, taking up memory, just so others can download files.
I would like to split the application onto different servers for the frontend, the database, and the file server. If a user goes to my site, they should be able to download a file directly from something like my-fileserver.domain.com/file/38183 instead of running it through Rails.
What is the best option for this? I would like to control file access at the database level, not the file system level, but I don't want Rails taking up all of the memory on my system for such a simple process. Any ideas?
Edit:
One thing I may be able to do is load a list of files/permissions from MySQL into a Node.js app and have it grant access as a true/false response based on what the file server sends in, as sketched below. This still requires the file server to run a web server, however.
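That true/false hand-off is essentially what nginx's auth_request module (ngx_http_auth_request_module) provides: nginx serves the file itself and delegates only the yes/no decision to a lightweight service. A sketch, assuming nginx fronts the uploads directory and the Node.js checker listens on port 8081 (both assumptions):

```nginx
# Serve files directly, but ask the permission service first.
location /file/ {
    auth_request /auth;      # subrequest: 2xx allows, 401/403 denies
    root /srv/uploads;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8081;    # the Node.js permission checker
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```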
Maybe you could generate a random URL for each file, and have a central system control access.
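A purely random URL can leak and never expires, so a common refinement is a signed, expiring URL: the central system issues it and the file server merely verifies it, with no Rails process or database lookup on the download path. A minimal sketch (the shared secret, host name, and route are hypothetical):

```typescript
import { createHmac } from 'crypto';

// Hypothetical secret shared between the central system and the file server.
const SECRET = process.env.FILE_URL_SECRET ?? 'change-me';

// Central system: issue a time-limited, tamper-proof download URL.
function signedUrl(fileId: number, ttlSeconds = 300): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = createHmac('sha256', SECRET)
    .update(`${fileId}:${expires}`)
    .digest('hex');
  return `https://my-fileserver.domain.com/file/${fileId}?expires=${expires}&sig=${sig}`;
}

// File server: recompute the HMAC and check expiry; no database needed.
function verify(fileId: number, expires: number, sig: string): boolean {
  if (Math.floor(Date.now() / 1000) > expires) {
    return false;
  }
  const expected = createHmac('sha256', SECRET)
    .update(`${fileId}:${expires}`)
    .digest('hex');
  return sig === expected;
}
```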

Elmah XML Logging on Load Balanced Environment

We're implementing Elmah for an internal application. For development and testing we use a single server instance, but in production the app is delivered in a load balanced environment.
Everything works like a charm with Elmah, except that the logging is done independently on each server. What I mean is that if an error happens on Server1, the XML file is stored physically on that server, and the same goes for Server2, since I'm storing the files in App_Data.
When I access the axd location to see the error list, I only see the errors from the server that happened to serve my request.
Is there any way to consolidate the XML files other than putting them on a shared folder? A shared folder would require us to allow the user that runs the application on each server to access that folder, and the folder would live on only one of the servers instead of both.
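For reference, the log location is just the logPath of XmlFileErrorLog in web.config, so the shared-folder variant being weighed here would look roughly like this (the UNC path is hypothetical, and each server's application identity would need write access to it):

```xml
<elmah>
  <!-- Default per-server setup: each node writes to its own App_Data. -->
  <!-- <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" /> -->

  <!-- Consolidated variant: both nodes write to one share. -->
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="\\fileserver\elmah-logs" />
</elmah>
```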
I cannot use In-Memory or Database logging since FileLog is the only one allowed.
You might consider using ElmahR for this case, since you are not able to implement in-memory or database logging. ElmahR provides a central location that the two load balanced servers can send errors to (in addition to logging them locally) via an HTTP post. You can then access the ElmahR site to view an aggregated list. Also, ElmahR stores the error messages in a SQL Server CE database, so it can persist the error messages it receives.
Keep in mind that if the ElmahR dashboard app's design does not meet your initial needs/desires, it can be modified as needed, given that it is an open source project.
Hope this might be a viable option.
