I have installed the Bitnami LAMP stack on AWS. The installation was quick and smooth, and I was quickly able to see the Bitnami welcome page.
A PHP Hello World was successful, and uploading files with FileZilla to the folder /opt/bitnami/apache2/htdocs worked fine.
However, PHP could not write a file to this folder with the following script:
<?php
echo "Hello World<br>";
$data_table="this is a file write test";
$file = 'test.txt';
$handle = fopen($file, "w") or die("Unable to open file!");
echo "The file {$file} can be written";
fwrite($handle, $data_table);
fclose($handle);
?>
In fact, I get the error message "Unable to open file!".
Apparently the web server cannot write to the folder, while FileZilla can.
I spent several hours trying to understand what is wrong. I tried using the complete path, temporarily changed the permissions to 777 with chmod, and changed the owner and group with chown, but with no success.
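For reference, the commands I tried looked roughly like this (assuming the Apache process in the default Bitnami stack runs as the daemon user, which is my understanding):
# change owner/group of the docroot to the Apache user (assumption: daemon)
sudo chown daemon:daemon /opt/bitnami/apache2/htdocs
# temporary, only for testing
sudo chmod 777 /opt/bitnami/apache2/htdocs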
I also tried other PHP applications that need to create files, and even those did not work. Now I am quite frustrated. Any help is highly appreciated.
Roberto
I'm trying to find a way to redirect both the output and the errors to AWS S3 as a file from inside Docker. There are already some great answers in this link.
Using the first answer given in the link, I tried the following (outside Docker):
python3 train.py 2>&1 | aws s3 cp - s3://my_bucket_name/folder/output.log
This works properly: the output.log file gets created in my S3 bucket as I intend. But when I put the same command as the CMD instruction inside the Dockerfile, it does nothing.
CMD python3 train.py 2>&1 | aws s3 cp - s3://my_bucket_name/folder/output.log
In fact, the container seems to get stuck and terminates after a while.
However, if I use the following instruction inside the Dockerfile, the output gets created in the mounted directory without any issue:
CMD python3 train.py > /mount/directory/output.log 2>&1
But I want the file uploaded to S3 live.
My use case:
I'm training a deep learning model on an EC2 instance, and I want to capture whatever appears on the console as a log file and store it on S3 live. Whenever a log file is uploaded to S3, a Lambda function is triggered and sends that log file to localhost/another server for some processing.
Also, is there any way to keep showing the output in the main console while the file is being uploaded to S3?
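For illustration, this is the kind of thing I have in mind; it is only an untested sketch, and the -u flag and the tee to /dev/stderr are my guesses at how to keep the output unbuffered and still visible in the console:
# -u keeps Python's output unbuffered; tee duplicates the stream to the
# container's stderr so it still shows up in docker logs, while the copy
# on stdout is piped to the aws CLI as before
CMD python3 -u train.py 2>&1 | tee /dev/stderr | aws s3 cp - s3://my_bucket_name/folder/output.log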
P.S. I don't have a software background. I'm a mathematician trying to get into the field of deep learning, so if I've framed the question wrong or used the wrong terminology, pardon me.
I am getting a file-not-found error, and even though I manually create the file in the Docker container, it is still reported as not found. Solving this is of course complicated by my being new to Docker and still learning how everything in the Docker world works.
I am using Docker Desktop with a .NET Core application.
In the .NET application I am looking for a file to use as an email template. All of this works when I run outside a Docker container, but inside Docker it fails with file not found.
public async Task SendEmailAsyncFromTemplate(...)
{
...snipped for brevity
string path = Path.Combine(Environment.CurrentDirectory, @$"Infrastructure\Email\{keyString}\{keyString}.cshtml");
_logger.LogInformation("path: " + path);
//I added this line because when I connect to the docker container the root
//appears to start with Infrastructure, so I chopped the /app part off
var fileTemplatePath = path.Replace(@"/app/", "");
_logger.LogInformation("filePath: " + fileTemplatePath);
The container log for the above is
[12:40:09 INF] path: /app/Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] filePath: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
As mentioned in the comments, I did this because when I connect to the container, the root shows Infrastructure as the first folder.
So naturally I browsed into Infrastructure, and the Email folder is missing. I have asked a separate SO question here about why my folders aren't copying.
OK, so my Email files and folders under Infrastructure are missing. To test this out, I manually created the directory structure and created the .cshtml file using this command:
docker exec -i addaeda2130d sh -c "cat > Infrastructure/Email/ConfirmUser/ConfirmUser.cshtml" < ConfirmUser.cshtml
I changed the file permissions to 777 with chmod just to make sure the application has access, and then added this debugging code.
_logger.LogInformation("ViewRender: " + filename);
try
{
_logger.LogInformation("Before FileOpen");
var fileExists = File.Exists(filename);
_logger.LogInformation("File exists: " + fileExists);
var x = File.OpenRead(filename);
_logger.LogInformation("After FileOpen: {FileName}", x.Name);
As you can see from the logs, it reports that the file does NOT exist, even though I just created it.
[12:40:09 INF] ViewRender: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] Before FileOpen
[12:40:09 INF] File exists: False
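To double-check from outside the application, I can list the path with forward slashes from the host (a sketch using the same container id as above; /app is my assumption about the working directory):
docker exec addaeda2130d ls -l /app/Infrastructure/Email/ConfirmUser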
Well, the only logical conclusion is that I don't know or understand what is going on, which is why I am reaching out for help.
I have also noticed that if I stop the container (not recreate, just stop) and then start it again, all the directories and files I created are gone.
So... are these directories/files in memory and not on "disk", and do I need to commit the changes somehow?
That would seem to make sense: the application code is looking for the files on disk, and if they were in memory they wouldn't be found. But in my Googling, Pluralsight courses, etc., I can't find any mention of this.
Where can I start looking in order to figure this out?
A forward slash '/' in a path is different from a backslash '\'. Just change the direction of your slashes and it will work.
I tried this program in my Docker container and it worked fine.
using System;
using System.IO;
// backslashes don't work inside the Linux container
// string path = Path.Combine(Environment.CurrentDirectory, @"files\hello\hello.txt");
string path = Path.Combine(Environment.CurrentDirectory, @"files/hello/hello.txt");
Console.WriteLine($"Path: {path}");
string text = System.IO.File.ReadAllText(path);
Console.WriteLine(text);
I am facing a strange problem.
I tried running the cp command in a script (Debian Lenny, loaded with KDE). The verbose output reports that the file has been copied, but the file doesn't actually appear until I manually refresh the destination folder.
Here is the copy command that I am trying to execute:
cp -v /data/publish/${VAR1}.txt ${VAR2}
FYI: VAR2 is a variable holding the path to the destination folder, which in this case happens to be a USB drive.
The reason I call it a strange problem is that I have no issues executing this snippet on Ubuntu, but I do face the problem on another box running Debian (loaded with KDE 3.5, which I understand is pretty outdated).
Please help guys!
I suppose the title is not entirely accurate. I am able to log in to an instance, but I do not see my application in any of the directories. My sense is that I am ssh'ing into the wrong directory.
Does anyone have any experience with this? Thanks.
When you ssh into your AWS instance, you're dropped into your home directory, just as if you're running your own server. From the terminal, try typing:
pwd
You'll likely see something like:
/home/ec2-user
Note that this is your home directory and not your application directory. In other words, your document root (where the web application starts, e.g. with /index.php) is likely something like /var/www/html, so try typing:
cd /var/www/html
ls
The "ls" command shows you the contents of this directory. Here is where you'll build your web application.
I am desperately trying to deploy my Symfony app with rsync.
I installed cwRsync and it somewhat works; at least SSH does. My app is located in E:\xampp\htdocs\MyProject.
Rsync actually does create one directory on my server, but other than that I only get permission errors.
Now, this seems to be a common problem; however, I have not been able to implement any of the solutions, such as this one:
cwRsync ignores "nontsec" on Windows 7
I installed cwRsync to the following directory: c:\cwrsync
My question: what does my fstab file need to look like, and where do I even put it? Are there any other solutions to this problem?
Thanks in advance!
I posted the question you referred to. Here's what I ended up doing to get symfony project:deploy to work from Windows 7 (it required hacking Symfony a bit, so it may not be the optimal solution). With this solution you don't need a full-blown Cygwin install; you just need cwRsync.
In your fstab, add this line (fstab should be located under [cwrsync install dir]\etc):
C:/wamp/www /www ntfs binary,noacl 0 0
This essentially maps "C:\wamp\www" on your Windows filesystem to "/www" for Cygwin.
Modify symfony/lib/task/sfProjectDeployTask.class.php:
protected function execute($arguments = array(), $options = array())
{
...
$dryRun = $options['go'] ? '' : '--dry-run';
// -- start hack --
if(isset($properties['src']))
$src = $properties['src'];
else
$src = './';
$command = "rsync $dryRun $parameters -e $ssh $src $user$host:$dir";
// -- end hack --
$this->getFilesystem()->execute($command, $options['trace'] ? array($this, 'logOutput') : null, array($this, 'logErrors'));
$this->clearBuffers();
}
This allows you to specify an additional src field in properties.ini:
src=/www/myProject
Doing this makes the filesystem mapping between Windows and Cygwin much more clearly defined. Cygwin (and cwRsync) understand Unix paths much better than Windows paths (i.e. /www vs. C:/wamp/www), so doing this makes everything just work.
Run a Script
I think rsync always breaks your file permissions when syncing between Windows and Linux.
You can quite easily create a script that goes through your files after a sync and resets the file permissions using chmod, though.
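A minimal sketch of such a script, assuming the project lives under /var/www/myProject and that plain 755/644 permissions are what you want; adjust the path and modes for your setup:
#!/bin/sh
# reset permissions after an rsync from Windows:
# directories need the execute bit to be traversable, regular files do not
find /var/www/myProject -type d -exec chmod 755 {} +
find /var/www/myProject -type f -exec chmod 644 {} +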