Bitnami Permissions php.ini

I am trying to modify a php.ini file in a Bitnami Moodle deployment as user 'bitnami', but for some reason my permissions are denied over SFTP.
I have never used the console before, so can someone please provide step-by-step instructions to modify these files or their permissions?

When you launch a Bitnami Moodle cloud image, the default permissions for the file /opt/bitnami/php/etc/php.ini are 0644 with user 'bitnami' and group 'root'. Therefore, if you connect to your server via SSH as the user 'bitnami', you are able to edit the file.
On the other hand, the default permissions for the folder /opt/bitnami/php/etc/ are 0755 with user 'root' and group 'root', so creating a new file in that folder requires root permissions. That is why you can't upload a new file to that folder via SFTP as user 'bitnami'.
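In practice that means you can edit the existing file directly over SSH, while creating new files there needs sudo. A minimal sketch, assuming the stock Bitnami layout (the key path and server address are placeholders):
ssh -i ~/.ssh/your-key.pem bitnami@SERVER-IP
nano /opt/bitnami/php/etc/php.ini        # the file is owned by 'bitnami', so it is editable in place
sudo /opt/bitnami/ctlscript.sh restart   # restart the stack so the change takes effect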
More information at:
https://docs.bitnami.com/general/faq/#how-to-connect-to-the-server-through-ssh
https://docs.bitnami.com/general/faq/#how-to-upload-files-to-the-server-with-sftp


Connecting to different ARN/Role/Amazon Account when trying to deploy

I previously had Serverless installed on a server, and when I tried to edit the function and package it back up to edit the zip file, I broke it, so I have to start all over. To begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I run sudo npm run deploy, I get the ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to use a role and not an IAM user. So I check the role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region for my Account A. Not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets of the account with that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it has just one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file so now both files are the same. Works great.
I then go into my SSH session where I've installed Serverless and run npm run deploy, and it gives me the same message above. I think maybe it is somehow not using the correct account for whatever reason. So I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there is already a profile in the AWS creds file, so I add --o to the end to overwrite. I run sudo npm run deploy and get the same error.
I then run this command to manually set a profile in the creds file for Serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of my IAM user I've been trying to use to deploy. I run this, it tells me there already is an existing profile in the aws creds file so I run it with --o and it tells me the aws file is now updated. In bash I go to Vim the file and I only see the single "[default]" settings, as if nothing has changed. I run sudo npm run deploy and it gives me the same Error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same error.
I even removed the AWS CLI and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config, it tells me there is already a profile set up in my AWS file, prompting me to use the overwrite flag - how is this possible when the file is literally not on my computer?
So I then think that Serverless itself has a cache or something, calling the wrong file or whatever for creds, so I uninstall Serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps and more all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file, so I'm not sure if that is causing any problems. But then again I have no deep knowledge of this subject, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile: XXX entry set in the serverless.yml file, because I've read that if you don't, it just defaults to the [default] profile you have set in the AWS creds file on your computer. Just to check, I go into the serverless.yml file and set profile: default, and the error I now get when I run npm run deploy is
Profile default does not exist
How is that possible when I have the "default" profile set in my creds file? Then I remember that I previously ran the serverless config credentials command and added the profile name serverless-agent to it (yet it didn't save to the AWS creds file, as I mentioned above), so I add that profile name to the serverless.yml file just to see if it works, and get the same error of "Profile default does not exist".
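For reference, this is how the profile in serverless.yml is supposed to line up with the credentials file; a minimal sketch where the profile name and keys are illustrative:
# ~/.aws/credentials
[serverless-agent]
aws_access_key_id = XXX
aws_secret_access_key = YYY
# serverless.yml
provider:
  name: aws
  profile: serverless-agent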
So back to the error message. The role is in an account not even related to the IAM user in my AWS creds. Without knowing a lot about this, it's as if the Serverless config on the server isn't correct or something. Is it using old creds I had set up in Apex.run? Why is the AWS creds file not updated with the profile when I manually set it with the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I deployed correctly and my Lambda and API were set up for me on AWS. Boy do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role, then you have to assume that role explicitly (through an assume-role call, e.g. from PowerShell or the AWS CLI) before deploying.
I was facing the same issue earlier, when we moved from a user to a role.
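A minimal sketch of what assuming the role looks like with the AWS CLI; the role ARN and session name are placeholders:
aws sts assume-role --role-arn arn:aws:iam::ACCOUNT_ID:role/DeployRole --role-session-name serverless-deploy
# export the temporary credentials returned by the call above, then deploy
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
npm run deploy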

Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied. Amazon AWS Beanstalk

I'm trying to set up an Umbraco website on AWS Elastic Beanstalk using the AWS Toolkit for Visual Studio 2017. I have added the .ebextensions folder, and inside my config file I have:
{
  "container_commands": {
    "01-changeperm": {
      "command": "icacls \"C:/inetpub/wwwroot/App_Data\" /grant IIS_IUSRS:(OI)(CI)F"
    }
  }
}
I have also tried DefaultAppPool instead of IIS_IUSRS, as per this post: How can I set folder permissions for elastic beanstalk windows application?, and I have also tried
commands:
  create_default_website_folder:
    command: if not exist "C:\inetpub\wwwroot" mkdir "C:\inetpub\wwwroot"
  update_iis_user_permissions:
    command: Icacls.exe "C:\inetpub\wwwroot" /grant IIS_IUSRS:(OI)(CI)F
from this post: https://aws.amazon.com/blogs/devops/run-umbraco-cms-with-flexible-load-balancing-on-aws/, along with many other posts, but none work. Does anyone know what else I need to do? I'm constantly getting the following error:
Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied.
You can visit this page to see what Umbraco needs: https://our.umbraco.com/documentation/Getting-Started/Setup/Server-Setup/permissions
Essentially, all of these identities need Modify permissions on all of the folders in your Umbraco installation:
IUSR
IIS_IUSRS
IIS AppPool\[app pool name]
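Translated into an .ebextensions container command, granting those identities Modify rights could look like this sketch; the path and app pool name are assumptions you will need to adapt:
container_commands:
  01_umbraco_permissions:
    # grant Modify, inherited by files (OI) and subfolders (CI), to each identity Umbraco needs
    command: icacls "C:\inetpub\wwwroot" /grant "IUSR:(OI)(CI)M" "IIS_IUSRS:(OI)(CI)M" "IIS AppPool\DefaultAppPool:(OI)(CI)M"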
Wow! Your post really helped me (along with https://thedeveloperspace.com/granting-write-access-to-asp-net-apps-hosted-on-aws-beanstalk/ and ASP.Net Core at AWS EBS - write permissions and .ebextensions).
In my scenario, my local folder for temp files is at
C:\inetpub\wwwroot\Temp
so I re-edited your commands to:
commands:
  create_default_website_folder:
    command: if not exist "C:\inetpub\wwwroot\Temp" mkdir "C:\inetpub\wwwroot\Temp"
container_commands:
  01storage_permissions:
    command: "icacls C:\\inetpub\\wwwroot\\Temp /grant DefaultAppPool:(OI)(CI)F"
Then I have permission to use my target folder, thanks to the mkdir command.
Because your error is "Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied.", maybe your config needs to target that specific folder, like this:
commands:
  create_default_website_folder:
    command: if not exist "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache" mkdir "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache"
  update_iis_user_permissions:
    command: Icacls.exe "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache" /grant IIS_IUSRS:(OI)(CI)F

Security of a Docker image

I am considering packaging a Rust application into a Docker container.
The current version of that application contains various credential files used to authenticate to the Discord API or the Google API through a service account key.
Would these files be accessible if I package my application like this?
[EDIT: added Dockerfile]
FROM rust:1.28.0
WORKDIR /usr/src/<application>
# copies the entire build context into the image, including any credential files
COPY . .
RUN cargo install --force --path .
CMD ["<application>"]
Never put actual credentials into anything that might be accessed by anyone but you.
You basically have two options:
1) Have your application pull the required credentials from its environment, then set those variables when you start the container (see docs).
2) Have your application read the credentials from a config file that doesn't get pulled into the Docker image. Then, when running the container, mount that file into it (see docs).
You could actually do both: have an environment variable that tells your application whether it should look for a config file (maybe in production), and if that variable is unset, check the environment (for development). Both approaches are sketched below.
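A minimal sketch of both at container start; the variable name, paths, and image name are placeholders:
# option 1: pass the credential through the environment
# option 2: mount the credentials file read-only instead of baking it into the image
docker run \
  -e DISCORD_TOKEN="$DISCORD_TOKEN" \
  -v /secure/host/path/google-key.json:/run/secrets/google-key.json:ro \
  <application-image>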
Edit: It's best practice to create a .dockerignore file in your build context, containing the name (or path) of the file holding the credentials.
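For example, a .dockerignore along these lines (file names are illustrative) keeps the credentials out of the build context entirely:
# .dockerignore
credentials/
service-account-key.json
.env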

Capistrano configuration leading to mkdir permission denied

Upon executing a deploy to a server for a specific application, the process interrupts at this stage:
DEBUG [88db4789] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.4" ; /usr/bin/env mkdir -p /var/www/v4/shared /var/www/v4/releases )
DEBUG [88db4789] mkdir: cannot create directory ‘/var/www’: Permission denied
Note: this occurs only for this particular application. Another application that deploys to the same server gets past this stage.
I have attempted to change ownership as suggested here, but that fails:
chown: cannot access ‘/var/www/’: No such file or directory
so I am led to believe a configuration issue is the culprit. Aside from the environment data
server 'xx.xxx.xxx.xxx', user: 'deploy', roles: %w{db web app}
where have I missed something?
Your server instance does not have the folder /var/www, so you can create it manually: SSH to that server as user deploy and try to make the folder yourself.
I think it will fail again because your deploy user does not have rights to the /var folder. Try to change the ownership following the guide you found.
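A minimal sketch of that, run once on the server (it assumes the 'deploy' user from the server definition above):
# create the deploy root as root, then hand ownership to the deploy user
sudo mkdir -p /var/www/v4
sudo chown -R deploy:deploy /var/www/v4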
While yeuem1vannam's answer is valid, this use case actually had a different problem, in the deploy.rb file: the path specified there had an error in the user name, hence the permission error when creating the folder upon deploy.

Google Cloud bitnami wordpress installation permissions

I have set up an installation of Bitnami WordPress Multisite on Google Cloud. I have also set up SSH and am able to connect, but I want to go into the WordPress installation and edit files, upload plugins, and edit permissions. Any idea how I can do that? I followed Bitnami's guide but it still does not allow me.
The cloud image of Bitnami WordPress Multisite is already configured with the right permissions to let you install or upload plugins, edit any file, etc. using the WordPress administration panel.
However, if you are using an old version of WordPress you may run into permissions problems. If that is your case, you can try the following workaround:
1) Open /opt/bitnami/apps/wordpress/htdocs/wp-config.php
2) Look for define('FS_METHOD', 'ftpext');
3) Replace it with define('FS_METHOD', 'direct');
4) Change the permissions of /opt/bitnami/apps/wordpress/htdocs so the server user (daemon) can make the modifications. You can do that by executing sudo chown -R daemon /opt/bitnami/apps/wordpress/htdocs
5) Go to the WordPress admin panel and check that you can perform the operation you want.
Hope it helps.
For security reasons, WordPress files are not editable from the WordPress application itself. We would suggest using an FTP client to edit the files remotely.
Another option is to temporarily change the permissions to be able to edit from the WordPress application. Note that this configuration is not secure, so please revert it after editing the files:
sudo chown daemon:daemon /opt/bitnami/apps/wordpress/htdocs/
Afterwards, revert the changes to stay secure.
For the /wp-content folder:
sudo chown -R bitnami:daemon /opt/bitnami/apps/wordpress/htdocs/wp-content
sudo chmod -R g+w /opt/bitnami/apps/wordpress/htdocs/wp-content
https://docs.bitnami.com/google/apps/wordpress/
