Capistrano configuration leading to mkdir permission denied - ruby-on-rails

Upon executing a deploy to a server for a specific application, the process is interrupted at this stage:
DEBUG [88db4789] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.4" ; /usr/bin/env mkdir -p /var/www/v4/shared /var/www/v4/releases )
DEBUG [88db4789] mkdir: cannot create directory ‘/var/www’: Permission denied
Note: this occurs only for this particular application. Another application that deploys to the same server proceeds past this stage.
I have attempted to change ownership as suggested here, but that fails with
chown: cannot access ‘/var/www/’: No such file or directory
so I am led to believe a configuration issue is the culprit. Aside from the server definition
server 'xx.xxx.xxx.xxx', user: 'deploy', roles: %w{db web app}
where have I missed something?

Your server instance does not have the folder /var/www, so you can create it manually: ssh to that server as the user deploy and try to make the folder yourself.
I think it will fail again because your deploy user does not have rights to the /var folder. In that case, change the ownership following the guide you linked.
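The manual fix above can be sketched as a couple of commands. The path /var/www and the user deploy come from the question; the sudo prefix is an assumption about how the server is administered.

```shell
#!/bin/sh
# Create the deploy root and hand it to the deploy user so Capistrano's
# later `mkdir -p` calls succeed.
setup_deploy_root() {
    root="$1"    # e.g. /var/www
    user="$2"    # e.g. deploy
    mkdir -p "$root"
    chown "$user": "$root"
}

# On the server, run as an admin:
# sudo sh -c 'mkdir -p /var/www && chown deploy: /var/www'
```

After this, Capistrano can create /var/www/v4/shared and /var/www/v4/releases itself, since the deploy user owns the parent directory.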

While yeuem1vannam's answer is valid, this use case actually had a different problem: the path specified in the deploy.rb file had an error in the user name, hence the permission error when creating the folder on deploy.

Related

VSCode docker dev container can't access ~/.ssh

I need access to /home/vscode/.ssh/ so the tools I am using can use my SSH key; however, I can't seem to change the permissions.
I build the container like so:
# other steps
COPY id_rsa /home/vscode/.ssh/id_rsa
RUN chmod 600 /home/vscode/.ssh/id_rsa \
&& touch /home/vscode/.ssh/known_hosts \
&& ssh-keyscan bitbucket.org >> /home/vscode/.ssh/known_hosts
# other steps
This grabs my id_rsa from my local directory and adds it into the docker container so I can keep using my existing SSH key.
I then try to use a tool like Terraform to execute a command that clones some code from a repository that is set up with my SSH key.
$ terraform plan
ERRO[0001] 1 error occurred:
* error downloading 'ssh://git@bitbucket.org/example/example.git?ref=master': /usr/bin/git exited with 128: Cloning into '/workspaces/example'...
Failed to add the RSA host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts (/home/vscode/.ssh/known_hosts).
Load key "/home/vscode/.ssh/id_rsa": Permission denied
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
How can I correctly set up access in my Dockerfile? I can't even run ssh-keygen!
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vscode/.ssh/id_rsa):
/home/vscode/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Saving key "/home/vscode/.ssh/id_rsa" failed: Permission denied
The first error is occurring because the command is executed from a directory (/workspaces) that is not writable by non-admin users. For some reason, VS Code creates this directory automatically and uses it as the start-up directory when you open a new terminal session. The pwd command can be used to find out which directory the session is currently in.
If the permissions on the user's home directory are correctly set up, the command should succeed by first changing the directory:
$ cd $HOME
$ terraform plan
In the case of the second command (ssh-keygen), the path is already pointing to the user's home directory, so there might be another issue. The command might try to save temporary files to the current directory, or the permissions in the home directory could be wrong.
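One way to rule out a plain permissions problem is to normalise the modes OpenSSH expects. This is a sketch assuming the usual ~/.ssh layout; run it inside the container, with sudo first if the files turn out to be root-owned.

```shell
#!/bin/sh
# Apply the permissions OpenSSH requires. The directory is a parameter so
# you can point it at /home/vscode/.ssh (or any other home).
fix_ssh_perms() {
    dir="$1"
    chmod 700 "$dir"
    if [ -f "$dir/id_rsa" ]; then chmod 600 "$dir/id_rsa"; fi
    if [ -f "$dir/known_hosts" ]; then chmod 644 "$dir/known_hosts"; fi
}

# fix_ssh_perms /home/vscode/.ssh
```

If chmod itself fails with "Operation not permitted", the files belong to another user (likely root), which points at the build-time ownership issue.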
As @rasmus91 pointed out, it is usually not a good idea to use root privileges, even if only for development. These kinds of practices can creep into production just "because it works".
EDIT: To take a guess, the Dockerfile is probably built with root privileges and the user vscode is changed only after the Dockerfile has been processed. Hence the file /home/vscode/.ssh/id_rsa might end up having the admin user as the owner.
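If that is the cause, one fix that keeps the non-root vscode user is to set the owner at build time. This is a sketch based on the Dockerfile from the question (COPY --chown requires a reasonably recent Docker):

```dockerfile
# other steps
COPY --chown=vscode:vscode id_rsa /home/vscode/.ssh/id_rsa
RUN chmod 600 /home/vscode/.ssh/id_rsa \
 && touch /home/vscode/.ssh/known_hosts \
 && ssh-keyscan bitbucket.org >> /home/vscode/.ssh/known_hosts \
 && chown -R vscode:vscode /home/vscode/.ssh
# other steps
```

The final chown covers known_hosts as well, which RUN would otherwise create as root.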
After several attempts at overriding user permissions in the Dockerfile I came to find out the user is set from the devcontainer.json file.
Simply go to the bottom where it sets "remoteUser": "vscode" and comment it out.
This will now set the user as root. For local development this is perfect and should cause no issues. I can now utilise the SSH key as well as anything else previously locked off.
// .devcontainer/devcontainer.json
{
  "name": "Ubuntu",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Ubuntu version: focal, bionic
    "args": { "VARIANT": "focal" }
  },
  "settings": {},
  "extensions": [],
  // "remoteUser": "vscode"
}

Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied. Amazon AWS Beanstalk

I'm trying to set up an Umbraco website on Amazon AWS Beanstalk using the AWS Toolkit for Visual Studio 2017. I have added the .ebextensions folder, and inside it my config file:
{
"containercommands": {
"01-changeperm": {
"command": "icacls \"C:/inetpub/wwwroot/App_Data\" /grant IIS_IUSRS:(OI)(CI)"
}
}
}
I have also tried DefaultAppPool instead of IIS_IUSRS as per this post How can I set folder permissions for elastic beanstalk windows application? and I have also tried
commands:
create_default_website_folder:
command: if not exist "C:\inetpub\wwwroot" mkdir "C:\inetpub\wwwroot"
update_iis_user_permissions:
command: Icacls.exe "C:\inetpub\wwwroot" /grant IIS_IUSRS:(OI)(CI)F
from this post https://aws.amazon.com/blogs/devops/run-umbraco-cms-with-flexible-load-balancing-on-aws/ along with many other posts, but none work. Does anyone know what else I need to do? I'm constantly getting the following error:
Access to the path 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied.
You can visit this page to see what Umbraco needs: https://our.umbraco.com/documentation/Getting-Started/Setup/Server-Setup/permissions
Essentially, all of these need modify permissions on all of the folders in your Umbraco installation:
IUSR
IIS_IUSRS
IIS apppool\[appoolname]
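A minimal .ebextensions sketch granting all three identities modify (M) rights. The file name is hypothetical, and the app pool is assumed to be DefaultAppPool, the Beanstalk default:

```yaml
# .ebextensions/01-permissions.config (file name assumed)
container_commands:
  01_grant_umbraco_permissions:
    command: >
      icacls "C:\inetpub\wwwroot" /grant
      "IUSR:(OI)(CI)M"
      "IIS_IUSRS:(OI)(CI)M"
      "IIS apppool\DefaultAppPool:(OI)(CI)M"
```

The (OI)(CI) inheritance flags make the grant apply to all files and subfolders, which covers App_Data\TEMP\PluginCache.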
Wow! Your post really helped me (along with https://thedeveloperspace.com/granting-write-access-to-asp-net-apps-hosted-on-aws-beanstalk/ and ASP.Net Core at AWS EBS - write permissions and .ebextensions).
In my scenario my local folder for temp files is at
C:\inetpub\wwwroot\Temp
So I re-edited your command to
commands:
create_default_website_folder:
command: if not exist "C:\inetpub\wwwroot\Temp" mkdir "C:\inetpub\wwwroot\Temp"
container_commands:
01storage_permissions:
command: "icacls C:\\inetpub\\wwwroot\\Temp /grant DefaultAppPool:(OI)(CI)F"
Then I have permission to use my target folder, thanks to the mkdir command.
Because your error says access to 'C:\inetpub\wwwroot\App_Data\TEMP\PluginCache' is denied, maybe your config must target that specific folder, like this?
commands:
create_default_website_folder:
command: if not exist "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache" mkdir "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache"
update_iis_user_permissions:
command: Icacls.exe "C:\inetpub\wwwroot\App_Data\TEMP\PluginCache" /grant IIS_IUSRS:(OI)(CI)F

Rails/Rubber can't create staging

Trying to get started with a Rails Amazon EC2 deployment using https://github.com/rubber/rubber, and I keep ending up here after attempting to create a staging server with cap rubber:create_staging:
** [out :: production.foo.com] curl: (7) couldn't connect to host command finished in 2022ms
failed: "/bin/bash -l -c 'sudo -p '\\''sudo password: '\\'' bash -l /tmp/create_inputs'" on production.foo.com
I've been sticking with Rubber's quickstart guide, but can't solve this. I'm using rvm, if that makes a difference to anyone.
Any ideas?
It looks like you are trying to connect to production.foo.com. Change your configuration to connect to the right remote server, or if you are running locally on the EC2 instance you can make it localhost.
Make sure you set up your public SSH key in ~/.ssh/authorized_keys for the user you are trying to deploy as. This allows Capistrano/Rubber to do passwordless SSH authentication.
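A sketch of that setup, with the home directory and key passed as parameters (paths are illustrative; on a real server you could also just use ssh-copy-id deploy@host):

```shell
#!/bin/sh
# Append a public key to authorized_keys with the permissions sshd expects.
install_authorized_key() {
    home_dir="$1"   # e.g. /home/deploy
    pubkey="$2"     # one line from your local ~/.ssh/id_rsa.pub
    mkdir -p "$home_dir/.ssh"
    printf '%s\n' "$pubkey" >> "$home_dir/.ssh/authorized_keys"
    chmod 700 "$home_dir/.ssh"
    chmod 600 "$home_dir/.ssh/authorized_keys"
}

# install_authorized_key /home/deploy "$(cat ~/.ssh/id_rsa.pub)"
```

The chmod calls matter: sshd silently refuses keys whose files are group- or world-writable.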
Reviving an old post here but there's another possible cause of this issue that I've just encountered so figured this might help someone else.
I had to re-create and download a new keypair for my ec2 instances. When I moved it to ~/.ec2/gsg-keypair I forgot to alter the permissions.
When SSH'ing directly into the instance you get the full warning, which makes debugging it easy:
UNPROTECTED PRIVATE KEY FILE! Permissions 0644 for 'xxxxx.pem' are too open. It is recommended that your private key files are NOT accessible by others. This private key will be ignored. bad permissions: ignore key: xxxxx.pem Permission denied (publickey).
But when running a Rubber task you simply get a generic curl error. If this is the case for you too, just update the permissions like this:
chmod 600 ~/.ec2/gsg-keypair

Cannot create directory permission denied

I am deploying my rails application to a vps. On cap deploy:setup I get the error that mkdir: cannot create directory `/apps'.
I am using set :user_sudo, false in my deploy.rb file.
I am a Linux newbie. How can I give the current user permission to create directories?
You will not have (and should not be handed) permission to create directories in the root ('/') directory.
Change the mkdir command to create the directory in a directory where the process has the appropriate permissions.
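For example, a deploy.rb fragment pointing Capistrano at a directory the deploy user already owns. The user name and path here are illustrative, not from the question; note also that the Capistrano 2 option is :use_sudo, not :user_sudo.

```ruby
# config/deploy.rb (sketch)
set :user, "deployer"                          # illustrative user
set :deploy_to, "/home/deployer/apps/myapp"    # writable by that user
set :use_sudo, false
```

With deploy_to under the user's home directory, cap deploy:setup can create the releases/ and shared/ directories without any elevated privileges.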

Capistrano: Problem with permissions on deploy

I have a problem deploying a Rails app to my server. Performing a
cap deploy
I get lots of errors, stating that chmod is not able to change permissions of (and only of) git object files:
...
** [out :: ██████████████] chmod: changing permissions of `/srv/www/kunsthof/releases/20101113162736/.git/objects/04/779c6d894bbea4c26d6e035f71cd1ab124cc90': Operation not permitted
...
failed: "sh -c 'chmod -R g+w /srv/www/kunsthof/releases/20101113162736'" on ██████████████
The files are put there on the deploy itself, so it should be possible for the deploy user to change their permissions. Any suggestions on what could be the problem here?
Usually on deploy, if you are using a cached copy, your repo is cloned to a shared directory and then rsynced/copied to the current release directory. While copying, you should exclude the .git directory and other unnecessary directories like spec/test (which are not going to be used in production) with the following variable:
set :copy_exclude, [".git", "spec"]
With this, you will not copy the .git directory and should not face the permission problem when doing chmod thereafter.
