In a lot of online tutorials I can see that one can navigate through resources in Azure as if they were a directory tree.
When I use Cloud Shell this feature is not available to me.
I only have the following directory structure:
Directory: /home/MY NAME
Mode LastWriteTime Length Name
---- ------------- ------ ----
l---- 6/19/2020 6:08 AM clouddrive -> /usr/csuser/clouddrive
Does anybody know what I am missing?
regards
Stefan
It's expected behavior.
Azure Cloud Shell runs on a temporary host, with one machine assigned per user account. Cloud Shell persists $HOME as a 5-GB .img file held in your Azure file share, and by default you have full permission to access your /home/<user> directory. Files outside of $HOME, and machine state, are not persisted across sessions.
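As a concrete illustration (the file names are just examples), anything you want to keep between sessions should live under $HOME or the mounted clouddrive share:
cp myscript.sh ~/                    # persists, stored inside the $HOME .img in your file share
cp myscript.sh ~/clouddrive/         # persists, and is visible as a plain file in the Azure file share
sudo cp myscript.sh /usr/local/bin/  # does NOT persist; outside $HOME, lost when the session is recycled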
Read Overview of Azure Cloud Shell for more details.
Using Service Account credentials, I am successful at running Cloud Build to spin up gsutil, move files from gs into the instance, then copy them back out. All is good.
One of the Cloud Build steps successfully loads a docker image from outside source, it loads fine and reports its own help info successfully. But when run, it fails with the error message:
"fail to open file "..intermediary_work_product_file." permission denied.
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir = "/workspace" to confirm the default.
So how do I grant read/write permissions to the app running inside a Cloud Build step so it can write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including the Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assume that the credentials used to create the instance are passed to the runtime. I have dug deep into the GCP Cloud Build documentation and examples, but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER as suggested by #PRAJINPRAKASH in this helpful answer https://stackoverflow.com/a/62218160/4882696
I tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps could create containers but failed when doing a local write, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
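For anyone hitting the same wall, the change can be as small as the illustrative Dockerfile fragment below (the base image name is a placeholder, not the actual tool). Cloud Build mounts /workspace into every step, and if the vendor image switches to a non-root USER, that user may not be able to write to files created there by earlier steps.
FROM vendor/tool:latest   # placeholder for the third-party image used in the build step
# The upstream image ends with something like "USER tooluser"; overriding it lets the
# step write its intermediary work product under /workspace.
USER root
Alternatively, keep the vendor's user and loosen permissions on the workspace in an earlier build step, as the next answer shows.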
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I tried to recursively chown the /usr directory to nobody:nogroup, and it worked. I have no idea whether there is a better solution to this, but the important thing is that it fixed my issue.
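If it helps, the workaround above could be written as a single Cloud Build step, roughly like this (the node image, package names, and build command are assumptions based on the description; changes outside /workspace don't survive between steps, so the chown and the install have to happen in the same step):
- id: build-react-app
  name: "node"
  entrypoint: "bash"
  args: ["-c", "chown -R nobody:nogroup /usr && npm install -g node-sass && npm run build"]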
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]
  dir: "path/to/some/dir"
I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker script on Linux and the GCP bucket is created. I now need to run the docker run command to copy files. My question is how I specify the source and target locations in the docker command. For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and target bucket+prefix. When you do this, the service will contact the agents and use them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
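As a rough sketch, starting an agent looks something like the command below. The image name and mount path are assumptions based on the docs of the time; only the --hostname and --agent-id-prefix flags are copied from the question, and other required flags (such as the project ID) are omitted here, so check the setup guide above for the exact invocation.
# give the agent access to the local data; the actual source and target are chosen later in the Cloud Console
sudo docker run -d --rm \
    -v /data/to/upload:/data/to/upload \
    gcr.io/cloud-ingest/tsop-agent \
    --hostname=$(hostname) \
    --agent-id-prefix=ID123456789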
I am able to deploy the Liberty Docker image in a local Docker container and can access the Liberty server.
I pushed the Liberty image to Minishift installed on my system, but when I go to create the Docker container, I am facing the following error.
Has anyone tried this before? Please share your views.
Log Trace:
unable to write 'random state'
mkdir: cannot create directory '/config/configDropins': Permission denied
/opt/ibm/docker/docker-server: line 32:
/config/configDropins/defaults/keystore.xml: No such file or directory
JVMSHRC155E Error copying username into cache name
JVMSHRC686I Failed to startup shared class cache. Continue without
using it as -Xshareclasses:nonfatal is specified
CWWKE0005E: The runtime environment could not be launched.
CWWKE0044E: There is no write permission for server directory
/opt/ibm/wlp/output/defaultServer
By default OpenShift will run images as an assigned user ID unique to a project. Many available images have been written so that they can only be run as root, even though they have no real requirement to run as root.
If you try to run such an image, then because its directories and files have been set up so that they are only writable by the root user, running the image as a non-root user ID will cause it to fail.
Best practice is to write images so that they can be run as an arbitrary user ID. Unfortunately very few people do this, with the result that their images cannot be used in more secure multi-tenant environments for deploying applications in containers.
OpenShift documentation provides guidelines on how to implement images so that they can run in such more secure environments. See the section 'Support Arbitrary User IDs' in:
https://docs.openshift.org/latest/creating_images/guidelines.html
If the image is built by a third party and they show no interest in making the changes to their image so that it works in secure multi-tenant environments, you have a few options.
The first is to create a derived image which, in the steps to build it, goes back and fixes permissions on the directories and files so they can be used. Note that in doing this you have to be careful what you change permissions on, as changing permissions on a file in a derived image causes a complete copy of the file to be made. If files are large, this will start to blow out your image size.
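As a sketch of that first option for the Liberty image above (the FROM line and numeric UID are placeholders; the directories come from the error log), a derived image can follow the 'Support Arbitrary User IDs' guideline by making the directories the server writes to owned by group 0 and group-writable, since OpenShift runs the container with GID 0 regardless of the assigned UID:
FROM <the-third-party-liberty-image>
USER root
# fix ownership/permissions only on the directories the server needs to write to,
# to avoid copying large files into the new image layer
RUN chgrp -R 0 /config /opt/ibm/wlp/output && \
    chmod -R g=u /config /opt/ibm/wlp/output
USER 1001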
The second is if you are admin on the OpenShift cluster, you can relax security on the cluster for the service account the image is run as so that it is allowed to run the container as root. You should avoid doing this if possible, especially with third party images which you do not trust. For details on how to do this see:
https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
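For example (the project and service account names are placeholders), a cluster admin can grant the anyuid SCC to the service account the pod runs as:
oc adm policy add-scc-to-user anyuid -z default -n myproject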
A final way, which may be usable with some images if the total size of what needs its permissions fixed is small, is to use an init container to make a copy of the directories that need write access into an emptyDir volume. Then, in the main container, mount that emptyDir volume on top of the directory that was copied. This avoids needing to modify the image or enable anyuid. The amount of space available in emptyDir volumes may not be enough if you have to copy application binaries as well. This is probably only going to work where the application wants to update config files or create lock files. You wouldn't be able to use this if the same directory is used for large amounts of transient file system data such as a cache, database, or logs.
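A rough sketch of that last approach for the /config directory from the error above (the image name is a placeholder and this is only a pod spec fragment, not a complete manifest):
volumes:
  - name: config-copy
    emptyDir: {}
initContainers:
  - name: copy-config
    image: <the-liberty-image>
    command: ["sh", "-c", "cp -a /config/. /config-copy/"]
    volumeMounts:
      - name: config-copy
        mountPath: /config-copy
containers:
  - name: liberty
    image: <the-liberty-image>
    volumeMounts:
      - name: config-copy
        mountPath: /config      # the writable copy now hides the image's root-owned /config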
Given I have 2 containers, Weblogic and Tomcat.
Weblogic runs under oracle user, Tomcat runs under root user.
I use the same volume mapping for both services, so that application deployed in Tomcat orchestrates business process in which application deployed in Weblogic saves files to that shared folder.
I came across the issue with permissions because Tomcat runs under root (creates directory structure with root owner and group) and Weblogic running under oracle can't save files.
What is the best way to handle shared host data folder between two containers and avoid problems with permissions?
The Unix/Linux solutions to this are to use one of the following:
1. The same UID and open permissions on the user
2. The same GID and open permissions on the group
3. None of the above, with open permissions for everyone
These options all apply identically for apps running inside of containers.
The third option is least ideal since it allows anyone on the host to modify these files. However, implementing it is a quick chmod -R 777 dir and updating the umask to be 000 for any apps that create files in that directory.
That leaves option 1 or 2. Option 1 means either dropping root for Tomcat, or running Weblogic as root, the former being preferred but may not be possible depending on the app.
If option 1 isn't possible, try using a common group between the two apps. Add the users to the same GID in both images, and in your shared directory change the group to that common GID and set the setgid bit in the permissions so that every file created in that directory is also created with that group.
chgrp $gid dir
chmod g+s dir
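A rough sketch of the shared-group approach, with an arbitrary example GID of 1500 and illustrative paths and user names:
# in each image's Dockerfile (illustrative):
#   RUN groupadd -g 1500 appshare && usermod -aG appshare oracle   # Weblogic image
#   RUN groupadd -g 1500 appshare && usermod -aG appshare tomcat   # Tomcat image, if it drops root
# on the host, for the mapped directory:
chgrp 1500 /srv/shared-data
chmod 2775 /srv/shared-data   # the leading 2 is the setgid bit, so new files inherit the appshare group
You may also need to set the umask to 002 in each app so the files it creates are group-writable.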
I am installing graphite via a docker container.
I have seen that whisper files should not be saved in the container.
So I will be using a data volume from docker to save these on the host machine.
My question is: is there anything else I should be saving on the host? (I know this is subjective, so I guess I'm looking for recommendations on what's important.)
I don't believe I need the configuration (e.g. carbon.conf) as this will come from my installation.
So I'm wondering: are there any other files from Graphite I need (e.g. log files etc.)?
What is your reason for keeping log files? You do need the directory structure in place, though. Logging defaults to /opt/graphite/storage/logs; in there you have carbon-cache/ and webapp/ directories. The log directory for the webapp is set in the config file local_settings.py, whereas carbon uses carbon.conf. The configs are well documented, so you can look into them for specific information.
Apart from the configs that are generated during installation, the only other 'file' crucial for the webapp to work is graphite.db in /opt/graphite/storage. It is used internally by the Django webapp for housekeeping information such as user auth etc. It gets generated by python manage.py syncdb, so I believe you can generate it again on the target system.
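Putting that together, a minimal set of mounts for the container might look like this (the image name and host paths are illustrative; adjust them to your image's layout, and note that graphite.db must already exist on the host, e.g. generated with manage.py syncdb, or Docker will create a directory with that name instead):
docker run -d \
    -v /srv/graphite/whisper:/opt/graphite/storage/whisper \
    -v /srv/graphite/graphite.db:/opt/graphite/storage/graphite.db \
    -v /srv/graphite/logs:/opt/graphite/storage/logs \
    graphiteapp/graphite-statsd
# the logs mount is optional; drop it if you don't care about keeping logs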