My Azure File PVC is stuck in the Pending state because no storage account was automatically created by AKS.
I followed the step-by-step documentation provided here: https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
but without any success. I'm running azure-cli version 2.0.73.
I expect the Azure File PVC to change to status Bound.
For your issue: when you want to use an Azure File share as a persistent volume, you need to follow two steps:
create the storage class
create the persistent volume claim
After these two things are created, you can mount them into your pods. When creating them, you need to pay attention to a few points:
If you want the storage account to be created automatically, just set one or both of the parameters skuName and location, and do not set storageAccount.
If you add the parameter storageAccount, then you must create the storage account yourself, with the exact name you set.
You also need to wait a few minutes, because creating the storage account takes some time. Take a look at the Storage Class for Azure File documentation.
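As a minimal sketch (the names, SKU, and size here are examples, assuming the in-tree azure-file provisioner described in the linked doc), the two objects could look like this:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile                 # example name
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS           # omit storageAccount so AKS creates one for you
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc             # example name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi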
I am trying to pull images from the same Artifactory repo using 2 different access tokens. This is because one image is available to one user, and another one is accessible by another user.
I tried using docker login, but I can login only once to a repo. Is there a way to specify in the docker-compose.yml file a user and token that Compose should use in order to pull the image?
The docker-compose file specification does not support providing credentials per service / image.
But putting this technicality aside, the described use case clearly indicates there is a user who needs access to both images...
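For background, docker login stores credentials per registry host in ~/.docker/config.json, not per image, which is why a second login to the same registry replaces the first. The entry looks roughly like this (the registry host and auth value are placeholders):

{
  "auths": {
    "mycompany.jfrog.io": {
      "auth": "<base64-encoded user:token>"
    }
  }
}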
I use Active Storage on my Rails site with AWS. After upgrading to 6.1, I'd like to configure public access per the guide so my images have permanent URLs.
I've determined that I need to keep the existing service as-is so previously uploaded images continue to work. I've created a new service and configured the app to use it like this.
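(The configuration itself was not included in the post; a typical Rails 6.1 public service entry in config/storage.yml looks roughly like the following, with placeholder bucket, region, and credential names.)

amazon_public:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: my-public-bucket
  public: true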
Previous images continue to work like this, but new image uploads result in Aws::S3::Errors::AccessDenied. Note that the credentials used are exactly the same as in the previous, working, non-public service. The guide mentions that the bucket needs to have the proper permissions, but not what exactly needs to be set.
Looking in AWS, the section "Block public access (bucket settings)" is all set to "Off". In "Access control list (ACL)", "Bucket owner (your AWS account)" has "List, Write" for both "Objects" and "Bucket ACL". No other permissions are listed. I've tried changing "Everyone (public access)" to include "List" for "Objects" and "Read" for "Bucket ACL" - doesn't seem to solve the problem.
How do I get public URLs working with Active Storage?
The permission you need when switching from private access to public is PutObjectAcl. Adding this in the IAM Management Console makes it work.
In addition, rather than creating a new service, you can mark all images in the existing service as public-readable via the UI or via a script.
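A rough sketch of such a script using the aws-sdk-s3 gem (the bucket name and region are placeholders; note the existing service entry in config/storage.yml also needs public: true so Rails generates permanent URLs):

require "aws-sdk-s3"

# Mark every existing object in the bucket as publicly readable.
# Requires the s3:PutObjectAcl permission mentioned above.
s3 = Aws::S3::Resource.new(region: "us-east-1")
s3.bucket("my-existing-bucket").objects.each do |object|
  object.acl.put(acl: "public-read")
end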
The title says it all. I have a VM instance set up in my google cloud for generating some model data. A friend of mine also has a new account. We're both basically using the free credits Google provides. We're trying to figure out if there is a way that I can generate the data in my VM instance and then transfer it to my friend's GCS Bucket. He hasn't set up any buckets yet, so we're also open to suggestions on the type of storage that would help us do this task.
I realize I can set up a persistent disk and mount it to my own VM instance. But that isn't our goal right now. We just need to know if there is a way to transfer data from one Google account to another. Any input is appreciated.
There is a way to do this: have your friend create the bucket and give your email permission to access it. Then, from your VM, you can use the gsutil command to copy the files to the bucket.
1) Have your friend create the bucket in the console.
2) In the permissions section, he adds your email as a member and grants it the Storage Object Creator role.
3) Then you SSH into your VM and use the following gsutil command to copy the files. For example gsutil cp testfile.txt gs://friend_bucket
4) If you get a 403 error, you probably have to run gcloud auth login first.
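Put together, the commands on the VM would look roughly like this (friend_bucket and the local file names are placeholders):

gcloud auth login                                  # only needed if you hit a 403
gsutil cp testfile.txt gs://friend_bucket
gsutil -m cp -r ./model_data gs://friend_bucket    # copy a whole directory in parallel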
I have already created a data bag item, which exists on the Chef server.
Now I am trying to pass that data bag item's secret value to a Docker container.
I am creating the data bag as follows:
knife data bag create bag_secrets bag_masterkey --secret-file C:\path\data_bag_secret
I am retrieving the value of that data bag item in a Chef recipe as follows:
secret = Chef::EncryptedDataBagItem.load_secret(node['secret'])  # node['secret'] holds the path to the shared secret file
masterkey = Chef::EncryptedDataBagItem.load('bag_secrets', 'bag_masterkey', secret)  # same bag/item names as in the knife command
What logic do I need to add to pass the data bag secret on to a Docker container?
I've said this like twice on different questions already: DO NOT USE ENCRYPTED DATA BAGS LIKE THIS, IT IS NOT SAFE.
I think you fundamentally misunderstand the security model of encrypted data bags: they exist only to hold data that the Chef Server itself cannot read. The cost is that you must manage key distribution yourself. For Docker this would probably be via sidecar containers or data volumes, but running chef-client inside a container is relatively rare, so you'll have to sort that out yourself. I would recommend working with a security/infosec engineer at your company to figure out the right security model for your usage.
I'm writing a Windows service which needs to persist some data across reboots/restarts of the service. Currently I'm writing the files in a directory returned by Application.UserAppDataPath, but that doesn't seem to be giving me a consistent answer. How should I determine the right place to write the data?
It depends on whether your service is running under the system account or under a specific user account.
System account. Store the files in the CommonApplicationData folder:
string pathForSystem = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
User account. Store the files in the ApplicationData folder:
string pathForUser = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
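For example, a minimal sketch of persisting a file under CommonApplicationData for a service running as LocalSystem (the company and service folder names are placeholders):

using System;
using System.IO;

// Build an app-specific folder under %ProgramData% and make sure it exists.
string baseDir = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
string dataDir = Path.Combine(baseDir, "MyCompany", "MyService");
Directory.CreateDirectory(dataDir);  // no-op if the directory already exists

// Persist the service state across restarts/reboots.
File.WriteAllText(Path.Combine(dataDir, "state.json"), "{}");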
If you want it to be consistent (i.e. user-agnostic), try Application.CommonAppDataPath.
If this is a .NET service, I think you could use IsolatedStorage.