GitHub Codespaces secrets when using VS Code devcontainers locally

I am using a GCP service account file as a GitHub Codespaces secret, and I am able to access it from the Codespace container, as explained here.
Now I also want to support developing locally, without GitHub Codespaces, but still using VS Code devcontainers.
I also hold the service account file on my local filesystem, but outside of the git repo (for obvious reasons). How should I reference it?

You can use the mounts property in devcontainer.json. Codespaces ignores bind mounts (see the documentation for details), so you should be able to mount the file from your local filesystem without affecting how your Codespaces are built/run.
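For example, assuming the key file lives at ~/.config/gcloud/sa-key.json on the host (both paths here are illustrative), a devcontainer.json along these lines would bind-mount it into the local container:

```json
{
  "mounts": [
    "source=${localEnv:HOME}/.config/gcloud/sa-key.json,target=/workspaces/.gcp/sa-key.json,type=bind"
  ],
  "containerEnv": {
    "GOOGLE_APPLICATION_CREDENTIALS": "/workspaces/.gcp/sa-key.json"
  }
}
```

In Codespaces the bind mount is skipped, so the same file path is simply absent there and the Codespaces secret remains the source of the credentials.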

Update
I have released an extension on the marketplace to solve this use case: https://marketplace.visualstudio.com/items?itemName=pomdtr.secrets
It stores the secrets in the user keychain. Since it is a web extension, it runs on the client and also works with devcontainers.
Previous Answer
You can use the terminal.integrated.env.linux setting to pass the secret in your settings.json file.
You can disable settings sync using the settingsSync.ignoredSettings array:
{
  "terminal.integrated.env.linux": {
    "GITHUB_TOKEN": "<your-token>"
  },
  "settingsSync.ignoredSettings": [
    "terminal.integrated.env.linux"
  ]
}

Related

Prebuilding a Docker image *within* a GitHub Codespace when the image relies on the organization's other private repositories?

I'm exploring how best to use GitHub Codespaces for my organization. Our dev environment is a Docker image that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use that same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create one... but I'd like to avoid this. Ideally, I keep the same dev experience and get the codespace to prebuild by building the Docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user codespace secret, then setting up the ssh-agent with that key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because "it will not typically have access to user-scoped assets or secrets". To reiterate, this works for automated building, but not pre-building.
From this GitHub issue it looks like cloning via SSH is a complete no-go with prebuilds, because SSH needs a user-defined key, which isn't available from the onCreateCommand. The only potential workaround I can see is an organization-wide read-only SSH key... which seems potentially even sketchier than having user-created SSH keys as user secrets.
The other possibility I can think of is switching to HTTPS for the git clones. This would require adding access to the other repos, which is no big deal. But I can't quite see how to get access from within the Docker image. When I tried this, I was asked for a username and password when I ran git clone from within Docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever token GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens passed into the docker build process and used for access instead?
Thoughts and roasts welcome.
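For reference, the postCreateCommand hack described above might look roughly like this in devcontainer.json (the secret name DEPLOY_SSH_KEY is illustrative; Codespaces exposes user secrets as environment variables, and this assumes the key's newlines survive that round trip):

```json
{
  "postCreateCommand": "eval \"$(ssh-agent -s)\" && echo \"$DEPLOY_SSH_KEY\" | ssh-add -"
}
```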

How can I include secrets in my Docker image?

I've built myself a web API that uses a SQL database. I've used Visual Studio to create this project, and have the ability to right-click my project file and "manage user secrets".
It's in user secrets that I've stored my connection string, and I don't want to add it to my GitHub (private) repo.
The user secret is a JSON file.
How do I now include these secrets? Do I include them in the project, making them part of the image? Or do I do something fancy with the running instance?
There are many ways to go about this, but typically you either:
Extract your secrets from your codebase (git repo), inject them through environment variables at container startup, and then access them from your application code like any other environment variable. This is not the most secure option, but at least your secrets won't be in your VCS anymore.
Pull the secrets from some type of secrets manager (e.g. AWS Secrets Manager) straight from your application code. This is more secure than the first option, but requires more code changes and creates a dependency between your application and your secrets manager.
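A minimal sketch of the first option in Python (the variable name DB_CONNECTION_STRING is illustrative): the connection string is read from the environment at startup instead of being baked into the image or the repo.

```python
import os

def get_connection_string() -> str:
    # Read the secret from the environment at container startup;
    # fail fast with a clear error if it was never injected.
    conn = os.environ.get("DB_CONNECTION_STRING")
    if not conn:
        raise RuntimeError("DB_CONNECTION_STRING is not set")
    return conn
```

The variable would then be injected at run time, e.g. with docker run -e DB_CONNECTION_STRING=... your-api, so it never appears in the image layers or the repository.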

Docker pgAdmin container persistent config

I am new to Docker. What I want is a pgAdmin container that I can pull and that always has my configs and connections up to date. I was not really sure how to do that: can I have a volume that is always shared, for example between my Windows PC at home and the one at work? I couldn't find a good tutorial for this and don't know if it makes sense. Let's say my computer is stolen: I just want to install Docker, pull my images, and be back up and running.
What about a shared directory using Dropbox? As far as I know, local Dropbox directories are always synced with the Dropbox account, which means you can keep the config up to date on all of your devices.
Alternatively, you can save the configuration - as long as it does not contain sensitive data - in a git repository, which you can clone and then start using. Both cases can be used as volumes in Docker.
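For example, assuming the synced folder is at ~/Dropbox/pgadmin (path illustrative), the official dpage/pgadmin4 image keeps its data under /var/lib/pgadmin, so you could mount the synced folder there:

```shell
# Run pgAdmin with its data directory on a synced folder,
# so connections and settings survive the container and the machine
docker run -d \
  -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=you@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=changeme \
  -v ~/Dropbox/pgadmin:/var/lib/pgadmin \
  dpage/pgadmin4
```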
That's not something you can do with Docker itself. You can only push images to Docker Hub, and images do not contain information that you added to a container during an execution.
What you could do is use a backup routine to S3, for example, and sync your config and connections between the Docker container running on your home PC and the one at work.

Create environment variables for Kubernetes main container in Kubernetes Init container

I use a Kubernetes init container to provision the application's database. After this is done, I want to provide the DB's credentials to the main container via environment variables.
How can this be achieved?
I don't want to create a Kubernetes Secret inside the init container, since I don't want to save the credentials there!
I see several ways to achieve what you want:
From my perspective, the best way is to use a Kubernetes Secret. @Nebril has already provided that idea in the comments. You could generate it in the init container and remove it with a PreStop hook, for example. But you don't want to go that way.
You can use a shared volume between the init container and your main container. The init container generates an environment-variables file, db_cred.env, in the volume, which you can mount, for example, at /env. You can then load it by modifying the command of your main container in the Pod spec to run source /env/db_cred.env before the main script that starts your application. @user2612030 already gave you that idea.
Another alternative is Vault by HashiCorp, which you can use as storage for all your credentials.
You can use a custom solution that reads and writes directly to etcd from Kubernetes apps. Here is a library example: k8s-kv.
But anyway, the best and most appropriate way to store credentials in Kubernetes is Secrets. It is more secure and easier than almost any other way.
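A sketch of the shared-volume approach (image names and the generate-credentials command are illustrative): the init container writes db_cred.env into an emptyDir, and the main container sources it before starting.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: env-share
      emptyDir: {}   # shared between init and main containers, deleted with the Pod
  initContainers:
    - name: provision-db
      image: your-db-provisioner          # illustrative image
      # generate-credentials is a hypothetical script that provisions the DB
      # and prints lines like: export DB_PASSWORD=...
      command: ["sh", "-c", "generate-credentials > /env/db_cred.env"]
      volumeMounts:
        - name: env-share
          mountPath: /env
  containers:
    - name: app
      image: your-app                     # illustrative image
      # Load the generated variables, then exec the real entrypoint
      command: ["sh", "-c", ". /env/db_cred.env && exec /start-app"]
      volumeMounts:
        - name: env-share
          mountPath: /env
```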

How can I share my full app (.yml file) with others?

I created an app which consists of many components so I use docker-compose.
I published all my images into my private repository (but I also use public repos from other providers).
If I have many customers: how can they receive my full app?
I could send them my docker-compose.yml file by email, or if I have access to the servers, I can scp the .yml file.
But is there another solution to provide my full app without scp'ing a .yml file?
Edit:
So I just read about docker-machine. This looks good, and I already linked it with an Azure subscription.
Now what's the easiest way to deploy a new VM with my Docker application? Do I still have to scp my .yml file, ssh into the machine, and start docker-compose? Or can I point to a specific .yml during VM creation and have it run automatically?
There is no official distribution system specifically for Compose files, but there are many options.
The easiest option would be to host the Compose file on a website. You could even use GitHub or GitHub Pages. Once it is hosted by an HTTP server, you can curl it to download it.
There is also:
composehub, a community project to act as a package manager for Compose files
Some related issues: #1597, #3098, #1818
The experimental DAB (Distributed Application Bundle) feature in Docker
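The hosted-file approach above can be as simple as (URL is illustrative):

```shell
# Download the hosted Compose file and start the full app
curl -fsSL https://raw.githubusercontent.com/yourorg/yourapp/master/docker-compose.yml -o docker-compose.yml
docker-compose up -d
```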
