How to assign AWS credentials to SYSTEM user? - aws-powershell

I have a PowerShell script that uploads audit logs to an S3 repository. The script works fine when I run it while logged in, but I need to define a scheduled task, and the task needs to run as the SYSTEM user. Can someone recommend a way to provide the SYSTEM user with the AWS credentials so that they are not stored on the machine in clear text?

I just found what I think is the answer: if I run the script through the Task Scheduler once with the 'Set-AWSCredentials' command in it, it creates the encrypted key info in C:\Windows\System32\config\systemprofile\AppData\Local\AWSToolkit\RegisteredAccounts.json. I was then able to remove the 'Set-AWSCredentials' command, and the script seems to run successfully.
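For reference, that one-time bootstrap is a single cmdlet call added temporarily to the script that the scheduled task runs as SYSTEM (the key values and profile name below are placeholders):

# Run once via the scheduled task, then remove this line from the script.
# Newer AWSPowerShell versions name the cmdlet Set-AWSCredential; Set-AWSCredentials is the older alias.
Set-AWSCredential -AccessKey AKIAIOSFODNN7EXAMPLE -SecretKey wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY -StoreAs default

Because the task runs as SYSTEM, the profile is encrypted into that account's credential store (the RegisteredAccounts.json path above), and later runs of the script pick it up without any clear-text credentials on disk.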

I don't believe this is possible. Your authentication info will have to be stored in the clear on the client machine one way or another, and Windows doesn't provide any convenient methods for protecting that information.
You might find it more convenient to manage access if you use SSH + encryption certificates as credentials (I believe there's an SSH client for PowerShell, though I haven't tried using it for AWS work).

Related

Prebuilding a docker image *within* a GitHub Codespace when the image relies on the organization's other private repositories?

I'm exploring how best to use GitHub Codespaces for my organization. Our dev environment consists of a Docker dev environment that we run on local machines. It relies on pulling other private repos we maintain via the local machine's ssh-agent. I'd ideally like to keep things as consistent as possible and have our Codespaces solution use the same Docker dev environment from within the codespace.
There's a naive solution of just building a new codespace with no devcontainer.json and going through all the setup for a dev environment each time you create a new one... but I'd like to avoid this. Ideally, I keep the same dev experience and am able to get the codespace to prebuild by building the docker image and somehow getting access to our other private repos.
An extremely hacky-feeling solution that works for automated building is creating an SSH key and storing it as a user codespace secret, then setting up the ssh-agent with that SSH key as part of the postCreateCommand. My understanding is that this would not work with the onCreateCommand because it "will not typically have access to user-scoped assets or secrets." To reiterate, this works for automated building, but not for prebuilding.
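A rough sketch of that postCreateCommand setup, assuming the user secret is exposed as an environment variable named SSH_PRIVATE_KEY (the secret name and paths are illustrative):

#!/usr/bin/env bash
# .devcontainer/post-create.sh, referenced by "postCreateCommand" in devcontainer.json
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' "$SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519   # Codespaces user secret, injected as an env var
chmod 600 ~/.ssh/id_ed25519
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519                               # now git clone git@github.com:my-org/private-repo works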
From this GitHub issue it looks like cloning via SSH is a complete no-go with prebuilds, because SSH will need a user-defined SSH key, which isn't available from the onCreateCommand. The only potential workaround I can see for this is having an organization-wide read-only SSH key... which seems potentially even sketchier than having user-created SSH keys as user secrets.
The other possibility I can think of is switching to HTTPS for the git clones. This would require adding access to the other repos, which is no big deal. BUT I can't quite see how to get access from within the Docker image. When I tried this, I got errors because I was asked for a username and password when I ran git clone from within Docker... even though git clone worked fine in the base codespace. Is there a way to forward whatever tokens GitHub uses for access to other repos into the docker build process? Is there a way to have user-generated tokens passed into the docker build process and used for access instead?
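One direction for the HTTPS route, sketched under the assumption that a suitably scoped token reaches the build as an environment variable (GITHUB_TOKEN, the repo, and the image name are placeholders, and this is not a verified prebuild recipe): forward the token to docker build as a BuildKit secret, so it is never baked into an image layer.

# build, forwarding the token as a BuildKit secret
docker build --secret id=github_token,env=GITHUB_TOKEN -t dev-image .

# Dockerfile fragment (BuildKit syntax)
RUN --mount=type=secret,id=github_token \
    git clone https://x-access-token:$(cat /run/secrets/github_token)@github.com/my-org/other-private-repo.git \
 && git -C other-private-repo remote set-url origin https://github.com/my-org/other-private-repo.git   # scrub the token from .git/config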
Thoughts and roasts welcome.

Need an example of docker run --user on a Windows server running Docker

On my Windows Server 2016 machine, I am trying to figure out the run command syntax to run a Docker image as a user in my LDAP. I read this article, but I am not following it very well (different environments).
Perhaps I am misunderstanding the concept altogether, but in the end I need to run the container as a specific user in our Active Directory.
Any links to well-documented docker run --user examples would be appreciated...
One of the things that is confusing is trying to figure out the UserId and such...
The answer depends on the use case, but maybe gMSA (group Managed Service Account) authentication would help? Basically, with gMSA authentication, you add the host OS to an AD domain, and containers running on it can share its privileges to use things like network drives. That way, you don't need to pass credentials every time you access them.
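As a rough sketch of the moving parts (the account and file names below are made up; the links that follow cover the real setup):

# On a domain-joined container host: create the gMSA in AD, then generate a
# credential spec for it with Microsoft's CredentialSpec PowerShell module, e.g.
#   New-CredentialSpec -AccountName WebApp01
# which drops a JSON file under C:\ProgramData\Docker\CredentialSpecs.

# Run the container with that credential spec; services inside the container that
# run as Network Service / Local System then authenticate to AD as the gMSA.
docker run -d --security-opt "credentialspec=file://contoso_webapp01.json" -p 80:80 mcr.microsoft.com/windows/servercore/iis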
The Microsoft team has a good write-up on it here:
Active Directory Service Accounts for Windows Containers
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts
Also, artisticcheese has a fantastic walkthrough:
Enabling integrated Windows Authentication in windows docker container
https://artisticcheese.wordpress.com/2017/09/09/enabling-integrated-windows-authentication-in-windows-docker-container/
Hope this helps.

Is it safe to fetch secure credentials in Travis CI in an open-source repo?

My open-source app uses AWS's Parameter Store feature to keep secure copies of app secrets (database passwords, etc). When my app is deployed to EC2, a script fetches these secrets and makes them available to the code, and I run this same script locally too.
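(The fetch script presumably amounts to something like the following; the parameter name and environment variable are placeholders.)

#!/usr/bin/env bash
# Pull one secret from SSM Parameter Store and expose it to the app/tests.
# Assumes AWS credentials are already available (instance profile on EC2, keys locally).
export DB_PASSWORD="$(aws ssm get-parameter \
    --name /myapp/production/db_password \
    --with-decryption \
    --query Parameter.Value \
    --output text)"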
Some of my tests need database access to run and therefore I need my Travis build script to have access.
Is it safe for me to run that script on my (public) Travis build? As far as I can tell, Travis doesn't expose the build artefacts anywhere (beyond what's on GitHub, which doesn't have my secrets). I know I can encrypt config items in my .travis.yml file but ideally there'd be a single place where this data lives, then I can rotate config keys without updating them in multiple places.
Is there a safe/better way I can do this?
Not really.
If you're accepting pull requests, it's trivially easy to create a pull request that publicly dumps the keys to the Travis console. Since there are no restrictions on what PRs can modify, edit, etc., wherever the keys are, someone could easily modify the code and print them.
Travis built its secure environment variables to prevent this type of attack, i.e. by not exposing the variables to PRs. That means tests which require secure environment variables can't be run on pull-request builds, but that's a trade-off one has to make.
As StephenG has mentioned, it is trivial to expose a secret, since it is simply an environment variable that CI systems try to mask.
Example with Bash:
$ SECRET=mysecret
$ rev <<< $SECRET
tercesym
Since the output is no longer a perfect string match, TravisCI, Jenkins, and GitHub Actions will not be able to mask the secret anymore and will let it be displayed on the console. There are other ways too, such as uploading the secret to an external server. For example, one could just do env >> debug.log, and if that debug.log file is archived, the secret would be in that file.
Rather than connecting to your real backends, which is not a good idea for CI pipelines anyway, I would recommend that you use a service container.
We do not use TravisCI, so I don't know how practical it is with Travis, but with Jenkins on Kubernetes and with GitHub Actions you can add a service container that runs in parallel with your tests.
For example, if your integration tests need DB access to MySQL or PostgreSQL, just run the containers for them. For DynamoDB, Amazon provides a container image for the explicit purpose of testing your code against the DynamoDB API. If you need more AWS services, LocalStack offers fake implementations of core AWS services such as S3.
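For instance, a disposable DynamoDB (or broader fake-AWS) endpoint for the test run is a one-liner with the images mentioned above:

$ docker run -d -p 8000:8000 amazon/dynamodb-local          # local DynamoDB API for tests
$ docker run -d -p 4566:4566 localstack/localstack          # fake S3 and other core AWS services
$ aws dynamodb list-tables --endpoint-url http://localhost:8000   # dummy AWS credentials/region are enough here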
Now if you actually need to write data to your DBs in AWS ... you should probably expose that data as build artifacts and trigger a notification event instead so that a custom backend on your side can fetch the data on the trigger. Something like a small Lambda function for example.

Docker and SSH for development with phpStorm

I am trying to set up a small development environment using Docker. The phpStorm team is working hard on getting Docker integrated for the remote interpreter, and therefore for debugging, but sadly it is not working yet (see here). The only way I have to add such debugging capabilities is by creating and enabling SSH access to the container, which works like a charm.
Now, I have read a lot about this, and some people, like the one in this post, say it is not recommended. I have read others which say to have a dedicated SSH Docker container, but I don't get how that fits into this environment.
I am already creating a docker-user user (check the repo here) for certain tasks like running composer without root permissions. That user could easily be used for this SSH stuff by adding a default password to it.
How would you handle this under such circumstances?
I too have implemented the SSH server workaround when using JetBrains IDEs.
Usually what I do is add a public ssh key to the ~/.ssh/authorized_keys file for the SSH user in the target container/system, and enable passwordless sudo.
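Concretely, that amounts to a few lines when building the image or in an entrypoint (the user name comes from the question; key paths are examples):

# as the dev user (e.g. docker-user) inside the container
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# passwordless sudo for that user (run as root)
echo 'docker-user ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/docker-user
chmod 440 /etc/sudoers.d/docker-user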
One solution that I've thought of, but not yet had the time to implement, would be to make some sort of SSH service that would be a gateway to a docker exec command. That would potentially allow at least some functionality without having to modify your images in any way for this dev requirement.

Rsync over SSH from an ant script with a password

I have a virtual machine running on my developer machine, and I need to rsync files to it over SSH via an ant build script to "deploy". In production, security is a concern, but I really don't care about secure SSH practices when communicating with a dev VM on my local machine.
I could have created a cert and installed it in my SSH keys, but that's a little annoying. I'd much rather just send my password to rsync via the ant script and call it a day.
(EDIT - If you reeeeally can't handle this question without an example, let's assume this server is outside my control, and their evil sysadmin refuses to allow me to sign in with an SSH key for whatever reason. Who knows? He's just crazy man!)
Is there any way to invoke SSH, or more specifically rsync in non-interactive mode, without editing your ssh config? In other words, just supply the password?
I happen to have already figured out a solution to this, but it wasn't very easy, so I wanted to share it.
Basically, I used a command line program called "expect" to fill my password into rsync's interactive mode. I also didn't want to have to write it up as a script, so I condensed it into a single command. This also works for ssh as well as rsync, if you need that for some reason.
Maybe there's a better way, but this seems to work fine.
192.168.64.131 is obviously my local VM's ip in the following. Replace login_name and login_password with your ssh login & pass.
expect -c 'spawn rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/; expect "*?assword:*" {send "login_password\r"; interact};'
It is much easier and more secure to use an SSH key. An example is given in the following answer:
Ant, download fileset from remote machine
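(For completeness, a minimal sketch of the key-based route against the same VM, using the host, user, and path from the example above:)

$ ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519        # key without a passphrase, for the dev VM only
$ ssh-copy-id login_name@192.168.64.131                   # prompts for the password once
$ rsync -avz -e ssh ./ login_name@192.168.64.131:/var/www/auth/   # no password needed afterwards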
