Does VS2019 honor the launchSettings.json "environmentVariables"?
"Docker": {
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
"environmentVariables": {
"Business__ApiUrl": "https://localhost:5113",
"Business__BusinessUrl": "https://localhost:5123",
"ASPNETCORE_URLS": "https://+:5123;http://+:5120",
"ASPNETCORE_HTTPS_PORT": "5123 ",
"ASPNETCORE_ENVIRONMENT": "Development"
},
"publishAllPorts": true,
"httpPort": 5120,
"useSSL": true,
"sslPort": 5123
}
I don't see Business__ApiUrl used in Visual Studio's docker run command or inside the container. VS also disregards ASPNETCORE_URLS and ASPNETCORE_HTTPS_PORT.
Here's the command VS executes (copied from the logs):
1> docker run -dt -v "C:\Users\me\vsdbg\vs2017u5:/remote_debugger:rw" -v "D:\dev\3ta\src\3ta.business:/app" -v "D:\dev\3ta\src:/src/" -v "C:\Users\me\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro" -v "C:\Users\me\AppData\Roaming\ASP.NET\Https:/root/.aspnet/https:ro" -v "C:\Users\me\.nuget\packages\:/root/.nuget/fallbackpackages3" -v "C:\Program Files (x86)\Microsoft Visual Studio\Shared\NuGetPackages:/root/.nuget/fallbackpackages" -v "C:\Program Files (x86)\Microsoft\Xamarin\NuGet\:/root/.nuget/fallbackpackages2" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_LOGGING__CONSOLE__DISABLECOLORS=true" -e "ASPNETCORE_ENVIRONMENT=Development" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages3" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages;/root/.nuget/fallbackpackages2;/root/.nuget/fallbackpackages3" -p 5120:5120 -p 5123:5123 -P --name 3ta.business --entrypoint tail image3tabusiness:dev -f /dev/null
1> 29016d3f14e56b0bda7d63723a8ccff3030c1ee061b8357839e320d4eb635f0a
I don't mind using an alternative way of passing settings; I just don't want to rule out a method by mistake.
It's frustrating not knowing why something fails.
Unfortunately, it seems it does not. It is a bit of a hack, but one way to get around this is to pass environment variables using DockerfileRunArguments in launchSettings.json, as shown below, making sure to use backslashes to escape the extra double quotation marks:
"Docker": {
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
"publishAllPorts": true,
"httpPort": 5120,
"useSSL": true,
"sslPort": 5123,
"DockerfileRunArguments": "-e \"Business__ApiUrl=https://localhost:5113\""
}
This appends the arguments to the docker run command, and the passed environment variables can then be seen in the container.
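The backslash escaping can be tricky to get right. As a sanity check (using python3 here only to decode the JSON string literal; this is not part of the original answer), this is what the escaped value turns into once launchSettings.json is parsed:

```shell
# Decode the JSON string literal from launchSettings.json to see the raw
# flags that get appended to docker run:
printf '%s\n' '"-e \"Business__ApiUrl=https://localhost:5113\""' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin))'
```

which prints `-e "Business__ApiUrl=https://localhost:5113"`, i.e. an ordinary docker run flag.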
For example, I have this code in my pipeline:
sshPublisher(
    failOnError: true,
    continueOnError: false,
    publishers: [
        sshPublisherDesc(
            configName: 'some_config',
            verbose: true,
            transfers: [
                sshTransfer(
                    sourceFiles: 'some_path/some_script.sh',
                    remoteDirectory: '/tmp',
                    removePrefix: 'some_path',
                    execCommand: 'sudo cp /tmp/some_script /usr/local/bin/some_script && sudo chmod a+x /usr/local/bin/some_script'
                )
            ]
        )
    ]
)
But during execution of this code, I get this error:
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The SSH config some_config contains a username and an SSH private key.
How can I execute sudo commands?
If I use usePty, the process waits for a password indefinitely.
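For reference, the error means sudo has no terminal to prompt on. One common way around it (an assumption on my part, not something from this thread) is to grant the remote user passwordless sudo for exactly the commands the pipeline needs, via a file under /etc/sudoers.d/ (the username jenkins and the binary paths are hypothetical and distro-dependent):

```
# /etc/sudoers.d/jenkins -- allow only the two commands the pipeline runs
jenkins ALL=(ALL) NOPASSWD: /bin/cp, /bin/chmod
```

With that in place, the execCommand's sudo calls no longer need a terminal or a password.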
Situation and Problem
I am running macOS Mojave 10.14.5, upgraded bash as described here, and have a TeXlive docker container (basically that one) that I want to call to typeset LaTeX files. This works very well, and execution with the following tasks.json also worked flawlessly up until some recent update (which I cannot pin down, as I am not using this daily).
tasks.json
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "type": "shell",
            "label": "runit",
            "group": {
                "kind": "build",
                "isDefault": true
            },
            "command": "docker",
            "args": [
                "run",
                "-v",
                "${fileDirname}:/doc/",
                "-t",
                "-i",
                "mytexlive",
                "pdflatex",
                "${fileBasename}"
            ],
            "problemMatcher": []
        },
        {
            "type": "shell",
            "label": "test",
            "command": "echo",
            "args": [
                "run",
                "-v",
                "${fileDirname}:/doc/",
                "-t",
                "-i",
                "mytexlive",
                "pdflatex",
                "${fileBasename}"
            ]
        }
    ]
}
Trying to run docker yields a "command not found":
> Executing task: docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
/usr/local/bin/bash: docker: command not found
The terminal process command '/usr/local/bin/bash -c 'docker run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex'' failed to launch (exit code: 127)
... while the echo task works just fine:
> Executing task: echo run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex <
run -v /path/to/file:/doc/ -t -i mytexlive pdflatex file_name.tex
Even though it once worked just as described above, and the very same command works in the terminal, it now fails when I execute it as a build task. Hence, my
Question
How can I use docker in a build task?
Or, how can I fix the problem in the setup above?
Additional notes
Trying the following yielded the same "command not found"
{
"type": "shell", "label": "test",
"command": "which", "args": ["docker"]
}
... even though this works:
bash$ /usr/local/bin/bash -c 'which docker'
/usr/local/bin/docker
bash$ echo $PATH
/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
edit: One more note:
I am using a context entry to start vscode via an automator script that runs the following bash command, with the right-clicked element passed as the variable:
#!/bin/sh
/usr/local/bin/code -n "$1"
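Since /usr/local/bin/docker resolves fine in a plain shell, one stop-gap sketch (not a proper fix; the task label is made up) is to bypass the PATH lookup entirely by using the absolute path in the task:

```
{
    "type": "shell",
    "label": "runit-abspath",
    "command": "/usr/local/bin/docker",
    "args": ["run", "-v", "${fileDirname}:/doc/", "-t", "-i", "mytexlive", "pdflatex", "${fileBasename}"]
}
```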
Since there hasn't been any progress here and I got help on GitHub, I will just answer myself so that others who land here searching for a solution won't be let down.
Please give all the acknowledgement to joaomoreno for his answer here
Turns out that by starting vscode via a context entry there is an issue with an environment variable. Starting it like this has fixed the problem thus far:
#!/bin/sh
VSCODE_FORCE_USER_ENV=1 /usr/local/bin/code -n "$1"
I am trying to run a service using DC/OS and Docker. I created my Stack using the template for my region from here. I also created the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y expect openssh-client
WORKDIR "/root"
ENTRYPOINT eval "$(ssh-agent -s)" && \
    mkdir -p .ssh && \
    echo $PRIVATE_KEY > .ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa && \
    expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " && \
    while true; do ssh-add -l; sleep 2; done
I have a private repository that I would like to clone/pull from when the docker container starts. This is why I am trying to add the private key to the ssh-agent.
If I run this image as a docker container locally and supply the private key using the PRIVATE_KEY environment variable, everything works fine. I see that the identity is added.
The problem that I have is that when I try to run a service on DC/OS using the docker image, the ssh-agent does not seem to remember the identity that was added using the private key.
I have checked the error log from DC/OS. There are no errors.
Does anyone know why running the docker container on DC/OS is any different compared to running it locally?
EDIT: I have added details of the description of the DC/OS service in case it helps:
{
    "id": "/SOME-ID",
    "instances": 1,
    "cpus": 1,
    "mem": 128,
    "disk": 0,
    "gpus": 0,
    "constraints": [],
    "fetch": [],
    "storeUrls": [],
    "backoffSeconds": 1,
    "backoffFactor": 1.15,
    "maxLaunchDelaySeconds": 3600,
    "container": {
        "type": "DOCKER",
        "volumes": [],
        "docker": {
            "image": "IMAGE NAME FROM DOCKERHUB",
            "network": "BRIDGE",
            "portMappings": [{
                "containerPort": SOME PORT NUMBER,
                "hostPort": SOME PORT NUMBER,
                "servicePort": SERVICE PORT NUMBER,
                "protocol": "tcp",
                "name": "default"
            }],
            "privileged": false,
            "parameters": [],
            "forcePullImage": true
        }
    },
    "healthChecks": [],
    "readinessChecks": [],
    "dependencies": [],
    "upgradeStrategy": {
        "minimumHealthCapacity": 1,
        "maximumOverCapacity": 1
    },
    "unreachableStrategy": {
        "inactiveAfterSeconds": 300,
        "expungeAfterSeconds": 600
    },
    "killSelection": "YOUNGEST_FIRST",
    "requirePorts": true,
    "env": {
        "PRIVATE_KEY": "ID_RSA PRIVATE_KEY WITH \n LINE BREAKS"
    }
}
Docker Version
Check that your local version of Docker matches the version installed on the DC/OS agents. By default, the DC/OS 1.9.3 AWS CloudFormation templates use CoreOS 1235.12.0, which comes with Docker 1.12.6. It's possible that the entrypoint behavior has changed since then.
Docker Command
Check the Mesos task logs for the Marathon app in question and see what docker run command was executed. You might be passing it slightly different arguments when testing locally.
Script Errors
As mentioned in another answer, the script you provided has several errors that may or may not be related to the failure.
echo $PRIVATE_KEY should be echo "$PRIVATE_KEY" to preserve line breaks. Otherwise key decryption will fail with Bad passphrase, try again for /root/.ssh/id_rsa:.
expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " should be expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\"; send \"\n\"; interact ". It's missing a semi-colon and a line break. Otherwise the expect command fails without executing.
File Based Secrets
Enterprise DC/OS 1.10 (1.10.0-rc1 out now) has a new feature named File Based Secrets which allows for injecting files (like id_rsa files) without including their contents in the Marathon app definition, storing them securely in Vault using DC/OS Secrets.
Creation: https://docs.mesosphere.com/1.10/security/secrets/create-secrets/
Usage: https://docs.mesosphere.com/1.10/security/secrets/use-secrets/
File based secrets won't do the ssh-add for you, but it should make it easier and more secure to get the file into the container.
Mesos Bug
Mesos 1.2.0 switched to using Docker's --env-file instead of -e to pass in environment variables. This triggers a Docker --env-file bug: it doesn't support line breaks. A workaround was put into Mesos and DC/OS, but the fix may not be in the minor version you are using.
A manual workaround is to convert the id_rsa to base64 for the Marathon definition and decode it back in your entrypoint script.
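A sketch of that workaround (the key content below is a stand-in, and base64 -d assumes GNU coreutils; on macOS the flag is -D):

```shell
# Encode the multi-line key to a single base64 line for the Marathon env var,
# then decode it back inside the entrypoint; the line breaks survive the trip.
PRIVATE_KEY=$'-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----'
ENCODED=$(printf '%s' "$PRIVATE_KEY" | base64 | tr -d '\n')   # safe to pass as an env var
DECODED=$(printf '%s' "$ENCODED" | base64 -d)                 # run in the container
[ "$DECODED" = "$PRIVATE_KEY" ] && echo "key intact"
```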
The key file contents being passed via PRIVATE_KEY originally contain line breaks. After echoing the PRIVATE_KEY variable content to ~/.ssh/id_rsa, the line breaks will be gone. You can fix that issue by wrapping the $PRIVATE_KEY variable in double quotes.
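A minimal demonstration of that quoting difference (any multi-line value works; this one is made up):

```shell
PRIVATE_KEY=$'line one\nline two\nline three'
echo $PRIVATE_KEY | wc -l     # unquoted: word splitting collapses it to 1 line
echo "$PRIVATE_KEY" | wc -l   # quoted: all 3 lines are preserved
```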
Another issue arises when the container is started without an attached TTY, i.e. without the -i -t command line parameters to docker run. The password prompt will then fail and the SSH key won't be added to the ssh-agent. For a container run on DC/OS, interaction probably doesn't make sense anyway, so you should change your entrypoint script accordingly. That requires your SSH key to be passwordless.
This changed Dockerfile should work:
ENTRYPOINT eval "$(ssh-agent -s)" && \
    mkdir -p .ssh && \
    echo "$PRIVATE_KEY" > .ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa && \
    ssh-add /root/.ssh/id_rsa && \
    while true; do ssh-add -l; sleep 2; done
I have the following question:
How can I run Docker with experimental features enabled (like image squashing, docker build --squash=true, to reduce image size) on Ubuntu 16.04?
To turn on experimental Docker features, create the following file:
sudo nano /etc/docker/daemon.json
and add the following content to it:
{
    "experimental": true
}
Save the file (CTRL+X, then confirm with Enter) and exit. In the terminal, type:
sudo service docker restart
To check that the experimental functions are ON, type in the terminal:
docker version
And you should see Experimental: true
UPDATE
Instead of nano you can use this one-liner:
echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json
I tried everything here on an Ubuntu 18.04 VM on my Mac; nothing worked. Everywhere on the interwebs said the same thing, but the one thing that finally got experimental turned on was @Michael Haren's tiny answer:
fyi- to enable this for the client, the config file to create is ~/.docker/config.json and the value is "enabled", not true
which meant something like this for me:
$ mkdir ~/.docker
$ echo '{ "experimental": "enabled" }' > ~/.docker/config.json
$ sudo systemctl restart docker
$ docker version
...
Experimental: true
...
This should be a top-level answer. So, credit to them (except sweet internet karma points for me...).
If you only want to enable it temporarily, without modifying any files, you can export DOCKER_CLI_EXPERIMENTAL=enabled. The following turns on experimental mode for your client:
$ docker version
Experimental: false
$ export DOCKER_CLI_EXPERIMENTAL=enabled
$ docker version
Experimental: true
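If you only need it for a single command rather than the whole session, the one-shot prefix form also works. A quick demo of the scoping (using sh and echo as a stand-in for the docker CLI):

```shell
# The assignment applies only to the command it prefixes...
DOCKER_CLI_EXPERIMENTAL=enabled sh -c 'echo "$DOCKER_CLI_EXPERIMENTAL"'   # prints: enabled
# ...and does not leak into the surrounding shell:
echo "${DOCKER_CLI_EXPERIMENTAL:-unset}"   # prints: unset (if it wasn't already exported)
```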
Posting this to help those who are running Docker on macOS.
You will need to enable experimental in two files: one for the client and another for the Docker Engine.
I suggest opening the files manually instead of echoing directly into them, as they might contain other configuration that you don't want to overwrite accidentally.
For the client, open ~/.docker/config.json and add "experimental": "enabled" at the top level of the config, as below:
{
    "experimental" : "enabled",
    "auths" : {
        "harbor.xxx.com" : {
        }
    },
    "credsStore" : "desktop"
}
For the Docker Engine, open ~/.docker/daemon.json and add "experimental": true at the top level of the config, as below:
{
    "features": {
        "buildkit": true
    },
    "experimental": true,
    "builder": {
        "gc": {
            "defaultKeepStorage": "20GB",
            "enabled": true
        }
    }
}
Do note that the "value" of experimental is different between client and server.
Once done, restart Docker using the command below:
killall Docker && open /Applications/Docker.app
then verify the result
docker version
Alternatively, on systems managed by systemd, you can append the --experimental flag directly to the dockerd service unit:
sudo sed -i 's/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock --experimental/g' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
I think you can solve this on Linux using systemctl, as described by https://stackoverflow.com/a/70460819/433814 in this SO answer. However, first you need to edit the correct files. Here's the way to set it up on macOS, if you were looking for a similar answer.
Docker run with experimental features on macOS
Just set the variable ENABLED=true or ENABLED=false and this script will automagically turn it on or off, writing to both files.
NOTE: You MUST have jq installed for the in-place updates to work.
ENABLED=true; \
CONFIG=~/.docker/config.json; DAEMON=~/.docker/daemon.json ; \
cat <<< $(jq --argjson V ${ENABLED} '.experimental = $V' ${DAEMON}) > ${DAEMON} ; \
cat <<< $(jq --arg V $(if [ "${ENABLED}" = "true" ]; then echo "enabled"; else echo "disabled"; fi) '.experimental = $V' ${CONFIG}) > ${CONFIG} ; \
cat ~/.docker/config.json ; \
cat ~/.docker/daemon.json
Output confirmation
The script prints both files back, confirming the change:
{
    "auths": {
        "https://index.docker.io/v1/": {},
        "registry.gitlab.com": {}
    },
    "credsStore": "desktop",
    "experimental": "enabled",
    "currentContext": "default"
}
{
    "builder": {
        "gc": {
            "defaultKeepStorage": "20GB",
            "enabled": true
        }
    },
    "experimental": true,
    "features": {
        "buildkit": true
    }
}
Restart Docker Engine on macOS
Just run the following:
killall Docker && open /Applications/Docker.app
References
JQ convert to number, convert to boolean when generating new json from shell variables
passing arguments to jq filter
I'm using the official RabbitMQ Docker image (https://hub.docker.com/_/rabbitmq/)
I've tried editing the rabbitmq.config file inside the container after running
docker exec -it <container-id> /bin/bash
However, this seems to have no effect on the rabbitmq server running in the container. Restarting the container obviously didn't help either since Docker starts a completely new instance.
So I assumed that the only way to configure rabbitmq.config for a Docker container was to set it up before the container starts running, which I was able to partly do using the image's supported environment variables.
Unfortunately, not all configuration options are supported by environment variables. For instance, I want to set {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} in rabbitmq.config.
I then found the RABBITMQ_CONFIG_FILE environment variable, which should allow me to point to the file I want to use as my config file. However, I've tried the following with no luck:
docker service create --name rabbitmq --network rabbitnet \
  -e RABBITMQ_ERLANG_COOKIE='mycookie' --hostname="{{.Service.Name}}{{.Task.Slot}}" \
  --mount type=bind,source=/root/mounted,destination=/root \
  -e RABBITMQ_CONFIG_FILE=/root/rabbitmq.config rabbitmq
The default rabbitmq.config file, containing:
[ { rabbit, [ { loopback_users, [ ] } ] } ]
is what's in the container once it starts.
What's the best way to configure rabbitmq.config inside Docker containers?
The config file lives at /etc/rabbitmq/rabbitmq.config, so if you mount your own config file with something like this (I'm using docker-compose here to set up the image):
volumes:
  - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
that should do it.
In case you're hitting the issue where the configuration file gets created as a directory, try absolute paths.
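Put together, a minimal docker-compose.yml sketch (the service name and host-side paths are assumptions, not from the original answer) would look like:

```
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
```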
I'm able to run RabbitMQ with a mounted config using the following bash script:
#RabbitMQ props
env=dev
rabbitmq_name=dev_rabbitmq
rabbitmq_port=5672
#RabbitMQ container
if [ "$(docker ps -aq -f name=${rabbitmq_name})" ]; then
    echo "Cleaning up the existing ${rabbitmq_name} container"
    docker stop ${rabbitmq_name} && docker rm ${rabbitmq_name}
    echo "Creating and starting new ${rabbitmq_name} container"
    docker run --name ${rabbitmq_name} -d -p ${rabbitmq_port}:15672 -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro rabbitmq:3-management
else
    echo "Creating and starting new ${rabbitmq_name} container"
    docker run --name ${rabbitmq_name} -d -p ${rabbitmq_port}:15672 -v $PWD/rabbitmq/${env}/data:/var/lib/rabbitmq:rw -v $PWD/rabbitmq/${env}/definitions.json:/opt/definitions.json:ro -v $PWD/rabbitmq/${env}/rabbitmq.config:/etc/rabbitmq/rabbitmq.config:ro rabbitmq:3-management
fi
I also have the following config files in my rabbitmq/dev dir
definitions.json
{
    "rabbit_version": "3.7.3",
    "users": [{
        "name": "welib",
        "password_hash": "su55YoHBYdenGuMVUvMERIyUAqJoBKeknxYsGcixXf/C4rMp",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": ""
    }, {
        "name": "admin",
        "password_hash": "x5RW/n1lq35QfY7jbJaUI+lgJsZp2Ioh6P8CGkPgW3sM2/86",
        "hashing_algorithm": "rabbit_password_hashing_sha256",
        "tags": "administrator"
    }],
    "vhosts": [{
        "name": "/"
    }, {
        "name": "dev"
    }],
    "permissions": [{
        "user": "welib",
        "vhost": "dev",
        "configure": ".*",
        "write": ".*",
        "read": ".*"
    }, {
        "user": "admin",
        "vhost": "/",
        "configure": ".*",
        "write": ".*",
        "read": ".*"
    }],
    "topic_permissions": [],
    "parameters": [],
    "global_parameters": [{
        "name": "cluster_name",
        "value": "rabbit#98c821300e49"
    }],
    "policies": [],
    "queues": [],
    "exchanges": [],
    "bindings": []
}
rabbitmq.config
[
    {rabbit, [
        {loopback_users, []},
        {vm_memory_high_watermark, 0.7},
        {vm_memory_high_watermark_paging_ratio, 0.8},
        {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
        {heartbeat, 10}
    ]},
    {rabbitmq_management, [
        {load_definitions, "/opt/definitions.json"}
    ]}
].