How to install Ngrok 2.0 on the Linux subsystem on Windows 10 - ruby-on-rails

I am trying to use ngrok to create an introspected tunnel to localhost (a webhook development tool) on my Linux subsystem (PowerShell --> bash).
I tried to install ngrok using the following:
sudo apt install ngrok-client
From what I understand, this apt command installs ngrok v1.6.
When I attempt to execute ngrok 80, I get an error:
Invalid address server_addr 'ngrokd.ngrok.com:443'
I searched for this and was informed in another post that ngrok v1.6 is obsolete and that the only way to continue using ngrok is to upgrade to 2.0:
Testing PayPal with Rails
It's advised to download it from the website http://ngrok.com, which is simple enough, but what is the correct way to download it for Linux subsystem use? Should I download the Linux build and unzip it? Or am I supposed to download the Windows build?
Let me know if I'm misunderstanding anything

I have used Ubuntu for years and I trust the online instructions, so either do:
sudo apt-get update
sudo apt-get install ngrok-client
or try the instructions below and let me know if you have problems.
The same instructions are included here:
$ unzip /path/to/ngrok.zip
$ is just the prompt of your terminal; you do not need to type it. Open a terminal (Ctrl+Alt+T) and run the unzip command to extract the archive.
You will find the archive in your Downloads folder. / is the root of your machine, and your home folder lives under /home, so change into it with cd (change directory) followed by that path:
cd /home
Then run ls to list the directories. You should see a directory named after your username. You can get into the Downloads folder with cd <username>/Downloads, where <username> is your personal folder name.
At this point you are inside the Downloads directory. You can do an ls in your terminal, find the name of the file you downloaded (should be something like ngrok-stable-linux-amd64.zip) and run
unzip <file-name.zip>
where <file-name.zip> is the file you downloaded (something like ngrok-stable-linux-amd64.zip), or you can go back to the root directory and run
cd /
unzip /home/<username>/Downloads/<yourfile.zip>
Read the documentation on how to use ngrok. Try it out by running it from the command line:
./ngrok help
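Putting the steps together, here is a minimal sketch for the Linux subsystem, assuming you downloaded the Linux (amd64) zip; on the Windows Subsystem for Linux your Windows Downloads folder is typically mounted at /mnt/c/Users/<WindowsUser>/Downloads (the filename and username are placeholders):
cd /mnt/c/Users/<WindowsUser>/Downloads   # or wherever the zip landed
unzip ngrok-stable-linux-amd64.zip
sudo mv ngrok /usr/local/bin/
ngrok http 80                             # ngrok 2.x syntax; replaces the old "ngrok 80"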
Also, I read in the documentation that you can configure that server address.
There is some discussion online about this
Testing PayPal with Rails
I can help you more but I need your feedback

You can try this from the official docs.
sudo tar xvzf ~/Downloads/ngrok-v3-stable-linux-amd64.tgz -C /usr/local/bin
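After extracting, a quick sanity check and (for ngrok v3) registering your authtoken would look roughly like this; the token is a placeholder you get from your ngrok dashboard:
ngrok version
ngrok config add-authtoken <your-authtoken>
ngrok http 80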

Related

Ubuntu, shopsys install via composer, docker, still crashing

I want to install Shopsys via Composer and Docker, as recommended.
https://github.com/shopsys/shopsys/blob/master/docs/installation/installation-using-docker-linux.md
I installed git, php-fpm (configured), postgres (configured), composer, docker, docker-compose.
sudo apt install git
sudo apt install php7.2-fpm
sudo apt install postgresql
sudo apt install composer
sudo apt install docker-ce
sudo apt install docker-compose
Everything ok.
I added my user to docker group.
sudo usermod -a -G docker $(whoami)
Ok.
Next I made the folder /var/www/html/shopsys and created the Shopsys project via Composer.
composer create-project shopsys/project-base --no-install --keep-vcs
cd project-base/
Then I ran this in /var/www/html/shopsys/project-base:
./scripts/install.sh
Everything seemed to be OK until this:
[RuntimeException]
/var/www/html/vendor does not exist and could not be created.
I set permissions to 777 on the folder /var/www/html and ran it again, but got the same problem.
Then I ran this:
sudo composer install
It showed me this error:
....Exception\InvalidConfigurationException]
Invalid configuration for path "monolog.handlers.main": You can only use excluded_http_codes/excluded_404s with a FingersCrossedHandler definition
In ScriptHandler.php line 294:
An error occurred when executing the "'shopsys:domains-urls:configure'" command:
In BaseNode.php line 319:
...\Exception\InvalidConfigurationException]
Invalid configuration for path "monolog.handlers.main": You can only use excluded_http_codes/excluded_404s with a FingersCrossedHandler definition
...
etc.; the error is quite ugly.
The last error comes when I run the install.sh script:
file_put_contents(/var/www/html/vendor/composer/installed.json): failed to open stream: Permission denied
But this folder does not exist.
ls: cannot access '/var/www/html/vendor/': No such file or directory
Just a question: where could the problem be?
Is it possible to download the sources from some link, extract them, configure them, and view the result in a web browser in an easy way, for example like WordPress?
Thanks.
To solve the problem with vendor:
It seems that your UID and GID are different from the default 1000 that is set in docker-compose.yml for Linux by default.
To solve your issue, continue with step 3 in https://github.com/shopsys/shopsys/blob/master/docs/installation/installation-using-docker-linux.md#3-set-the-uid-and-gid-to-allow-file-access-in-mounted-volumes
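A quick way to check what your actual IDs are (just a sketch; where to put them in docker-compose.yml is shown in the linked step 3):
id -u   # your UID
id -g   # your GID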
You also found an issue with the installation script; I have created an issue for it on GitHub.
To solve the problem with Invalid configuration for path "monolog.handlers.main":
Currently there is a problem with the new minor version (3.4.0) of symfony/monolog-bundle that introduced a BC break. There is already an issue about this problem, and a fix has already been merged into Shopsys master.
To solve the problem in your project, add "symfony/monolog-bundle": ">=3.4.0" to the conflict section of your composer.json file and then run composer install again.
We try to answer questions on Stack Overflow as soon as possible, but we also have a Slack with many users where you might get your question answered much faster.

Apache Jena Commands not found

I'm trying to set up my system (Ubuntu 16.04) with Apache Jena 3.10.0, and followed the provided instructions, but I'm unable to access any of the commands that I should have access to.
For example, sparql --version and bin/sparql --version both return:
sparql: command not found
I have downloaded and extracted the files to /home/[user]/apache-jena-3.10.0, then run:
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
The command cd $JENA_HOME successfully goes to the apache-jena-3.10.0 directory.
I feel that there is a basic linux thing here that I'm missing, but I've tried a lot of things and had no luck so far. Any help would be greatly appreciated. Thanks!
The files in the download from Apache were not marked as executable. From the main apache-jena-3.10.0 directory, running chmod -R 775 bin made them executable so I could run them from the command line.
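Note also that export only affects the current shell session. A sketch of making the variables persistent, assuming bash and the paths from the question:
echo 'export JENA_HOME=/home/<user>/apache-jena-3.10.0' >> ~/.bashrc
echo 'export PATH=$PATH:$JENA_HOME/bin' >> ~/.bashrc
source ~/.bashrc
sparql --version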

"bash: aws: command not found" on Windows 7 in Git Bash

I'm trying to use AWS CLI to access CodeCommit. And it's sort of working. I am able to use the aws command in the Windows command prompt. However, when I try to access it using the Git Bash shell, it says
"bash: aws: command not found."
Additionally, when I try to do a git clone in the Windows command prompt against CodeCommit, it tries to run aws via the credential helper, which also results in "aws: command not found."
I followed the instructions in the AWS documentation, which suggests some directories to add to the PATH:
https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html#awscli-install-windows-path
Here's what my PATH variable looks like:
C:\Users\ddrayton\AppData\Local\Programs\Python\Python36\Scripts\;C:\Users\ddrayton\AppData\Local\Programs\Python\Python36\;C:\Windows\System32;;C:\Program Files\Docker Toolbox;C:\Users\ddrayton\MyCurl;%USERPROFILE%\AppData\Local\Programs\Python\Python36\Scripts;C:\Program Files\Amazon\AWSCLI;C:\Program Files (x86)\Amazon\AWSCLI;C:\Users\ddrayton\AppData\Local\Programs\Python\Python36;C:\Users\ddrayton\AppData\Local\Programs\Python\Python36\Scripts
But I'm not sure if it's a PATH problem, since the Windows command prompt has no problem accessing the "aws" command.
Any ideas?
Fixed this by simply installing the AWS CLI again but this time using Git Bash instead of the Windows command prompt.
pip install awscli
If anyone could provide some insight as to why this was necessary, it would be appreciated.
In my case, I think a recent update to the AWS CLI changed what gets run to aws.cmd (full path C:\Program Files\Amazon\AWSCLI\bin\aws.cmd).
Git Bash needs the extension spelled out (aws.cmd) for it to work.
In Bash, you could try typing aws.cmd vs aws. If the former works, but not the latter, you can do alias aws='aws.cmd' in your bash startup script. I don't know if it's the best solution, but it worked for me.
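A sketch of making that alias persistent, assuming Git Bash reads ~/.bashrc on your setup:
echo "alias aws='aws.cmd'" >> ~/.bashrc
source ~/.bashrc
aws --version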
FWIW, I think it's related to this:
https://unix.stackexchange.com/questions/280528/is-there-a-unix-equivalent-of-the-windows-environment-variable-pathext
On Windows 10 I installed it just once from Git Bash via pip install awscli --upgrade --user, as described in the AWS manual for CLI installation on Linux.
It installed the aws executables into %USERPROFILE%\AppData\Roaming\Python\Python37\Scripts.
After that, just add this folder to your PATH. Re-open Git Bash or cmd; it should work from both places.
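For Git Bash specifically, a sketch of adding that folder to PATH, assuming your Git Bash home directory maps to your Windows user profile and the Python37 path from above:
echo 'export PATH="$PATH:$HOME/AppData/Roaming/Python/Python37/Scripts"' >> ~/.bashrc
source ~/.bashrc
aws --version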

`docker-credential-gcloud` not in system PATH

After the latest updates to gcloud and docker I'm unable to access images on my google container repository. Locally when I run: gcloud auth configure-docker as per the instructions after updating gcloud, I get the following message:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.
Running which docker-credential-gcloud returns docker-credential-gcloud not found.
I have no other gcloud-related path issues and for the life of me can't figure out how to install/add docker-credential-gcloud to path. Here's what I have installed (shown via gcloud version):
Google Cloud SDK 197.0.0
beta 2017.09.15
bq 2.0.31
container-builder-local
core 2018.04.06
docker-credential-gcr
gsutil 4.30
I also have Docker CE Version 18.03.0-ce-mac60 (23751).
Here's my $PATH: /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
I also ran source /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/path.zsh.inc on original gcloud install.
Note: every docker-credential-gcr below can be replaced with docker-credential-gcloud. I think it is just a matter of different gcloud versions, but I might be wrong.
I used Homebrew Cask to install gcloud too. I installed docker-credential-gcr with
$ gcloud components install docker-credential-gcr
And then, like you said, which docker-credential-gcr doesn't give you anything.
So I ran which gcloud and found there is a symlink to gcloud in /usr/local/bin. This symlink was created by Homebrew when you installed gcloud in the first place. docker-credential-gcr, however, wasn't installed by Homebrew but by gcloud itself, so there is no symlink for it.
I called readlink /usr/local/bin/gcloud and found out gcloud is installed in /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/.
Then:
$ ls /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin
You should see docker-credential-gcr listed there.
I simply linked it to /usr/local/bin:
$ ln -s \
/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/docker-credential-gcr \
/usr/local/bin/
Then run:
$ docker-credential-gcr configure-docker
It should succeed.
I just had the same issue on Windows, running Docker with Linux containers (Docker engine v19.03.8) and docker-compose. I do not use gcloud in my Dockerfiles...
DT1001 dockerpycreds.errors.InitializationError:
docker-credential-gcloud not installed or not available in PATH
Option 1: Edit the Docker configuration file and remove all gcloud entries from it (see the sketch after these options).
Windows c:/Users/<your account>/.docker/config.json
Linux & MacOS ~/.docker/config.json
Option 2: Go to Troubleshoot -> Reset to factory defaults.
After this my docker compose was creating containers and running the images without any issues.
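For reference, the gcloud entries in config.json live under credHelpers and look roughly like this (a sketch; your file will likely contain other keys that you should keep):
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}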
On MacOS
Step 1:
Install gcloud and docker-credential-gcr,
following this tutorial
Step 2:
$ ln -s /usr/local/google-cloud-sdk/bin/docker-credential-gcr /usr/local/bin/docker-credential-gcloud
Step 3:
$ rm -rf ~/.docker
Step 4:
$ docker-compose build --pull
Finished!
Never found a way to directly resolve the docker-credential-gcloud issue, but the following got me up and running again. WARNING: the following will delete all your existing docker images and install a bunch of gcloud utilities:
gcloud components install docker-credential-gcr
Restart the terminal completely
docker-credential-gcr configure-docker
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
umount /var/lib/docker/overlay2
rm -rf /var/lib/docker
Restart the terminal completely.
The new version of google-cloud-sdk ships only docker-credential-gcr and no longer docker-credential-gcloud. On the other hand, one of my Python packages always requested docker-credential-gcloud.
The solution was to symlink docker-credential-gcloud to docker-credential-gcr:
ln -s /path/to/google-cloud-sdk/bin/docker-credential-gcr /usr/local/bin/docker-credential-gcloud
ls -l /usr/local/bin | grep docker should now print:
...
docker-credential-gcloud -> /path/to/google-cloud-sdk/bin/docker-credential-gcr
...
Usually, this error indicates that your $PATH variable has been clobbered by a package or program you have recently installed so that the Google Cloud SDK can't be found.
$PATH is altered by many programs when they install, by editing ~/.profile, ~/.bash_profile or ~/.bashrc (or their non-bash equivalents). With a bad $PATH, the Google Cloud SDK is still configured as a Docker credential helper, but its executables can't be found, so we get this error. This assumes you have used the Google Cloud SDK in the past, but if gcloud is configured with your Docker then you probably have. Don't reinstall gcloud or disable it; you already have it on your system and that is fine.
The solution then is to fix your $PATH, not to install anything.
echo $PATH
This should be a pretty long :-delimited list of the directories your executables live in. Do you see a google-cloud-sdk/bin entry in the string? Is the string suspiciously short given everything you have installed on this computer? Do you use NVM but it is missing? Homebrew but it is missing? Try brew from the command line; does it work?
If the answer is "no" to any of the above, inspect the files above to see if there are any new entries at the bottom of each that might have broken things. Did you just install anything new?
Something is clobbering your $PATH and you need to figure out what that is. For me it is usually something to do with Anaconda Python via the conda init command. For you it might be nvm or something else. Figure out what it is and fix the problem. Don't start over with a new $PATH and install the same stuff over again or disable gcloud authentication.
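If google-cloud-sdk/bin really is missing from $PATH, a sketch of restoring it, assuming the SDK lives in your home directory (adjust the path to your actual install location):
source ~/google-cloud-sdk/path.bash.inc    # or path.zsh.inc for zsh
echo $PATH | tr ':' '\n' | grep google-cloud-sdk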
It really seems to be something with the Homebrew Cask. I uninstalled the cask and then reinstalled the Google Cloud SDK by manually downloading the tarball and running the packaged install script, as described in the installation documentation.
Now docker-credential-gcloud is in my path:
$ which docker-credential-gcloud
/Users/moritz/google-cloud-sdk/bin/docker-credential-gcloud
I can't figure out what Google is trying to achieve here. On Linux there is docker-credential-gcloud and on Windows there is docker-credential-gcr.exe, and then there is docker-credential-gcloud.cmd, which calls gcloud auth docker-helper. This is kind of a nightmare if you're trying to write portable build scripts or Gradle rules, because not everything seems capable of finding and calling docker-credential-gcloud.cmd when you exec docker-credential-gcloud... it might work from the DOS prompt, but in general it doesn't.
After a ton of fooling around with .bat scripts, Cygwin scripts, .cmd scripts and so forth, I found the best solution was to go into the gcloud installation and just copy docker-credential-gcr.exe to docker-credential-gcloud.exe ... not a very satisfying solution, but it's the only thing I found that would do the trick.
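The copy step, roughly, assuming a per-user install under AppData (adjust to wherever your gcloud bin directory actually is):
copy "C:\Users\USERNAME\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\docker-credential-gcr.exe" "C:\Users\USERNAME\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\docker-credential-gcloud.exe"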
I got the issue when I tried to SSH from Google Cloud Build into a Compute Engine VM instance, so I had:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'ssh',
         '--project', '$PROJECT_ID',
         '--zone', 'asia-southeast1-b',
         '--strict-host-key-checking=no',
         'username@instance-1',
         '--command', 'sh start.sh']
My start.sh
#!/bin/sh
echo "Started: $(date --iso-8601=seconds)"
docker pull gcr.io/aaa/bbbc/cccc
echo "Finished: $(date --iso-8601=seconds)"
The issue was: How to set PATH when running an ssh command?
https://unix.stackexchange.com/questions/332532/how-to-set-path-when-running-a-ssh-command
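One workaround along those lines is to set PATH explicitly at the top of the script, since a non-interactive ssh command does not load your usual profile. A sketch (the extra directories are assumptions; point them at wherever docker and the credential helper actually live on the VM):
#!/bin/sh
export PATH=$PATH:/usr/bin:/usr/local/bin:/snap/bin
echo "Started: $(date --iso-8601=seconds)"
docker pull gcr.io/aaa/bbbc/cccc
echo "Finished: $(date --iso-8601=seconds)"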
So I just faced the same problem where I am trying to pull an image from GCR to an GCP instance and want to share my solution.
I ran gcloud auth configure-docker and got the warning:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
I applied the accepted answer for this thread and ran gcloud components install docker-credential-gcr and got a long error:
ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager.
Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/
When no solution was working, I uninstalled the Google-provided google-cloud-sdk package that had been installed via snap, installed it with the distro-specific package manager (for me that is apt-get), as instructed on the Installing Google Cloud SDK: Installation options page, re-ran gcloud auth configure-docker, and this time it solved my problem.
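The switch itself was roughly the following (a sketch; the exact repository setup steps are on the linked installation page):
sudo snap remove google-cloud-sdk
# add Google's apt repository as described on the installation page, then:
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud auth configure-docker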
In my case the problem was due to how WSL 1 works with Docker on Windows. At first I had only installed and initialized gcloud in WSL Ubuntu, not in Windows. However, as the Docker daemon is actually run by Windows, you need to install gcloud for Windows as well (and don't forget to run all of the inits and authorizations there).
On Windows 10/11, you need to ensure that C:\Users\USERNAME\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\ is added to your system $PATH environment variable. It may not have been added if the Google Cloud SDK was not able to add it during GCloud installation. So add it manually like this:
Windows Task Bar ➔ Press the search icon 🔍 or the search bar
Type "environment" ➔ and click on "Edit the System Environment Variables" (ensure that you have Administrator access)
At the bottom of the dialog, click the Environment Variables... button
System Variables ➔ click Path ➔ Edit... ➔ New ➔ paste in C:\Users\USERNAME\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\ (replace "USERNAME" with your username)
Close and restart any open Command Prompt windows.
Then verify on the Git Bash for Windows console:
Optional: Note that the AppData folder is hidden by default, so you may want to unhide AppData first, to see its contents.
Restart the Git Bash Terminal window
echo $PATH ➔ This should print a long string that contains: :/c/Users/USERNAME/AppData/Local/Google/Cloud SDK/google-cloud-sdk/bin
where docker-credential-gcloud ➔ This should print C:\Users\USERNAME\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\docker-credential-gcloud.cmd

Wkhtmltopdf in a Dokku app?

I have a NodeJS/Express Dokku container. I'm trying to use a node module which just runs the wkhtmltopdf command from shell, but it can't find wkhtmltopdf.
Anyone have any experience with this?
You need to check how wkhtmltopdf was installed in that image.
As mentioned in node-wkhtmltopdf issue 32:
The wkhtmltopdf command is executed as a shell command on non-Windows systems.
Make sure the /usr/local/bin directory is in your $PATH variable. Do this by running:
$ sh
sh-3.2$ which wkhtmltopdf # Or try:
sh-3.2$ echo $PATH
sh-3.2$ exit
(In your case, you can do a sudo docker exec -it <containerIdOrName> sh)
The same issue adds:
What I ended up doing was downloading the dmg directly from wkhtmltopdf and that seemed to do the trick.
That means you might have to create a new image from the current one with wkhtmltopdf installed in it (the dmg mentioned in the issue is for macOS; for a Linux-based Dokku container you would use the corresponding Linux package).
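In a Dokku/Docker context, that means baking wkhtmltopdf into the image. A sketch of the Dockerfile line, assuming a Debian/Ubuntu-based base image:
RUN apt-get update && apt-get install -y wkhtmltopdf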
As for what installing wkhtmltopdf as a service container means (a question raised by jsonfry): openlabs/docker-wkhtmltopdf-aas illustrates the installation process.
I ran into the same issue as you did. I didn't want to run wkhtmltopdf in another container, nor did I want to change the code to use remote calls. Since installing wkhtmltopdf via the apt-get plugin may result in a package that throws errors, I have created a new plugin that should set up wkhtmltopdf in the Dokku container for you.
It is licensed using MIT license so feel free to do whatever you want. Hopefully it will help somebody.
URL: https://github.com/mbriskar/dokku-wkhtmltopdf
