Lando links to local site are red - docker

I'm working on an existing project that used to work on my Windows 10 machine and continues to work on a colleague's Mac.
Calling
lando start -vvv
launches Docker and starts the containers,
but when it checks the URLs for the site, Lando prints:
checking to see if https://localhost:52144 is ready.
diversiview 14:53:11 DEBUG ==> scan failed for https://localhost:52144 code=undefined, message=Request failed with status code 502, status=502, server=nginx, date=Wed, 09 Nov 2022 03:53:11 GMT, content-type=text/html, content-length=150, connection=close
diversiview 14:53:11 DEBUG ==> Response for https://localhost:52144, returned http code we should retry for. Setting to bad
and the nginx docker container prints:
2022/11/09 03:52:58 [error] 238#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.2, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.20.0.6:9000", host: "diversiview.lndo.site"
I tried on a new computer and got the same problem, which suggests something has changed in the project. But I've tried backups of the project from when it was working, and they don't work anymore.
I tried Docker both with WSL and without.
I've reinstalled all the relevant programs.
I don't know enough about Lando or Docker to know what other info to provide.
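For anyone debugging a similar 502 from Lando's URL scanner, the nginx "Connection refused ... fastcgi://...:9000" message points at the PHP service behind nginx not coming up. A few diagnostic commands that can narrow this down; the service name appserver below is a common Lando default and may differ in your .lando.yml:

```shell
# List the services and URLs Lando thinks it is running
lando info

# Tail logs for the PHP service behind the fastcgi upstream
# (service name is an assumption; check your .lando.yml)
lando logs -s appserver

# Rebuild the containers from scratch in case a base image changed
lando rebuild -y
```

If the appserver logs show a crash on startup (bad PHP version, missing extension, failed build step), that would explain why even old backups stopped working: the upstream base images can change even when the project files do not.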

Related

How to start the Web UI of Chirpstack-Application?

OS: Windows 10 Pro
The whole setup for properly starting up the Web UI seems confusing to me.
There's the source code for the chirpstack-application-server and its finished Docker image. Running docker-compose up in the source code directory starts all the necessary backend services, but not the UI. In the source code there's a /ui directory containing the UI. Starting this through npm works up until after this console log:
Note that the development build is not optimized. To create a
production build, use npm run build.
After this I get this proxy error:
Proxy error: Could not proxy request /swagger/internal.swagger.json
from localhost:3000 to http://localhost:8080/. See
https://nodejs.org/api/errors.html#errors_common_system_errors
for more information (ECONNREFUSED).
Then there's the chirpstack-application-server precompiled binary. I started this one by first creating the config file with chirpstack-application-server configfile > chirpstack-application-server.toml and then starting the executable ./chirpstack-application-server.exe. Here I just get a connection error to PostgreSQL:
time="2020-09-17T11:09:08+02:00" level=warning msg="storage: ping
PostgreSQL database error, will retry in 2s" error="dial tcp
[::1]:5432: connectex: No connection could be made because the target
machine actively refused it."
So what am I missing to get the UI up and running locally?
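Both errors describe the same pattern: the thing you started cannot reach a backend on localhost. The npm dev server proxies to localhost:8080 (no backend listening there), and the standalone binary dials localhost:5432 (no PostgreSQL listening there). One sketch of a fix for the binary route is to point its generated TOML at whichever PostgreSQL instance is actually running; the credentials and database name below are placeholders, not values your machine necessarily uses:

```
# chirpstack-application-server.toml (excerpt)
# dsn values are placeholders -- match your real PostgreSQL
# user, password, host, and database
[postgresql]
dsn="postgres://chirpstack_as:dbpassword@localhost/chirpstack_as?sslmode=disable"
```

Alternatively, since docker-compose up in the source tree already starts the backend services, keeping that running while you start the UI through npm should satisfy the localhost:8080 proxy as well.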

Unable to create machine in docker

I've just installed Docker on my Windows 7 machine. When I start Docker QuickStart, I get the following error, which seems to occur while creating the machine:
Creating machine...
(default) Unable to get the latest Boot2Docker ISO release version: Get https://api.github.com/repos/boot2docker/boot2docker/releases/latest: dial tcp 192.30.252.124:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
(default) Copying C:\Users\robot\.docker\machine\cache\boot2docker.iso to C:\Users\robot\.docker\machine\machines\default\boot2docker.iso...
(default) Creating VirtualBox VM...
(default) Creating SSH key...
Error attempting heartbeat call to plugin server: read tcp 127.0.0.1:60733->127.0.0.1:60732: wsarecv: An existing connection was forcibly closed by the remote host.
Error attempting heartbeat call to plugin server: connection is shut down
Error attempting heartbeat call to plugin server: connection is shut down
Error attempting heartbeat call to plugin server: connection is shut down
Error attempting heartbeat call to plugin server: connection is shut down
Error creating machine: Error in driver during machine creation: read tcp 127.0.0.1:60733->127.0.0.1:60732: wsarecv: An existing connection was forcibly closed by the remote host.
Looks like something went wrong... Press any key to continue...
There is a similar issue in docker/machine/issues/2773.
Try and see if the issue persists when creating a machine yourself instead of using quick-start:
Find where docker-machine.exe has been installed (or copy the latest released one into your %PATH%) and use that from a regular CMD session:
First test the existing machine:
# find the name of the machine created.
docker-machine ls
docker-machine env --shell cmd <nameOfTheMachine>
docker-machine ssh <nameOfTheMachine>
Then try creating a new one:
docker-machine create -d virtualbox <aNewMachine>
docker-machine env --shell cmd <aNewMachine>
docker-machine ssh <aNewMachine>
I do not have a solution, but I found the root cause.
I had installed boot2docker and had been using it for months, creating all my vbox images in the same folder the whole time.
One fine day I decided to archive my machines and changed the folder in which I was creating the vbox images. It started giving this weird error. I reverted back to my archive and tested again, and it started working fine.
The difference I found between the two setups was that in the archived folder it skipped the CA cert creation step and directly created the machine, while in the new folder it created a cert and then created the machine. It looks like the server doesn't like the new certs!
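If freshly generated certs are what the machine rejects, it may be worth forcing docker-machine to rebuild them for the affected machine rather than reverting folders; the machine name below is a placeholder:

```shell
# Recreate the TLS certs for an existing machine and
# reconfigure the daemon to trust them
docker-machine regenerate-certs <nameOfTheMachine>

# Then verify the machine answers again
docker-machine env --shell cmd <nameOfTheMachine>
```

This is only a sketch of a workaround, not a confirmed fix for the folder-change case described above.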

ruby on rails nginx with passenger can't find views

I'm putting together my first ROR app on NginX and Passenger. I'm following tutorials all over the web and I'm getting the following errors when I try to go to my home page.
I created a controller home_controller.rb using the command line. I also created the views at the command line which made a default ERB file.
The nginx service is running, and I start Passenger manually via "passenger start". I can see Passenger accepting the incoming HTTP requests, as shown in the errors below. What's strange, though, is that it's looking for home/index in the public folder of my Ruby app. I used "rails generate [controller/view] foo", which writes files outside of public.
My nginx config is configured to point to the public folder of my ROR project.
I'm using ROR 2.0.0, Phusion Passenger 4.0.29, and nginx 1.1.19.
Am I missing something in Passenger to tell it where the controllers/views/etc. are?
mj
2013/12/13 15:20:12 [error] 18305#0: *4 "/usr/development/sandbox/app/public/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: _, request: "HEAD / HTTP/1.1", host: "0.0.0.0"
2013/12/13 15:20:13 [error] 18305#0: *5 "/usr/development/sandbox/app/public/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET HTTP/1.1", host: "localhost:3000"
2013/12/13 15:20:18 [error] 18305#0: *5 open() "/usr/development/sandbox/app/public/home/index" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /home/index HTTP/1.1", host: "localhost:3000"
2013/12/13 15:27:11 [error] 18305#0: *13 open() "/usr/development/sandbox/app/public/home/index" failed (2: No such file or directory), client: 127.0.0.1, server: _, request: "GET /home/index HTTP/1.1", host: "localhost:3000"
Edit: if I use "rails server", everything seems to work fine.
You are using Phusion Passenger in the wrong way.
Phusion Passenger provides 3 modes: a standalone mode (one that runs as a standalone web server), an Nginx integration mode, and an Apache integration mode. By running passenger start, you are using its standalone mode.
You also have Nginx running, and from your logs it looks like you are accessing Nginx. But that doesn't do anything for your app: Passenger is running standalone and is not running inside Nginx.
In a diagram:
Nginx <--------------------- [Your request]
(Not integrated with Passenger,
so doesn't know what to do with
your request)
Passenger Standalone
(waiting for your request,
but you never sent one
to it)
So here's how it looks if you use rails server:
Nginx
(not receiving any
requests from you)
rails server <--------------- [Your request]
What you actually want is to access Passenger Standalone, which -- just like rails server would -- listens on port 3000. In fact, Passenger Standalone told you during startup that it's listening on port 3000.
Nginx
(not receiving any
requests from you;
so you may as well
disable it)
Passenger Standalone <--------------- [Your request]
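If you do want your requests to go through Nginx instead, you need the Nginx integration mode: an Nginx built with the Passenger module (e.g. via passenger-install-nginx-module) plus a server block that points at the app's public folder. A sketch, where the passenger_root and passenger_ruby paths are placeholders for your installation:

```
# nginx.conf (excerpt) -- paths are examples, not your real ones
http {
    passenger_root /path/to/passenger;
    passenger_ruby /path/to/ruby;

    server {
        listen 80;
        server_name example.local;
        # root must point at the app's public/ folder, not the app root
        root /usr/development/sandbox/app/public;
        passenger_enabled on;
    }
}
```

With passenger_enabled on, Nginx hands non-static requests like /home/index to the Rails app instead of looking for them as files under public/, which is exactly what the error log above shows it doing now.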

Strange issue with unicorn and nginx caused 502 error

We have a Ruby on Rails application running on a VPS. Last night nginx went down and responded with "502 Bad Gateway". The nginx error log contained lots of the following messages:
2013/10/02 00:01:47 [error] 1136#0: *1 connect() to
unix:/app_directory/shared/sockets/unicorn.sock failed (111:
Connection refused) while connecting to upstream, client:
5.10.83.46, server: www.website.com, request: "GET /resource/206 HTTP/1.1", upstream:
"http://unix:/app_directory/shared/sockets/unicorn.sock:/resource/206",
host: "www.website.com"
These errors started suddenly; the previous error message was from 5 days earlier.
So the problem was in the unicorn server. I opened the unicorn error log and found only some info messages unrelated to the problem. The production log was useless too.
I tried restarting via service nginx restart, but it didn't help. There were also no pending unicorn processes.
The problem was solved when I redeployed the application. And that is strange, because I had deployed the same version of the application 10 hours before the server went down.
I'm looking for any suggestions how to prevent such 'magic' cases in future. Appreciate any help you can provide!
Looks like your unicorn server wasn't running when nginx tried to access it.
This can be caused by a VPS restart, an exception in the unicorn process, or the unicorn process being killed due to low free memory. (IMHO a VPS restart is the most likely reason.)
Check unicorn by
ps aux | grep unicorn
Also you can check server uptime with
uptime
Then you can:
add a script that starts unicorn on VPS boot
add it as a service
run a monitoring process (like monit)
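A minimal monit sketch for the last option, assuming unicorn writes a pidfile at the path shown; all paths and the start command are assumptions to adapt to your deploy:

```
# /etc/monit/conf.d/unicorn (sketch -- paths are assumptions)
check process unicorn with pidfile /app_directory/shared/pids/unicorn.pid
  start program = "/bin/sh -c 'cd /app_directory/current && bundle exec unicorn -c config/unicorn.rb -E production -D'"
  stop program  = "/bin/sh -c 'kill -QUIT `cat /app_directory/shared/pids/unicorn.pid`'"
```

With this in place, monit restarts unicorn whenever the process behind the socket dies, so nginx stops seeing "Connection refused" on the upstream socket.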

Deploying Rails app to VPS using nginx. Error when trying to access dynamic pages

I'm trying to deploy a Ruby on Rails app to my VPS for the first time. I'm following this guide.
Everything seems to be working on my local machine.
I'm currently at the "Testing the VPS" part of the guide.
On my VPS, static pages that I have in my public folder work fine. However, I'm receiving an error message when I try to access a dynamic page.
When trying to go to http://server-ip-address.com/users (as specified by the guide) I am encountering this error from nginx:
An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
The logs at /opt/nginx/logs/error.log state:
2013/07/08 20:47:32 [crit] 17760#0: *151 connect() to /tmp/passenger.1.0.12435/generation-0/request failed (2: No such file or directory) while connecting to upstream, client: client-ip-address, server: server-ip-address, request: "GET /user HTTP/1.1", upstream: "passenger:/tmp/passenger.1.0.12435/generation-0/request:", host: "server-ip-address"
I'm not sure how to fix this. Any help?
Thank you!
This is a bug in 4.0.7. Fixed in 4.0.8, which was released an hour ago.
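Upgrading in place would look roughly like this; the module-recompile step applies if Passenger was installed as a gem with the Nginx integration (adjust if you installed it another way):

```shell
# Install the fixed Passenger release
gem install passenger -v 4.0.8

# Recompile Nginx with the updated Passenger module
passenger-install-nginx-module
```

After recompiling, restart nginx so it picks up the new module.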
