Plumber API works on server but not when I set up with systemd - plumber

So I have an API that works fine locally as well as on the server if I run the plumber commands manually, by which I mean SSH-ing into the server and running:
r <- plumb("plumber.R")
r$run(port=8000, host = "0.0.0.0")
It looks like this:
#* @serializer contentType list(type="application/html")
#* @get /test
function(res){
include_rmd("test.Rmd", res)
}
#* Echo the parameter that was sent in
#* @param msg The message to echo back.
#* @get /echo
function(msg=""){
list(msg = paste0("The message is: '", msg, "'"))
}
They both work with no problem. But when I keep them alive on the server with systemd, only the /echo one works. The other one just says "An exception occurred."
The systemd setup looks like this:
[Unit]
Description=Plumber API
# After=postgresql
# (or mariadb, mysql, etc if you use a DB with Plumber, otherwise leave this commented)
[Service]
ExecStart=/usr/bin/Rscript -e "api <- plumber::plumb('/home/chrisbeeley/api/plumber.R'); api$run(port=8000, host='0.0.0.0')"
Restart=on-abnormal
WorkingDirectory=/home/chrisbeeley/api/
[Install]
WantedBy=multi-user.target
I can't find error logs anywhere and I'm very confused as to why it should work when I run the commands on the server but not when I use systemd.
I'm using Ubuntu 16.04.
Since I posted this last night I've deployed the whole thing on a totally separate server which is also running 16.04 and it shows the exact same behaviour on there.
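(For the missing logs: with a unit like the one above, systemd routes the script's stdout/stderr to the journal, so any R error should be retrievable there; the unit name plumber.service below is an assumption.)

```shell
# show today's output from the unit, including any R errors/tracebacks
sudo journalctl -u plumber.service --since today
# or follow live while reproducing the failing request
sudo journalctl -u plumber.service -f
```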
Edit: I've also tried the following, based on code in the plumber documentation that returns a PDF, and that also returns "an exception occurred":
#* @serializer contentType list(type="text/html; charset=utf-8")
#* @get /html
function(){
tmp <- tempfile()
render("test_report.Rmd", tmp, output_format = "html_document")
readBin(tmp, "raw", n=file.info(tmp)$size)
}

Well, I never solved this. Instead I tried it with pm2, as detailed here: https://www.rplumber.io/docs/hosting.html#pm2
I was a bit put off by the npm dependency, which seemed like baggage, but it works like a charm.
So if anyone does Google this with a similar problem, I advise you to use pm2. It took me approximately 5 minutes to have it up and running :-)
I should add that although I haven't used them yet, I gather pm2 will create log files too, which sounds useful.
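For anyone following the same route, the pm2 setup from the linked docs amounts to wrapping the plumb call in a small launcher script and handing it to pm2; the file name and the --name label below are illustrative, not from the docs:

```shell
# create a small launcher script (path and name are assumptions)
cat > /home/chrisbeeley/api/run-api.R <<'EOF'
#!/usr/bin/env Rscript
pr <- plumber::plumb('/home/chrisbeeley/api/plumber.R')
pr$run(port = 8000, host = '0.0.0.0')
EOF
chmod +x /home/chrisbeeley/api/run-api.R

pm2 start /home/chrisbeeley/api/run-api.R --interpreter="Rscript" --name plumber-api
pm2 save                 # persist the process list ('pm2 startup' wires it to boot)
pm2 logs plumber-api     # the log files mentioned above live under ~/.pm2/logs
```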


Confused by Docker directory structure (file not found error)

I am getting a file-not-found error, and even though I manually create the file in the Docker container it still reports as not found. Solving this is, of course, complicated by the fact that I am new to Docker and still learning how everything in the Docker world works.
I am using Docker Desktop with a .net core application.
In the .NET application I am looking for the file to use as an email template. All of this works when I run outside a Docker container, but inside Docker it fails with file not found.
public async Task SendEmailAsyncFromTemplate(...)
{
...snipped for brevity
string path = Path.Combine(Environment.CurrentDirectory, $@"Infrastructure\Email\{keyString}\{keyString}.cshtml");
_logger.LogInformation("path: " + path);
// I added this line because when I connect to the docker container the root
// appears to start with Infrastructure, so I chopped the /app part off
var fileTemplatePath = path.Replace(@"/app/", "");
_logger.LogInformation("filePath: " + fileTemplatePath);
The container log for the above is
[12:40:09 INF] path: /app/Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] filePath: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
As mentioned in the comments I did this because when I connect to the container the root shows Infrastructure as the first folder.
So naturally I browse into Infrastructure and the Email folder is missing. I have asked a separate SO question here about why my folders aren't copying.
OK, my Email files and folders under Infrastructure are missing. So to test this out, I manually created the directory structure and created the cshtml file using this command:
docker exec -i addaeda2130d sh -c "cat > Infrastructure/Email/ConfirmUser/ConfirmUser.cshtml" < ConfirmUser.cshtml
I chmod the file permissions to 777 just to make sure the application has write access and then added this debugging code.
_logger.LogInformation("ViewRender: " + filename);
try
{
_logger.LogInformation("Before FileOpen");
var fileExists = File.Exists(filename);
_logger.LogInformation("File exists: " + fileExists);
var x = File.OpenRead(filename);
_logger.LogInformation("After FileOpen: " + x.Name);
As you can see from the logs it reports the file does NOT exist even though I just created it.
[12:40:09 INF] ViewRender: Infrastructure\Email\ConfirmUser\ConfirmUser.cshtml
[12:40:09 INF] Before FileOpen
[12:40:09 INF] File exists: False
Well, the only logical conclusion is that I don't know or understand what is going on, which is why I am reaching out for help.
I have also noted that if I stop the container (not recreate, just stop) and then start it, all the directories and files I created are gone.
So... are these directories/files in memory and not on "disk", and do I need to commit the changes somehow?
That would seem to make sense, as the application code is looking for the files on disk, and if they were in memory they wouldn't be found, but in googling, Pluralsight courses, etc. I can't find any mention of this.
Where can I start looking in order to figure this out?
A forward slash '/' in a path is different from a backslash '\'. Just change the direction of your slashes and it'll work.
I tried this program in my docker container and it worked fine.
using System;
using System.IO;
// backslashes don't work on Linux:
// string path = Path.Combine(Environment.CurrentDirectory, @"files\hello\hello.txt");
string path = Path.Combine(Environment.CurrentDirectory, @"files/hello/hello.txt");
Console.WriteLine($"Path: {path}");
string text = System.IO.File.ReadAllText(path);
Console.WriteLine(text);
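The same point can be reproduced from a plain Linux shell, since '\' is an ordinary filename character there rather than a separator (the demo directory names below are made up):

```shell
mkdir -p demo/files/hello
echo hi > demo/files/hello/hello.txt
# a forward-slash path resolves against the real directory tree
test -f 'demo/files/hello/hello.txt' && echo "forward slashes: found"
# a backslash path is looked up as one literal file name and misses
test -f 'demo/files\hello\hello.txt' || echo "backslashes: not found"
```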

How to rollback database in docker container using elixir phoenix releases and the example MyApp.Release.rollback in the guides

I cannot figure out how to roll back a database when going through a Phoenix app running in a Docker container. I am trying to simulate locally what it would be like when migrating on a remote server.
I am running it locally by running:
docker run -it -p 4000:4000 -e DATABASE_URL=ecto://postgres:postgres@host.docker.internal/my_app_dev -e SECRET_KEY_BASE=blahblah my-app-tag:v1
I view the running containers with:
docker ps
I bash into the container
docker exec -it 8943918c8f4f /bin/bash
cd into app/bin
cd bin
try to rollback
./my_app rpc 'MyApp.Release.rollback(MyApp.Repo, "20191106071140")'
=> 08:43:45.516 [info] Already down
If this did indeed work when running through the application it should blow up as I do different things. But it doesn't.
If I try eval
./my_app eval 'MyApp.Release.rollback(MyApp.Repo, "20191106071140")'
=>
08:46:22.033 [error] GenServer #PID<0.207.0> terminating
** (RuntimeError) connect raised KeyError exception: key :database not found. The exception details are hidden, as they may contain sensitive data such as database credentials. You may set :show_sensitive_data_on_connection_error to true when starting your connection if you wish to see all of the details
(elixir) lib/keyword.ex:393: Keyword.fetch!/2
(postgrex) lib/postgrex/protocol.ex:92: Postgrex.Protocol.connect/1
(db_connection) lib/db_connection/connection.ex:69: DBConnection.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
(stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: nil
** (EXIT from #PID<0.163.0>) shutdown
I am trying to ensure I know how to deploy an application to a remote server (Heroku, AWS) and have the application automatically migrate on every deploy, but also have the option to run a command to roll back one step at a time.
I am not finding any information. The debugging above is the first step in creating this migrate/rollback functionality on a remote server, testing on my local machine first.
The migrate/rollback code is taken directly from https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands
Any help/direction would be greatly appreciated.
Thank you
In the first place, the rpc call should succeed. Make sure you indeed have the migration in question up before running my_app rpc. Note that the second argument is the version to revert to, not the migration to revert.
Regarding the eval: one should start, or at least load, the application before any attempt to access its config. As per the documentation:
You can start an application by calling Application.ensure_all_started/1. However, if for some reason you cannot start an application, maybe because it will run other services you do not want, you must at least load the application by calling Application.load/1. If you don't load the application, any attempt at reading its environment or configuration may fail. Note that if you start an application, it is automatically loaded before started.
For the migration to succeed, one needs the Ecto application (Ecto.Adapters.SQL.Application) started and your application loaded (to access configs).
That said, something like this should work.
def my_rollback(version) do
  Application.load(:my_app)
  Application.ensure_all_started(:ecto_sql)

  Ecto.Migrator.with_repo(MyApp.Repo,
    &Ecto.Migrator.run(&1, :down, to: version))
end
And call it as
./my_app eval 'MyApp.Release.my_rollback(20191106071140)'
Still, rpc should start the required applications out of the box (and it indeed does, according to the message you get back), so I'd suggest you triple-check that the migration you are requesting to down is already up and that you pass the proper version to downgrade to.
There were two issues here, and thanks to @aleksei-matiushkin I got it working.
The first issue was not having Application.load(:my_app) in the function.
The second issue was that I was calling the rollback functions (both mine and @aleksei-matiushkin's) with a string and not an int. Now I call it like: ./my_app eval 'MyApp.Release.my_rollback(20191106071140)'
The file now looks like this:
defmodule MyApp.Release do
  @app :my_app

  def migrate do
    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def rollback(repo, version) do
    setup_for_rollback()
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  def my_rollback(version) do
    setup_for_rollback()
    rollback(MyApp.Repo, version)
  end

  defp setup_for_rollback() do
    Application.load(@app)
    Application.ensure_all_started(:ecto_sql)
  end

  defp repos do
    Application.load(@app)
    Application.fetch_env!(@app, :ecto_repos)
  end
end
I am not sure if this is an idiomatic implementation. I did not have any issues excluding Application.ensure_all_started(:ecto_sql), but since it was recommended I guess I'll leave it in.

VBS printer script executing error

I have some trouble executing/using the VBS scripts linked to printers. They are located in %windir%\System32\Printing_Admin_Scripts.
The objective is to schedule a weekly print task to preserve the ink cartridges.
Looking at the scripts, everything was available for me to create this task.
The main script to use is prnqctl.vbs.
Before creating my task, I tried to test the script, and this is what I got (sorry for the French version; I will try to update the screenshot in English later):
There is obviously something wrong.
I have tried to google the error code; nothing conclusive.
I have tried to run the script in admin mode and also under an admin session: same problem.
I have done some research on CIMWin32; it seems to be a DLL and I can find it in several locations on my filesystem.
My OS is W8.1.
If anybody has a suggestion or solution, I'm interested.
==>cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\prnqctl.vbs -e
Unable to get printer instance. Error 0x80041002 Not found
Operation GetObject
Provider CIMWin32
Description
Win32 error code
The error culprit is clear: you should provide a valid -p argument. It's a mandatory parameter for the -e operation:
==>cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\prnqctl.vbs -e -p "Fax"
Success Print Test Page Printer Fax
==>
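To get from there to the weekly task the question is actually after, the working call can be wrapped in a scheduled task; the task name, schedule, and printer name ("Fax", taken from the test above) are all illustrative:

```shell
schtasks /Create /TN "WeeklyTestPage" /SC WEEKLY /D MON /ST 09:00 /TR "cscript //B C:\Windows\System32\Printing_Admin_Scripts\en-US\prnqctl.vbs -e -p \"Fax\""
```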

lost logout functionality for grails app using spring security

I have a grails app that moved to a new subnet with a change to the DNS. As a result, the logout functionality stopped working. When I inspect the network using Chrome, I get this message under request headers: CAUTION: Provisional headers are shown.
This means the request to retrieve that resource was never made, so the headers being shown are not the real thing.
The logout function is executing this action
package edu.example.performanceevaluations
import org.codehaus.groovy.grails.plugins.springsecurity.SpringSecurityUtils
class LogoutController {
def index = {
// Put any pre-logout code here
redirect uri: SpringSecurityUtils.securityConfig.logout.filterProcessesUrl // '/j_spring_security_logout'
}
}
I would greatly appreciate a direction to look in.
As suggested by that link, run chrome://net-internals and see if you get anywhere.
If you are still lost, I would suggest two-way debugging. If you have Linux, find something related to your traffic and run something like tcpdump, or, if that's too complex, install and run ngrep -W byline -d any port 8080 -q and look for the pattern to see what is going on.
With ngrep/tcpdump, look for that old IP or subnet in the entire traffic and see if anything is still trying to get through (this is all best done on the grails app server, of course; possibly on port 8080 or whatever other clear-text port your app may be running on).
Look for your IP in the Apache logs: does it hit the actual server when you log out?
Has the application been restarted since the subnet change? It could have cached the old endpoint in the running Java process:
pgrep java | awk '{print "netstat -plant | grep "$1}' | /bin/sh
or
pgrep java|awk '{print " lsof -p "$1" |grep -i listen"}'|/bin/sh
I personally think something somewhere needs to be restarted, since it's holding on to a cache of something.
Also check the hosts files of any end machines involved and ensure nothing has the previous subnet physically configured in there.

RAILS, CUCUMBER: Getting the testing server address

While running a cucumber test, I need to know the local testing server address. It will be something like "localhost:47632". I've searched the ENV but it isn't in there, and I can't seem to find any other variables that might have it. Ideas?
I believe that the port is dynamically generated on test runs. You can use OS-level tools to inspect what connections are opened by the process and glean the port that way. I do this on my Ubuntu system infrequently, so I can't tell you off the top of my head which tool does it. Netstat, maybe? I always have to go out and google for it, so consider this more of a hint than a complete answer.
Ah, to be clearer: I put a debug breakpoint in, and when it breaks I then use the OS-level tools to see what port the test server is running on at that moment in time. How to discover it predictively? No idea, sorry.
here's what I use:
netstat -an | grep LISTEN
(Answering my own question just so that the code formatting will be correct)...
Using jaydel's idea to use netstat, here's the code. I extract the line from netstat that has the current pid. (Probably not the most elegant way to do this, but it works)
value = %x( netstat -l -p --tcp )
pid = $$.to_s
local_port = ""
value.split("\n").each do |i|
  if i.include?(pid)
    m = i.match(/\*:(\d+)/)
    local_port = m[1].to_s
  end
end
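The pid/port extraction can also be pulled into a small standalone helper that works on the captured netstat text directly; the helper name is made up, and the format assumed is the usual `proto ... *:PORT ... PID/progname` layout of `netstat -l -p --tcp`:

```ruby
# Hypothetical helper: find the port a given pid is listening on,
# given the text output of `netstat -l -p --tcp`.
def listening_port_for(netstat_output, pid)
  netstat_output.each_line do |line|
    # only consider lines owned by our pid ("PID/progname" column)
    next unless line.include?("#{pid}/")
    m = line.match(/\*:(\d+)\s/)
    return m[1].to_i if m
  end
  nil
end

# Usage inside a step definition:
#   port = listening_port_for(%x(netstat -l -p --tcp), $$)
```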
