Vaadin: upgraded from v18 to v21: web UI does not load

I upgraded an existing (and working!) Vaadin application from v18.0.2 to v21.0.4. With the new version the server-side application starts up as usual and initializes OK, and the first request triggers the frontend compilation (which also seems to work OK, at least the log shows no abnormalities or errors), but the UI just hangs and never finishes loading (there is a thin blue progress bar at the top of the page which quickly reaches about 50% of the width, then gets slower and slower and finally stalls at about 90% of the screen width).
I don't have the slightest clue in which direction to check or analyze this. Any suggestion or hint as to what could be wrong here would be highly appreciated!
If I should attach any config or log details to help analyze this, let me know!
Later addendum:
I attach my vaadin_dance.cmd here:
@echo off
:package_entries
set fn=package.json
echo Step 1: Going to remove unsupported Vaadin v19+ entries from %fn%:
pause
rem let user see what we do:
@echo on
type %fn% | findstr /V /C:"@vaadin/application-theme-plugin" > %fn%_1
type %fn%_1 | findstr /V /C:"@vaadin/stats-plugin" > %fn%_2
type %fn%_2 | findstr /V /C:"@vaadin/theme-live-reload-plugin" > %fn%_3
type %fn%_3 | findstr /V /C:"@vaadin/theme-loader" > %fn%_4
rem remove an already existing backup - just in case (if there were one the cp below won't work)
rm %fn%~
rem rename back to original and keep a backup:
cp -b -f %fn%_4 %fn%
rem delete the temp files:
rm %fn%_?
@echo off
echo unsupported Vaadin v19+ entries removed from %fn%
:local_stuff
echo Step 2: Going to remove project local stuff:
pause
rem let user see what we do:
@echo on
rmdir /S /Q .\target
rmdir /S /Q .\node_modules
rmdir /S /Q .\frontend\generated
rm package.json
rm package-lock.json
rm pnpm-lock.yaml
rm pnpmfile.js
rm tsconfig.json
rm types.d.ts
rm webpack.config.js
rm webpack.generated.js
@echo off
echo project local vaadin-generated stuff removed.
:global_stuff
echo Step 3: Going to remove global stuff: removing pnpm stuff
pause
rem let user see what we do:
@echo on
rm -r -f %USERPROFILE%\.pnpm-debug.log
rm -r -f %USERPROFILE%\.pnpm-state.json
rmdir /S /Q %USERPROFILE%\.vaadin
rmdir /S /Q %USERPROFILE%\.pnpm-store
rem just in case - I encountered them here, too:
rmdir /S /Q D:\.pnpm-store
rmdir /S /Q U:\.pnpm-store
@echo off
echo global vaadin-installed stuff removed.
rem clear (and preload) default repository:
:repo_stuff
echo Step 4: Going to empty m2repository!
pause
rem let user see what we do:
@echo on
rem strangely enough I again and again got "access denied" on certain .jars ||-( So we first take ownership...
takeown /R /F %USERPROFILE%\.m2\m2repository
rem ... before removing the stuff:
rm -r -f %USERPROFILE%\.m2\m2repository\*
@echo off
echo m2repository cleaned.
echo.
pause
The process with the above .cmd file is as follows: I first run step 1, then I stop it and try to rebuild (in a different cmd window). If that does not work, I restart it from the beginning and run steps 1 & 2, then I stop and try to rebuild, etc. At the latest after steps 1, 2, 3 & 4 I was (at least so far) always able to rebuild and execute my application. That's at least when building with or reverting to v18.0.3. With v21.x I haven't been successful so far. :-(
Second addendum with the console output:
The application starts up OK (i.e. without any error message) up to the point where I see "Tomcat started on port(s): ..."
At that point I direct my browser to that port, which triggers the initialization of the DispatcherServlet. That page never loads and the browser times out, but there is NO error message or anything giving a hint regarding the type or cause of the problem on the console:
...
2021-12-17 19:36:03,459 INFO [main] org.springframework.boot.web.embedded.tomcat.TomcatWebServer: Tomcat started on port(s): 8085 (http) with context path ''
2021-12-17 19:36:23,678 INFO [http-nio-8085-exec-1] org.apache.juli.logging.DirectJDKLog: Initializing Spring DispatcherServlet 'dispatcherServlet'
2021-12-17 19:36:23,682 INFO [http-nio-8085-exec-1] org.springframework.web.servlet.FrameworkServlet: Initializing Servlet 'dispatcherServlet'
2021-12-17 19:36:23,689 INFO [http-nio-8085-exec-1] org.springframework.web.servlet.FrameworkServlet: Completed initialization in 2 ms
2021-12-17 19:36:26,103 WARN [http-nio-8085-exec-1] org.apache.juli.logging.DirectJDKLog: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [2,172] milliseconds.
2021-12-17 19:36:26,103 WARN [http-nio-8085-exec-3] org.apache.juli.logging.DirectJDKLog: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [1,457] milliseconds.
2021-12-17 19:36:26,179 INFO [http-nio-8085-exec-4] com.vaadin.flow.spring.SpringInstantiator: The number of beans implementing 'I18NProvider' is 0. Cannot use Spring beans for I18N, falling back to the default behavior
<Console output stops here and browser times out>
Unfortunately there seems to be no way to attach screenshots here, so I can't provide the output of the Web Developer's Network tab...
The Browser Inspector Console displays:
Fri Dec 17 2021 19:52:01 GMT+0100 (Central European Standard Time) Atmosphere: unload event vaadinPush-min.js:1:40213
Vaadin push loaded vaadinPush-min.js:1:44231
Failed to register/update a ServiceWorker for scope ‘http://localhost:8085/’: Bad Content-Type of ‘text/html’ received for script ‘sw-runtime-resources-precache.js’. Must be a JavaScript MIME type.
Uncaught (in promise) TypeError: ServiceWorker script at http://localhost:8085/sw.js for scope http://localhost:8085/ threw an exception during script evaluation.
Path '/login' is not properly resolved due to an error. Resolution had failed on route: '(.*)' vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58:17000
Uncaught (in promise) TypeError: class heritage e is not an object or null
to http://localhost:8085/VAADIN/build/vaadin-3-1a44b245d20aa3c33130.cache.js:1
266 http://localhost:8085/VAADIN/build/vaadin-3-1a44b245d20aa3c33130.cache.js:765
r http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:1
promise callback*imports http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
flowInit http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
async*get action/< http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
Z http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
__resolveRoute http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
promise callback*__resolveRoute http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
resolveRoute http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
a http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
a http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
promise callback*a http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
resolve http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
promise callback*resolve http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
render http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
__onNavigationEvent http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
setRoutes http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
<anonymous> http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:58
r http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:1
<anonymous> http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:1
<anonymous> http://localhost:8085/VAADIN/build/vaadin-bundle-aec7d8b0cb06e0cbb6bd.cache.js:1
vaadin-3-1a44b245d20aa3c33130.cache.js:1:153
How is someone who isn't a Vaadin insider supposed to decode that stuff and analyse what's going wrong here?
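For reference, the service-worker complaint above can be checked outside the browser; a minimal sketch (assuming the localhost:8085 address from the log above), to see which Content-Type the server actually returns for the service-worker scripts:
# If these report text/html, the requests are falling through to the index page
# instead of being served as JavaScript:
curl -sI http://localhost:8085/sw.js | grep -i '^content-type'
curl -sI http://localhost:8085/sw-runtime-resources-precache.js | grep -i '^content-type'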

I got the issue replicated, and the problem is that in development mode I got a 400 (Bad Request) in the console from Flow.ts, with the faulty response Error 400 Invalid location: Location parameter missing from bootstrap request to server.
The fix was to delete the ./frontend/generated folder, after which the application worked as it should when running mvn clean jetty:run. But the vaadin:clean-frontend goal should remove the generated folder in frontend, which at least for me was the problem.
Check the inspector and look whether there are any exceptions in the console.
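In command form, that clean-up amounts to something like the following sketch (vaadin:clean-frontend and jetty:run are the Maven goals mentioned above; adjust to your build):
# remove the stale generated frontend files manually, in case the goal leaves them behind
rm -rf frontend/generated
mvn vaadin:clean-frontend
# then rebuild and run
mvn clean jetty:run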

I just upgraded to v22.0.1. With that new version my application now loads again! Beats me what was broken in versions 19.0.x - 21.0.4 that caused my application's UI to not load.
But there is still one issue:
The initial page is not the application's main page but some odd self-additionalManifestEntries page (see my follow-up question
Vaadin v22: Odd page displayed each time after login to application). I always need to reload the main page to enter my actual application.

Related

Jenkins does not abort on SVN error E175002

I am calling svn ls from Jenkins on a SVN directory to get a list of paths that match a certain pattern (that I later process further).
Maybe not very nice, but this is currently how it works:
def proc = bat (returnStdout: true, script: '@svn ls https://path/to/my/repo/trunk -R --trust-server-cert-failures=unknown-ca --non-interactive | findstr /R "^[^_].*_src/$" | findstr /R "^FolderA ^FolderB" | findstr /V "_test"').trim()
Problem
Sometimes, due to connection issues, the svn ls command fails and the Jenkins job aborts (which is perfectly OK because it is the expected behavior).
Sometimes, however, it seems that only some folders/sub-folders are temporarily inaccessible, and I get an error message but apparently no error exit code from svn ls:
svn: E175002: Unexpected server error 500 'Internal Server Error' on 'path/to/a/folder
This is a problem, because the pipeline does not abort but continues, and the content of proc contains only part of the result, not the full result.
Is there any way to detect this case when the error occurs? Please note that the occurrence of the E175002 error is not my problem; its detection is.
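One way to make such a partial failure visible (a sketch only, assuming the agent can run a POSIX sh step instead of bat): keep svn's stderr separate and fail explicitly whenever an E-code shows up, instead of letting the findstr pipeline mask it:
#!/bin/sh
# hypothetical repo URL, mirroring the bat one-liner above
REPO_URL="https://path/to/my/repo/trunk"
# keep stderr separate so "svn: E175002" can be detected even if svn ls
# still exits with 0 after listing only part of the tree
svn ls "$REPO_URL" -R --trust-server-cert-failures=unknown-ca --non-interactive \
    >svn_out.txt 2>svn_err.txt || { cat svn_err.txt; exit 1; }
if grep -q "svn: E" svn_err.txt; then
    echo "Partial SVN failure detected:"
    cat svn_err.txt
    exit 1
fi
# same filtering as the findstr chain above
grep -E '^[^_].*_src/$' svn_out.txt | grep -E '^(FolderA|FolderB)' | grep -v '_test'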

Error running ghost on Windows 10

I installed Ghost and Ghost-CLI on Windows 10. When I run ghost start I get the error below. How can I fix this? It seems related to the command that checks file and folder permissions. Note that I'm running Ghost on the D: drive.
By the way, if I run ghost run it works.
D:\onlinehelp>ghost start
Process manager 'systemd' will not run on this system, defaulting to 'local'
√ Checking current folder permissions
√ Validating config
× Checking folder permissions
× Checking file permissions
√ Checking memory availability
One or more errors occurred.
1) Checking folder permissions
Message: Command failed: C:\WINDOWS\system32\cmd.exe /q /s /c "find ./ -type d ! -perm 775 ! -perm 755"
FIND: Parameter format not correct
Exit code: 2
2) Checking file permissions
Message: Command failed: C:\WINDOWS\system32\cmd.exe /q /s /c "find ./ -type f ! -path "./versions/*" ! -perm 664 ! -perm 644"
File not found - ./
File not found - -TYPE
File not found - F
File not found - !
File not found - -PATH
File not found - !
File not found - -PERM
File not found - 664
File not found - !
File not found - -PERM
File not found - 644
Exit code: 1
Debug Information:
OS: Microsoft Windows, v10.0.16299
Node Version: v8.9.1
Ghost-CLI Version: 1.7.2
Environment: production
Command: 'ghost start'
Additional log info available in: C:\Users\pablo\.ghost\logs\ghost-cli-debug-2018-05-01T16_58_30_857Z.log
Try running ghost doctor to check your system for known issues.
Please refer to https://docs.ghost.org/v1/docs/troubleshooting#section-cli-errors for troubleshooting.
D:\onlinehelp>
This isn't an ideal solution but it worked for me.
Ghost uses Linux's find command to check permissions; this command does not exist on Windows (or at least the Windows find command does not accept the same arguments).
I was pretty sure my permissions were fine, so I decided to bypass the check.
To do so, locate where ghost-cli is installed globally; in my case it was
C:\Users\your-name-here\AppData\Roaming\npm\node_modules\ghost-cli
In there, you want to find lib\commands\doctor\checks\check-permissions.js
You will notice a line that starts with
return execa.shell(
This is the line we want to avoid. To do so, we can return a result before it is run; in my case I added the line return Promise.resolve();
e.g.
return Promise.resolve();
return execa.shell(checkTypes[type].command, {maxBuffer: Infinity}).then((result) => {
...
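If you are not sure where the global ghost-cli installation lives, npm can tell you; a small sketch (run from e.g. Git Bash, since the $( ) substitution is a POSIX shell feature):
# print the global node_modules directory and check for the permission-check module
npm root -g
ls "$(npm root -g)/ghost-cli/lib/commands/doctor/checks/check-permissions.js"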

opensipsctl start gives an error: opensips.pid does not exist

When I run the opensipsctl start command to start OpenSIPS, I get this error:
ERROR: PID file /var/run/opensips.pid does not exist -- OpenSIPS start failed
So please help me to solve it.
Open up opensipsctl; it includes the file opensipsctlrc, which defines $PID_FILE as /var/run/opensips.pid.
Then in opensipsctl, when you run start, one of the checks is:
if [ ! -s $PID_FILE ] ; then
    echo
    merr "PID file $PID_FILE does not exist -- OpenSIPS start failed"
    exit 1
fi
Which is saying: if the check "/var/run/opensips.pid exists and is bigger than 0 bytes" fails, then echo out the above error.
This means the file isn't being created.
If you look just above that line, it does:
if [ $SYSLOG = 1 ] ; then
    $OSIPSBIN -P $PID_FILE $STARTOPTIONS 1>/dev/null 2>/dev/null
else
    $OSIPSBIN -P $PID_FILE -E $STARTOPTIONS
fi
Which is where OpenSIPS actually starts. I would suggest adding the following to your opensips.cfg if you haven't already:
# Logging
debug=6
log_stderror=no
log_facility=LOG_LOCAL0
Now everything will be logged to /var/log/syslog on startup.
Try starting again, then look at that log for info about what has happened.
Another thing to check is whether the user you're running OpenSIPS as has permission to access the directory it's trying to create the PID file in.
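A quick sketch for that last check (replace opensips with whatever user the daemon actually runs as; it may simply be root):
# who owns the PID directory, and can the daemon's user write to it?
ls -ld /var/run
sudo -u opensips test -w /var/run && echo "writable" || echo "NOT writable"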
I had the same error & it was driving me mad as well. I managed to trace it down to one of two things - I had both!
1/ A misconfiguration in the OpenSIPS config file. journalctl -xe should be able to tell you what the error is
2/ Something else is listening on the port that you are trying to listen on
For 2, you can try the below, if you have Ubuntu, to see if anything is already listening on that port
lsof -i :5060
I was able to see the logs and fix the issue with the steps below:
Set log_level=4 in opensips.cfg to view debug logs in /var/log/syslog.
debug is deprecated in version 2.4 and higher.
You can refer here for the different log levels.

waff wiki function in ns-3 does not get parameters

In the ns-3 simulator documentation they provide a simple bash function to ease your life:
function waff {
    CWD="$PWD"
    cd $NS3DIR
    ./waf --cwd="$CWD" $*
    cd -
}
This function is supposed to execute the ./waf program situated in the ns-3 root folder, but against the folder you are actually in.
So in the case of ~/project$ waff --run first, waf will run the first script in the ~/project folder.
But if I try to run any simulation by adding a parameter to the script's command, like ~/project$ waff --run "first --PrintHelp", it throws an error:
waf: error: no such option: --PrintHelp.
It only works when I actually run the scripts from the root folder without the waff function.
How do I modify the function so it expands $* into an argument between double quotes?
Well, I feel embarrassed because the solution was way easier than expected.
If anyone using DCE has the same problem, it's as easy as quoting the $*:
./waf --cwd="$CWD" $*
with:
./waf --cwd="$CWD" "$*"
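As a side note (my own illustration, not part of the original answers): quoted, "$*" joins all arguments into one word, while "$@" preserves each argument separately, which is why the variant below works as well:
# minimal demonstration of the difference between "$*" and "$@"
show_args() { printf '[%s]\n' "$@"; }
star() { show_args "$*"; }   # all arguments collapsed into one word
at()   { show_args "$@"; }   # arguments preserved individually
star --run "first --PrintHelp"   # prints: [--run first --PrintHelp]
at   --run "first --PrintHelp"   # prints: [--run] then [first --PrintHelp]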
This function works for me with bash (supposing you have defined the environment variable $NS3DIR):
function waff {
    CWD="$PWD"
    cd $NS3DIR >/dev/null
    ./waf --cwd="$CWD" "$@"
    cd - >/dev/null
}
Proof that it works:
$ waff --run "wifi-simple-adhoc --help"
Waf: Entering directory `/home'
Waf: Leaving directory `/home'
'build' finished successfully (2.013s)
ns3.22-wifi-simple-adhoc-debug [Program Arguments] [General Arguments]
Program Arguments:
--phyMode: Wifi Phy mode [DsssRate1Mbps]
--rss: received signal strength [-80]
--packetSize: size of application packet sent [1000]
--numPackets: number of packets generated [1]
--interval: interval (seconds) between packets [1]
--verbose: turn on all WifiNetDevice log components [false]
General Arguments:
--PrintGlobals: Print the list of globals.
--PrintGroups: Print the list of groups.
--PrintGroup=[group]: Print all TypeIds of group.
--PrintTypeIds: Print all TypeIds.
--PrintAttributes=[typeid]: Print all attributes of typeid.
--PrintHelp: Print this help message.
$ waff --run wifi-simple-adhoc --command-template=" %s --help"
Waf: Entering directory `/home'
Waf: Leaving directory `/home'
'build' finished successfully (1.816s)
ns3.22-wifi-simple-adhoc-debug [Program Arguments] [General Arguments]
Program Arguments:
--phyMode: Wifi Phy mode [DsssRate1Mbps]
--rss: received signal strength [-80]
--packetSize: size of application packet sent [1000]
--numPackets: number of packets generated [1]
--interval: interval (seconds) between packets [1]
--verbose: turn on all WifiNetDevice log components [false]
General Arguments:
--PrintGlobals: Print the list of globals.
--PrintGroups: Print the list of groups.
--PrintGroup=[group]: Print all TypeIds of group.
--PrintTypeIds: Print all TypeIds.
--PrintAttributes=[typeid]: Print all attributes of typeid.
--PrintHelp: Print this help message.

jenkins plugin for triggering build whenever any file changed in a given directory

I am looking for functionality where we have a directory with some files in it.
Whenever anyone makes a change to any of the files in the directory, Jenkins should trigger a build.
Is there any plugin or method for this functionality? Please advise.
Thanks in advance.
I have not tried it myself, but the FSTrigger plugin seems to do what you want:
FSTrigger provides polling mechanisms to monitor a file system and
trigger a build if a file or a set of files have changed.
If you can monitor the directory with a script, you can trigger the build with a HTTP GET, for example with wget or curl:
wget -O- $JENKINS_URL/job/JOBNAME/build
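For example, a minimal polling sketch along those lines (the watched path and state file are placeholders, and your Jenkins setup may additionally require authentication or a crumb for the build trigger):
#!/bin/sh
WATCH_DIR=/path/to/watched/dir          # placeholder
STATE_FILE=/tmp/watched-dir.checksum    # placeholder
# recompute a checksum over all files in the watched directory
current=$(find "$WATCH_DIR" -type f -exec md5sum {} + | sort | md5sum)
previous=$(cat "$STATE_FILE" 2>/dev/null)
# trigger the job only when something changed since the last run
if [ "$current" != "$previous" ]; then
    echo "$current" > "$STATE_FILE"
    wget -O- "$JENKINS_URL/job/JOBNAME/build"
fi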
Although only slightly related: it seems this issue was about monitoring static files on the system; however, there are many version control systems for just this purpose.
I answered this in another post, if you're using git to track changes on the files themselves:
#!/bin/bash
set -e
job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
    for inner in outer:
        print inner
"
_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`
if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
    echo "[INFO] no changes detected in ${FILTER_PATH}"
    exit 0
else
    echo "[INFO] changed files detected: "
    for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
        echo " $a_file"
    done;
fi;
You can add the check directly to the top of the job's exec shell, and it will exit 0 if no changes are detected. Hence, you can always poll the top level of the repo for check-ins to trigger a build, and only complete a build if the files in question change.
