ios-webkit-debug-proxy gets disconnected from ipad while running tests - ios

I am trying to run tests on a physical iPad using Appium and ios-webkit-debug-proxy, but ios-webkit-debug-proxy gets disconnected when I try to run the tests. It gives an error -
Invalid message _rpc_applicationUpdated: <dict>
<key>WIRApplicationIdentifierKey</key>
<string>PID...</string>
<key>WIRIsApplicationProxyKey</key>
<false/>
<key>WIRApplicationNameKey</key>
...
</dict>
Disconnected
I found some forums where it was mentioned to run this command -
.bin/ios-webkit-debug-proxy-launcher.js -c UDID -d
from the appium folder, but when I run the command, I get an error -
"module.js: ..throw err Error: Cannot find module
'underscore' at Function.Module_resolveFuleName (module.js:336:15) ....
So that solution does not work for me either.

The command above in the question (.bin/ios-webkit-debug-proxy-launcher.js -c UDID -d) eventually worked for me.
It wasn't working initially because I was downloading ios-webkit-debug-proxy-launcher.js again from GitHub instead of using the .js file from the current installation (which is presumably why it couldn't find the underscore module), and also, I guess, some ports were not freed from the previous ios_webkit_debug_proxy run. Hence it was throwing an error. On a Mac the installation should in most cases be under /usr/local/lib/..
Also, to find and kill existing instances, this worked for me -
ps aux | grep portNumber
OR
ps -efl | grep ios_webkit_debug_proxy
and then
kill -9 PID
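If the leftover process is holding a port rather than showing up by name, checking the port directly also works. This is a minimal sketch assuming the default ios_webkit_debug_proxy ports (9221 for the device list, 9222 and up per device); adjust the port if you launch the proxy with a different config:
lsof -i tcp:9221 -t             # print the PID(s) still listening on the proxy port
kill -9 $(lsof -i tcp:9221 -t)  # free the port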

Related

How do I resolve this OSError: [Errno 48] Address already in use error while working with the byob botnet on GitHub?

(I have seen other solutions to "Errno 48" issues on StackOverflow, but none have been successful yet.)
I am trying to develop a botnet using byob from GitHub here: https://github.com/malwaredllc/byob
I am encountering an address already in use error every time I run the command sudo ./startup.sh. It returns OSError: [Errno 48] Address already in use.
However, when I run ps -fA | grep python and try to kill the associated process (502 18126 16973 0 9:16PM ttys000 0:00.00 grep python) with kill -9 18126, I get this error: kill: kill 18126 failed: no such process.
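For context, the only line matched above is the grep process itself, which has already exited by the time kill runs, hence the "no such process" message. A more direct way to see what is actually holding the port (a minimal sketch, assuming lsof, which ships with macOS) would be:
sudo lsof -nP -iTCP:5000 -sTCP:LISTEN   # show whatever is listening on TCP port 5000
sudo kill -9 <PID>                      # <PID> is a placeholder for the PID shown above
It may also be worth checking that lsof output for macOS's own AirPlay Receiver, since on macOS 12 (Monterey) that feature can occupy port 5000 as well.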
Does anyone have any idea what to do?
I am using a "MacOS M1Pro Chip OS V12.0.1 Monterey". Also the program byob is trying to run on port 5000 of IPv4 127.0.0.1 (this is a generic IP not specifically mine). http://127.0.0.1/5000.
In case you try to reproduce the problem: you need to install docker.io or the Docker Desktop app depending on your OS, then navigate to cd <outer-dir>/byob-master/web-gui and execute sudo ./startup.sh. The code will not work without access to Docker, and the program needs to be run with admin permissions using the sudo prefix. The actual downloads take a while and it will prompt you to restart once. Then, when you run it again, the problem appears...
Please let me know if someone was able to fix this. Thanks!

ADB push failed with Windows Cmder Terminal

This issue is related to reported error #1121. Can the error be avoided by any other means?
The suggested change (converting the bash script to a .bat file) is not working for me.
An additional escape character can be added to avoid the error. My bad!
adb push --sync -p "issues.txt" "//sdcard" 2>&1
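To confirm the push actually landed after the command above, a minimal check is to list the file on the device (adb shell is available wherever adb push is):
adb shell ls -l /sdcard/issues.txt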

Listen error: unable to monitor directories for changes

I am getting the following error while running my Rails app on an Ubuntu server
FATAL: Listen error: unable to monitor directories for changes. Visit
https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers
for info on how to fix this.
I have followed the above GitHub page, but I was unable to write to max_user_watches, which was set to 8192, and I want to set it to 524288.
The file /proc/sys/fs/inotify/max_user_watches (shown with cat) is read-only. I tried to grant write permissions, but I got a permission denied error even with root access.
Thanks in advance!
8192 is way too small; try 524288 as explained in the wiki page: https://github.com/guard/listen/blob/master/README.md#increasing-the-amount-of-inotify-watchers
Listen uses inotify by default on Linux to monitor directories for
changes. It's not uncommon to encounter a system limit on the number
of files you can monitor. For example, Ubuntu Lucid's (64bit) inotify
limit is set to 8192.
and
If you are running Debian, RedHat, or another similar Linux
distribution, run the following in a terminal:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you are running ArchLinux, run the following command instead
echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
Just try to execute this from your console
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
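To confirm the new limit took effect, just read the proc file back; there is no need to write to /proc directly, since sysctl applies the change for you:
cat /proc/sys/fs/inotify/max_user_watches   # should now print 524288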
Hope this works for you.
For others who may have this issue: I had a VM disconnect which left the previous Rails server running. Running the command below resolved the issue without needing to raise the number of watchers.
kill -9 $(lsof -i tcp:3000 -t)
In my case, I just needed to close the terminal and then start it again. After that, running the rails c command works :)
Deleting Gemfile.lock and running bundle in a terminal in the project directory worked for me.
This error occurred for me because I had a number of Ruby processes running that I was unaware of. I just needed to terminate them and all was good.
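A minimal way to spot and stop those stray processes, assuming a standard Linux or macOS shell (the [r] in the pattern just keeps grep from matching its own line):
ps aux | grep [r]uby   # list running Ruby processes
kill -9 <PID>          # <PID> is a placeholder for each offending process id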
I had this issue during development while running rake (even with rake -h), and the solution from https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers did not work, nor did killing the running Ruby processes, closing the terminal, or even restarting the computer.
To avoid this error I made a fresh, clean clone of my project and then rake was working (maybe git clean -fdx could have worked, but I did not try it).
I was running rake version 13.0.3, rails 6.1.1, ruby 2.7.2p137.
Adding to @mayur-shah's answer:
It worked for me after closing the server and console. So, if you are running a Rails server/console, close that first.
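If the old server process is hard to track down, Rails normally records its pid in tmp/pids/server.pid (assuming a default setup), so a quick way to stop it from the project root is:
kill -9 $(cat tmp/pids/server.pid)   # tmp/pids/server.pid is Rails' default pid file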

Error while running Jenkins in Docker

I am trying to run a Docker container with Jenkins in it using the command below:
docker run --rm -p 2222:2222 -p 9080:9080 -p 8081:8081 -p 9418:9418 -ti jenkinsci/workflow-demo
I continuously get the errors below:
INFO: Failed mkdirs of /var/jenkins_home/caches
[7412] Connection from 127.0.0.1:57701
[7412] Extended attributes (16 bytes) exist
[7412] Request upload-pack for '/repo'
[4140] [7412] Disconnected
[7415] Connection from 127.0.0.1:39829
[7415] Extended attributes (16 bytes) exist
[7415] Request upload-pack for '/repo'
[4140] [7415] Disconnected
I am following: https://github.com/jenkinsci/workflow-aggregator-plugin/blob/master/demo/README.md
My configuration:
OS : CentOS Linux release 7.2.1511 (Core)
user : jenkins
Checked inside the Docker container: the directory /var/jenkins_home/caches was being created as the jenkins user and contained another directory: git-f20b64796d6e86ec7654f683c3eea522
EVERYTHING IS DEFAULT
If I google that error, I find this page: https://recordnotfound.com/git-plugin-jenkinsci-31194/issues (I know, not the project you're looking at, but maybe the same or a similar issue). If you do a text search on that page for the error, you'll see the line:
fix logging "Failed mkdirs of /var/jenkins_home/caches" when the directory already exists
It indicates that this is an open issue that was logged 11 days ago, albeit for a different repo. If you delete the folder, does that fix the issue? Maybe monitor that bug report for a fix, or log an issue against the workflow-aggregator-plugin.
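If you want to try deleting that folder inside the running container, a rough sketch would be (the container id is a placeholder; take the real one from docker ps):
docker exec -it <container-id> rm -rf /var/jenkins_home/caches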

Icinga check_jboss "NRPE: unable to read output"

I'm using Icinga to monitor some servers and services. Most of them run fine. But now I'd like to monitor a JBoss AS on one server via NRPE, so I'm using the check_jboss plugin from MonitoringExchange. Each time I try running a test command from my Icinga server via NRPE, I get a NRPE: unable to read output error. When I execute the command directly on the monitored server, it runs fine. It's strange that the execution on the monitored server takes around 5 seconds to return an acceptable result, but the NRPE execution returns the error immediately. Raising the NRPE timeout didn't solve the problem. I also checked the permissions of the check_jboss plugin and set them to 777, so there should be no error there.
I don't think there's a general issue with NRPE, because there are also some other checks (e.g. check_load, check_disk, ...) running via NRPE and they all work fine. The permissions of those plugins are analogous to my check_jboss plugin.
Here is one sample execution on the monitored server, which runs fine:
/usr/lib64/nagios/plugins/check_jboss.pl -T ServerInfo -J jboss.system -a MaxMemory -w 3000: -c 2000: -f
JBOSS OK - MaxMemory is 4049076224 | MaxMemory=4049076224
Here are two command executions via NRPE from my Icinga server. Both commands are configured correctly:
./check_nrpe -H xxx.xxx.xxx.xxx -c check_hda1
DISK OK - free space: / 47452 MB (76% inode=97%);| /=14505MB;52218;58745;0;65273
./check_nrpe -H xxx.xxx.xxx.xxx -c jboss_MaxMemory
NRPE: Unable to read output
Does anyone have a hint for me? If further config-information needed please ask :)
Try to rule out SELinux either by disabling it globally or by changing the SELinux type to nagios_unconfined_plugin_exec_t.
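A rough sketch of both approaches (the plugin path is taken from the question; setenforce 0 only switches SELinux to permissive mode until the next reboot):
sudo setenforce 0   # temporarily put SELinux into permissive mode to see whether it is the culprit
sudo chcon -t nagios_unconfined_plugin_exec_t /usr/lib64/nagios/plugins/check_jboss.pl   # or relabel just the plugin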
