How to get around TPTP's IWAT0435E in Windows

I have just installed TPTP's latest version (4.6.1) and am using Eclipse build 20090621-0832 on Windows 7 (User Account Control enabled).
I'm trying to profile an application but am getting the following error under the Monitor tab:
IWAT0435E An error occurred when connecting to the host.
Anyone know how to resolve this?

For Windows, this may be related to a missing reference to the Integrated Agent Controller (IAC).
The Integrated Agent Controller (IAC) is a new feature in TPTP workbench which allows users to profile a Java application locally and to run a TPTP Test locally without needing the standalone Agent Controller on the local machine.
Profiling on a remote machine or running a TPTP test on a remote machine would still need the Agent Controller on that remote machine.
The biggest benefit of the Integrated Agent Controller is that it simplifies the usage of TPTP functionality in the local scenario by removing the dependency on the local standalone Agent Controller. There is no need to install and configure the local standalone Agent Controller when using the Profiling and Logging perspective and Test perspective locally.
However, that local agent might not be referenced correctly in one of the TPTP config files, either because:
it is already referenced in the workspace, but incorrectly
(see workspace\.metadata\.plugins\org.eclipse.tptp.platform.iac.administrator\config\serviceconfig.xml),
or because the IAC service is not started automatically.
Check the connection with localhost (via Window -> Preferences -> Agent Controller -> Hosts -> Test connection).
See this thread (and its follow up) for a working setup.
This thread might also be relevant.
For those on Linux:
It may be related to gcc version or libstdc++xx-glibcy.y version, as mentioned in bug 244213 or bug 240677
This blog post describes the kind of workaround that was possible in the author's case. Maybe it could help here.
Extract:
A workaround for this is described in this blog post.
It's pretty straightforward: get the deb file libstdc++2.10-glibc2.2_2.95.4-27_i386.deb and install it with sudo dpkg ....
(update Dec. 2010: gcc before 3.x is no longer available.
You can try with, for instance, (gcc-3.3) libstdc++5_3.3.6-20_i386.deb, replacing all references to the .deb above with this one:
sudo dpkg --install libstdc++5_3.3.6-20_i386.deb
Or, if you are running on a 64-bit system:
sudo dpkg --force-architecture --install libstdc++5_3.3.6-20_i386.deb
Not tested though.)
The problem now is that the workaround involves root actions.
First, one doesn't always have the required root permissions to do so.
More importantly, however, I don't like installing external debs and other stuff as root, because it can interfere with the packaging system, pollute system directories and break things in ugly, or worse, unrecoverable ways.
I always install third party stuff (with which I mean things that are not available through the standard packaging system) in my home directory under ~/usr.
For example, I build autotools-managed software with the option --prefix=~/usr, so things end up in ~/usr/bin, ~/usr/lib, ~/usr/share, etcetera.
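For example, an autotools build into ~/usr typically looks like this (written with $HOME so the shell expansion is unambiguous):
./configure --prefix="$HOME/usr"
make
make install   # no sudo needed; everything lands under ~/usr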
I managed to install the deb file described above into ~/usr as well, as follows.
First, I extracted the deb file to a temporary directory to get an idea of its contents:
dpkg -x libstdc++2.10-glibc2.2_2.95.4-27_i386.deb tmp/
This resulted in the following file tree:
tmp/
`-- usr
|-- lib
| |-- libstdc++-3-libc6.2-2-2.10.0.so
| `-- libstdc++-libc6.2-2.so.3 -> libstdc++-3-libc6.2-2-2.10.0.so
`-- share
`-- doc
`-- libstdc++2.10-glibc2.2
|-- README.Bugs.gz
|-- README.Debian
|-- changelog.Debian.gz
`-- copyright
So I just had to move the shared libraries libstdc++*.so to ~/usr/lib, or alternatively, extract the deb file directly into my home directory:
dpkg -x libstdc++2.10-glibc2.2_2.95.4-27_i386.deb ~/
One important missing piece of the puzzle is to make sure these shared libraries can be found at run time (or compile time in case you want to compile against them).
The trick is to set the LD_LIBRARY_PATH and LD_RUN_PATH environment variables, typically in your ~/.bashrc or ~/.profile startup scripts:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/usr/lib
export LD_LIBRARY_PATH
LD_RUN_PATH=$LD_RUN_PATH:~/usr/lib
export LD_RUN_PATH
And Eclipse TPTP profiling lived happily ever after. I hope.

I found that IAC was disabled by default.
Try Window > Preferences > Agent Controller > Integrated Agent Controller and enable "Activate the Integrated Agent Controller on demand".

Related

What files or directories of a release are the bare minimum to run a release?

Let's say, I have a completely new VPS server which I've just rolled out, which I haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS server Linux distribution- and version-wise.
In the directory _build/prod/rel/my_app123 there have been generated 4 subdirectories:
bin
erts-12.3
lib
releases
Will copying the content of rel/my_app123/, that is, these 4 subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well, such as Elixir and Erlang?
How about production dependencies from mix.exs? Or have these already been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" and the like files, and doesn't use a database.
When you run mix release, it bundles into the release directory all of your Elixir/Erlang dependencies for the MIX_ENV in question, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
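Roughly, deploying could look like this (host and paths are placeholders; my_app123 is taken from the question, and mix release prints the exact commands for your own app):
# build the release locally with prod config
MIX_ENV=prod mix release
# copy the whole release directory to the VPS
scp -r _build/prod/rel/my_app123 user@your-vps:/opt/my_app123
# on the VPS, start it in the foreground (or use "daemon" to run it in the background)
/opt/my_app123/bin/my_app123 start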
Things that are not included:
any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version of that installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM: the compiled BEAM code needs to run on a substantially similar architecture. Build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. It's also probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included on the docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on a similar OS to what you're using for production, and make sure the production AMI (image) has those non-Elixir dependencies on it, or will at the time of deployment, perhaps using apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether they have the option for customizing the base machine image (maybe with Packer or Ansible).
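On a Debian/Ubuntu-based image, for example, that could be as simple as something like this (the package names are only examples; they depend on the distro release and on what your app actually links against):
sudo apt-get update
sudo apt-get install -y openssl libncurses6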
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I find is often useful is the generation of a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
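Deploying that tarball could then look roughly like this (server, target path, and the default yourapp-1.0.0 name are placeholders; the bin script is named after the release configured in mix.exs):
# copy the tarball to the target machine and unpack it
scp _build/dev/yourapp-1.0.0.tar.gz user@server:/tmp/
ssh user@server 'mkdir -p /opt/yourapp && tar -xzf /tmp/yourapp-1.0.0.tar.gz -C /opt/yourapp'
# start the release
ssh user@server '/opt/yourapp/bin/yourapp start'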

Same VSCode-Settings for the whole crew?

We want to have the same VSCode settings for the whole crew of developers. It would also be nice to have a one-line command to tear VSCode down and restart it from scratch with predefined settings and plugins, so that you don't have to worry about trying out plugins and getting back to the known state. A kind of config-as-code for VSCode.
I already found:
https://code.visualstudio.com/docs/editor/extension-gallery#_command-line-extension-management
https://github.com/microsoft/vscode-dev-containers
https://marketplace.visualstudio.com/items?itemName=Shan.code-settings-sync&ssr=false#qna
https://github.com/gantsign/ansible-role-visual-studio-code-extensions
https://code.visualstudio.com/docs/remote/containers
https://github.com/gantsign/ansible-role-visual-studio-code
But none of these provides a good solution for me.
We are using Mac and Windows machines and develop most of the time locally (not remotely in the cloud or the like).
I imagine like having a script like
.... projectname up
or
.... projectname reset
(or
.... projectname down)
to receive/reset the configured settings and newest plugins that have been configured for the project.
Do you have any ideas, or do you already use a similar solution?
After doing a lot of research, playing with Docker, Ansible and so on... it seems that, although I excluded it at first, the Settings Sync plugin from Shan Khan is the way to go. It has roughly 1 million installs!
The only dependency is that you need a GitHub account to host your configs. That is what held me back at first, but it shouldn't be that much of a problem to get one for everyone in the team and connect it to something like a company GitHub account.
Copy the files settings.json and keybindings.json to your target machine(s) to copy the settings. You can find those files here:
Win: ~\AppData\Roaming\Code\User
Mac: ~/Library/Application Support/Code/User/
Linux: ~/.config/Code/User
You can copy extensions from ~/.vscode/extensions (Linux/Mac) or C:\Users\username\.vscode\extensions (Windows).
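If the code command-line launcher is on your PATH, another option is to export the extension list and re-install it on the target machine instead of copying the folder, roughly like this:
# on the source machine: dump the installed extension IDs
code --list-extensions > extensions.txt
# on the target machine: install each one
while read -r ext; do code --install-extension "$ext"; done < extensions.txt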
FalcoGer's answer should explain how to copy the files in a way VS Code will pick them up. If you only need to copy the config files once, this solution would be fine.
If you need to "sync" these config files on a regular basis, I would advise to create a Git repository where all config files will be stored.
When cloning the repo to local machines, you can symlink the files to the config destinations (see FalcoGer's answer). Then when you need to "sync", you only have to run git pull and restart VS Code to apply the changes.
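A rough sketch of that approach on macOS (the repo URL and file layout are only examples; adjust the destination path per OS from the list above):
# clone the shared settings repo once
git clone git@github.com:your-org/vscode-settings.git ~/vscode-settings
# symlink the tracked files into VS Code's user directory (macOS path shown)
ln -sf ~/vscode-settings/settings.json ~/Library/Application\ Support/Code/User/settings.json
ln -sf ~/vscode-settings/keybindings.json ~/Library/Application\ Support/Code/User/keybindings.json
# later, to "sync": pull and restart VS Code
cd ~/vscode-settings && git pull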
For your other script-related question, you could create a CLI for this. Python would be the most friendly way to do this. You can find an example here.

How to install waf?

I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf/
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=~/git/waf/:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory. I can't find a reference, but this is the way in which waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
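Roughly, that per-project workflow looks like this (the project path is just an example):
# build waf once in the waf checkout
./waf-light configure build
# copy the generated waf script into the root of your own project and use it there
cp waf ~/projects/my_project/
cd ~/projects/my_project
./waf configure build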
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
Fedora (at least Fedora 22) has a yum package for waf, so you could see that it's possible to do a system install of waf, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
Copy the waf to /usr/bin/waf
As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
If you want to use waf with Python3, repeat Steps 2-3 running the Python script /usr/bin/waf under Python3. Under Python3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
(Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
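Put together, the steps above might look like this as a shell session (run as root; the exact version/hash in the directory name will differ on your system):
cp waf /usr/bin/waf
/usr/bin/waf --version                # the first run unpacks /usr/bin/.waf-<version>-<hash>
d=$(ls -d /usr/bin/.waf-*)            # the hidden directory waf just created
mv "$d" "/usr/lib/${d#/usr/bin/.}"    # e.g. /usr/lib/waf-2.0.19-b2f63c807a...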
That said, here's something to consider, as another answer also said: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf without fear that a project will fail to build when there are newer versions of waf. Thus, the one-global-version use case seems to be not officially supported.

is symfony system wide?

I have a directory with some .php files in it... do I need to install Symfony in that directory so that I can run symfony commands from that directory? I tried: pear channel-discover pear.symfony-project.com
and got:
Channel "pear.symfony-project.com" is already initialized
Then I tried: pear install symfony/symfony
and got:
WARNING: configuration download directory "/build/buildd/php5-5.3.2/pear-build-download" is not writeable. Change download_dir config variable to a writeable dir to avoid this warning
Cannot install, php_dir for channel "pear.symfony-project.com" is not writeable by the current user
How do I fix this, please? Must I run it as sudo?
Thanks
If I remember well, Symfony is "system-wide" if installed through PEAR (like you are trying to do).
Now it is recommended by the manual to do a "standalone" install through SVN. See http://www.symfony-project.org/gentle-introduction/1_4/en/03-Running-Symfony
Likewise, the sandbox brings a "standalone" installation of Symfony (you will have to be in the project root directory to be able to run the symfony commands).
EDIT
Example of quick install using SVN:
mkdir /path/to/symfony
cd /path/to/symfony
svn checkout http://svn.symfony-project.com/tags/RELEASE_1_4_0 .
but it should be better to follow the manual and configure the external SVN sources.
I run servers where I have applications that use different versions of Symfony simultaneously. I found it much easier to deal with the SVN checkout than the PEAR install.
So what I've done is an SVN checkout of each version that I need into its own folder. In my case I check them out to /usr/share/symfony/(version)
Then I configure the ProjectConfiguration.class.php to reference whatever version is needed, using the line:
require_once '/usr/share/symfony/1.x.x/lib/autoload/sfCoreAutoload.class.php';
If I need to switch to a different symfony version, I can just change that config.
That is, assuming all my code is compatible with that version.

What do you use if you wish to install an Erlang library?

Do you use CEAN, copy the source and compile it, copy the BEAM files, or something else? I need to distribute some Erlang code and I'm not sure which to choose.
I've been working on EPM, an Erlang package manager. It pulls from GitHub. It's non-invasive and doesn't require installing anything on your system other than downloading an escript. It works as follows:
jvorreuter$ ./epm install ibrowse mochiweb
epm v0.1.0, 2010
===============================
Install the following packages?
===============================
+ epm-mochiweb-master
+ cmullaparthi-ibrowse-master
([y]/n) y
+ downloading http://github.com/epm/mochiweb/tarball/master
+ running mochiweb build command
+ running mochiweb install command
+ downloading http://github.com/cmullaparthi/ibrowse/tarball/master
+ running ibrowse build command
+ running ibrowse install command
You can read more about it at http://www.jkvor.com/erlang-package-manager
The linked blog post is blank as of August 2013. The GitHub repository is at https://github.com/JacobVorreuter/epm
I clone the git repository, build it, and add the directory to my ERL_LIBS path. I hack the source for my private customizations. For those commits that are sensible, I publish.
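Roughly, that workflow looks like this (the repo URL, paths, and build command are just placeholders):
# clone and build the library (the build command depends on the project)
git clone https://github.com/someuser/somelib.git ~/erlang/somelib
cd ~/erlang/somelib && make
# let the code server find it, e.g. in ~/.bashrc
export ERL_LIBS=$HOME/erlang:$ERL_LIBS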
I am a much happier developer after finding git and being able to manage my own changes and still be able to get upstream changes that I can rebase my stuff on.
I realize that this looks raw for end users, but I am my own end user.
If I were to ship something to other end users, I would look into using .ez zip archive files, which the Erlang code loader can use. See the section "Loading of Code From Archive Files" on that page. Then provide a script that invokes erl with the correct arguments.
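Such a script might, for instance, add the archive to the code path when starting erl (mylib-1.0 is a made-up name; the path inside the .ez must point at the application's ebin directory):
erl -pa ./mylib-1.0.ez/mylib-1.0/ebin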
If the repository isn't available as a Git repository, I git-svn clone it. If I can't do that, I tend to stay away from it.
I use faxien (a package manager for Erlang releases and applications) from the Erlware project: http://www.erlware.org. It and sinan are essential tools for Erlang development :).
I use Agner on Mac, Linux, and Unix; and CEAN on Windows. CEAN's the only Erlang package manager with Windows support.
I package them on a Debian repository on Launchpad.
