I need to find out the directory in which the Dart script (which is currently running) is located. How do I go about doing this?
Previously, this was possible through new Options().script, but that class is no more. The arguments list from main(args) doesn't help, since it only contains arguments, not the executed Dart file (at least on Mac OS X).
According to the breaking-change announcements BREAKING CHANGE: dart:io Platform and Options are deprecated, and will be removed and Breaking Change: Platform.script is a Uri, dart:platform library is gone, new Options().script should be replaced by Platform.script.
Note that Platform.script returns a Uri, and you should consider using toFilePath to get the file path from the file URI.
As noted the simple method is:
import 'dart:io';
print(Platform.script);
> file:///home/..../dcli_unit_tester/bin/dcli_unit_tester.dart
This is a file URL, so to get the actual path use:
import 'dart:io';
print(Platform.script.toFilePath());
Of course, you wanted the directory rather than the script, so:
import 'dart:io';
import 'package:path/path.dart';
final pathToScript = Platform.script.toFilePath();
final pathToDirectory = dirname(pathToScript);
print(pathToDirectory);
> /home/..../dcli_unit_tester/bin
There are a few things you should understand when attempting to get the directory of the script: the directory will change depending on how you run the script.
In particular, if you are running your script from a globally activated package, you might not (and should not) want to use the script's directory for storing things.
execute .dart from cli
Here is the output from Platform if you run it from the cli:
dart bin/dcli_unit_tester.dart -p
executable: dart
script: file:///home/..../dcli_unit_tester/bin/dcli_unit_tester.dart
execute script when globally activated
Now if you run the same script from a globally activated package:
pub global activate dcli_unit_tester
dcli_unit_tester -p
executable: dart
script: file:///home/..../.pub-cache/global_packages/dcli_unit_tester/bin/dcli_unit_tester.dart-2.14.2.snapshot
In this case the directory is in pub-cache, which is by nature transitory, so don't store anything there that you want to get back later.
execute from compiled script
Now if you compile the script:
dart compile exe bin/dcli_unit_tester.dart
bin/dcli_unit_tester.exe -p
executable: bin/dcli_unit_tester.exe
script: file:///home/..../dcli_unit_tester/bin/dcli_unit_tester.exe
execute from within a unit test
If you run the same code from a unit test, the script will be the test.dll file, not your script.
I will update this answer once I work out how to get the script directory when running under unit tests (if it is even possible).
using dcli
If you are building a console app you can use the dcli package:
import 'package:dcli/dcli.dart';
print(DartScript.self.pathToScript);
print(DartScript.self.pathToScriptDirectory);
> /home/..../dcli_unit_tester/bin/dcli_unit_tester.dart
> /home/..../dcli_unit_tester/bin
I want to compile UI files for PySide6, but I can't find how to install the pyside6-uic tool. Where can I get pyside6-uic? I downloaded PySide6, but the pyside6-uic command doesn't work.
There is a reference to it here in the documentation:
https://doc.qt.io/qtforpython/tutorials/basictutorial/uifiles.html#using-ui-files-from-designer-or-qtcreator-with-quiloader-and-pyside6-uic
Step 1. If you installed PySide6
a. In a venv, go to your venv's folder.
b. Globally, go to your Python installation folder.
Step 2. Go to Lib, then site-packages, then PySide6.
Step 3. Copy uic.exe (or uic if file extensions are hidden), create a folder called bin, and paste what you copied into it.
To compile ui files from:
QtDesigner: From the top menu select Form -> View Python Code... then click on the save icon (floppy disk) from the newly opened window.
Command Prompt: pyside6-uic.exe mainwindow.ui > ui_mainwindow.py
PowerShell: pyside6-uic.exe mainwindow.ui -o ui_mainwindow.py
If you are using a virtual environment: when you install PySide6 with pip in the virtual environment, there is a folder named Scripts, and the pyside6-uic.exe tool is in it.
If you have installed PySide6 globally on your system and you use Visual Studio Code, you can use the PySide2-vsc extension. Once it is installed, go to Preferences > Settings, search for the PySide2-vsc extension settings, and look for "Command to compile a .ui file into python". Then you can use that feature by right-clicking on .ui files.
I was able to solve this issue by adding this path to my %PATH% variable on Windows:
C:\Users\<YOUR_USER_PATH_NAME>\AppData\Roaming\Python\Python311\Scripts
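For example, to make it available in the current Command Prompt session only (making it permanent requires editing the environment variables in the system settings):
set PATH=%PATH%;C:\Users\<YOUR_USER_PATH_NAME>\AppData\Roaming\Python\Python311\Scripts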
The pyside6-uic tool is supposed to be installed automatically when installing the Python package.
Check if uic is in PATH
When using loadUiType, the Qt documentation (here) states that:
The internal process relies on uic being in the PATH. The pyside6-uic
wrapper uses a shipped uic that is located in the
site-packages/PySide6/uic, so PATH needs to be updated to use that if
there is no uic in the system.
But even then, I got the following error:
Cannot run 'pyside6-uic': "execvp: No such file or directory" -
Exit status QProcess::NormalExit ( 255 )
Check if 'pyside6-uic' is in PATH
For me, pyside6-uic was not located in site-packages/PySide6/uic. When reinstalling the module with pip, I noticed this message:
WARNING: The scripts pyside6-assistant, pyside6-designer, pyside6-genpyi,
pyside6-linguist, pyside6-lrelease, pyside6-lupdate, pyside6-rcc and pyside6-uic are
installed in '/Users/<user>/Library/Python/3.8/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning,
use --no-warn-script-location.
So make sure to add the right directory to your $PATH variable.
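For example, on macOS or Linux with the directory from the pip warning above (adjust for your Python version):
export PATH="$HOME/Library/Python/3.8/bin:$PATH"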
Once it's done, you will be able to use the pyside6-uic command to generate a Python class from a UI file :
pyside6-uic mainwindow.ui > ui_mainwindow.py
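Once the file is generated, you can import and use the class it contains. A minimal, hedged sketch; the ui_mainwindow module comes from the command above, and the Ui_MainWindow class name is an assumption that depends on the objectName of your form:
from PySide6.QtWidgets import QApplication, QMainWindow
from ui_mainwindow import Ui_MainWindow  # generated by pyside6-uic; class name assumed

app = QApplication([])
window = QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(window)  # builds the widgets described in the .ui file onto the window
window.show()
app.exec()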
Loading a .ui from code
You can also load a .ui file from your code using either:
loadUiType (doc page; a short sketch of this approach follows the QUiLoader example below):
This function generates and loads a .ui file at runtime, and it
returns a tuple containing the reference to the Python class, and the
base class.
or QUiLoader (doc page):
enables standalone applications to dynamically create user interfaces
at run-time using the information stored in UI files or specified in
plugin paths
from PySide6.QtCore import QFile
from PySide6.QtWidgets import QApplication
from PySide6.QtUiTools import QUiLoader
app = QApplication([])
ui_file = QFile("mainwindow.ui")
ui_file.open(QFile.ReadOnly)
loader = QUiLoader()
window = loader.load(ui_file)
ui_file.close()
window.show()
app.exec()
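For the loadUiType approach mentioned above, here is a minimal, hedged sketch; it relies on pyside6-uic/uic being reachable as described earlier, and the mainwindow.ui file name is a placeholder:
from PySide6.QtWidgets import QApplication
from PySide6.QtUiTools import loadUiType

# loadUiType compiles the .ui file at runtime and returns a
# (generated_class, base_class) tuple; "mainwindow.ui" is a placeholder name.
generated_class, base_class = loadUiType("mainwindow.ui")

class MainWindow(base_class):
    def __init__(self):
        super().__init__()
        self.ui = generated_class()
        self.ui.setupUi(self)

app = QApplication([])
window = MainWindow()
window.show()
app.exec()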
Generally speaking, use absolute paths to access your UI files. Relative paths are susceptible to errors.
I am new to NixOS and nix-shell and am still getting to know their idioms. Right now I have a Java project for which I am using nix-shell via direnv to load up the build tool chain, including a JDK and Bazel.
I would like the IDE - in my case IntelliJ - to use this same tool chain. My naive approach is to use a nix-shell script as follows, which is the default.nix in the root of my project, and the one picked up by direnv.
with import <nixpkgs> {};
stdenv.mkDerivation {
  name = "my-project";
  buildInputs = with pkgs; [
    jdk11
    bazel
    jetbrains.idea-ultimate
  ];
  shellHook = ''
    export JAVA_HOME="${pkgs.jdk11}/lib/openjdk"
    ln -s ${pkgs.jdk11}/lib/openjdk ./jdk
  '';
}
I can then launch IntelliJ with the following command from the shell:
$ idea-ultimate > /dev/null 2>&1 &
While it works, I have the following concerns:
Loading up the IDE into my command line shell seems really heavy, especially for the CI build.
It is going to get worse as I add other IDEs the team uses, like Eclipse.
It seems like the wrong way.
I can, of course, install these IDE packages using other Nix facilities, like home manager, which gives me the application launcher in the menu after the right config steps; however, I like directly launching the IDE from the shell to ensure the correct tool chain is in place and in correct paths.
My thought for a next step was to remove the IDE input from this default.nix and create custom nix files which include the inputs for the IDE and a launcher script to actually launch the IDE with nix-shell. My hope is that, if executed from the shell above, it will inherit those inputs, augment them with the IDE input, and then launch the IDE.
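A minimal sketch of that idea, with assumed file and attribute names (for example an ide.nix next to default.nix):
with import <nixpkgs> {};
mkShell {
  # Reuse the project tool chain (jdk11, bazel and the shellHook) from default.nix.
  inputsFrom = [ (import ./default.nix) ];
  # Add only the IDE here, so default.nix stays lean for CI.
  buildInputs = [ jetbrains.idea-ultimate ];
}
Running nix-shell ide.nix would then give a shell with both the tool chain and the IDE on the path, from which the IDE can be launched as before.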
Again, my goal is to use nix to launch my IDEs, and use the packages and configs setup by the default.nix which is in the root of the project to ensure consistency.
Suggestions, including alternative approaches, are appreciated.
I'm in the process of migrating entire CloudFormation stacks to Troposphere, including Lambda and Lambda-reliant CFN Custom Resources.
One of my goals is to circumvent the creation of template files altogether, making the Python code the sole "source of truth" (i.e. without template files that are created and can therefore be edited, causing config drift).
This requires the ability to:
Pass a file-like object to the SAM builder (instead of a file-name)
Call the AWS SAM builder from Python rather than the CLI
My first naive idea was that I would be able to import a few modules from aws-sam-cli, wrap io.StringIO around them (to hold the template as a file-like object) and presto! Then I looked at the source code for sam build and all hope left me:
I may not be able to use Docker/containers for building, as it will map the build environment, including template files.
AWS SAM CLI is not designed to have a purely callable set of library functions, similar to boto3. Close, but not quite.
Here is the core of the Python source:
with BuildContext(template,
                  base_dir,
                  build_dir,
                  clean=clean,
                  manifest_path=manifest_path,
                  use_container=use_container,
                  parameter_overrides=parameter_overrides,
                  docker_network=docker_network,
                  skip_pull_image=skip_pull_image,
                  mode=mode) as ctx:

    builder = ApplicationBuilder(ctx.function_provider,
                                 ctx.build_dir,
                                 ctx.base_dir,
                                 manifest_path_override=ctx.manifest_path_override,
                                 container_manager=ctx.container_manager,
                                 mode=ctx.mode)
    try:
        artifacts = builder.build()
        modified_template = builder.update_template(ctx.template_dict,
                                                    ctx.original_template_path,
                                                    artifacts)

        move_template(ctx.original_template_path,
                      ctx.output_template_path,
                      modified_template)

        click.secho("\nBuild Succeeded", fg="green")

        msg = gen_success_msg(os.path.relpath(ctx.build_dir),
                              os.path.relpath(ctx.output_template_path),
                              os.path.abspath(ctx.build_dir) == os.path.abspath(DEFAULT_BUILD_DIR))

        click.secho(msg, fg="yellow")
This relies on a number of imports from an aws-sam-cli internal library, with the build-focused ones being:
from samcli.commands.build.build_context import BuildContext
from samcli.lib.build.app_builder import ApplicationBuilder, BuildError, UnsupportedBuilderLibraryVersionError, ContainerBuildNotSupported
from samcli.lib.build.workflow_config import UnsupportedRuntimeException
It's clear that this means it's not as simple as creating something like a boto3 client and away I go! It looks more like I'd have to fork the whole thing and throw out nearly everything to be left with the build command, context and environment.
Interestingly enough, sam package and sam deploy, according to the docs, are merely aliases for aws cloudformation package and aws cloudformation deploy, meaning those can be used in boto3!
Has somebody possibly already solved this issue? I've googled and searched here, but haven't found anything.
I use PyCharm and the AWS Toolkit, which is great for development and debugging, and from there I can run SAM builds, but it's "hidden" in the PyCharm plugins - which are written in Kotlin!
My current work-around is to create the CFN templates as temp files and pass them to the CLI commands which are called from Python - an approach I've always disliked.
I may put in a feature request with the aws-sam-cli team and see what they say, unless one of them reads this.
I've managed to launch sam local start-api from a python3 script.
Firstly, pip3 install aws-sam-cli
Then the individual command can be imported and run.
import sys
from samcli.commands.local.start_api.cli import cli
sys.exit(cli())
... provided there's a template.yaml in the current directory.
What I haven't (yet) managed to do is influence the command-line arguments that cli() would receive, so that I could tell it which -t template to use.
Edit
Looking at the way aws-sam-cli integration tests work it seems that they actually kick off a process to run the CLI. So they don't actually pass a parameter to the cli() call at all :-(
For example:
class TestSamPython36HelloWorldIntegration(InvokeIntegBase):
    template = Path("template.yml")

    def test_invoke_returncode_is_zero(self):
        command_list = self.get_command_list(
            "HelloWorldServerlessFunction", template_path=self.template_path, event_path=self.event_path
        )

        process = Popen(command_list, stdout=PIPE)
        return_code = process.wait()

        self.assertEquals(return_code, 0)

    .... etc
from https://github.com/awslabs/aws-sam-cli/blob/a83aa9e620ff679ca740496a3f1ff4872b88894a/tests/integration/local/invoke/test_integrations_cli.py
See also start_api_integ_base.py in the same repo.
I think on the whole this is to be expected because the whole thing is implemented in terms of the click command-line application framework. Unfortunately.
See for example http://click.palletsprojects.com/en/7.x/testing/ which says "The CliRunner.invoke() method runs the command line script in isolation ..." -- my emphasis.
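That said, if driving the command in-process is good enough for experiments, click's test runner does let you pass an argument list explicitly. A hedged sketch (the --template option name mirrors the CLI's -t/--template flag, and start-api will block while the server runs):
from click.testing import CliRunner
from samcli.commands.local.start_api.cli import cli

# CliRunner.invoke runs the click command in isolation and lets us supply
# arguments directly instead of relying on sys.argv; output is captured.
runner = CliRunner()
result = runner.invoke(cli, ["--template", "other-template.yaml"])
print(result.exit_code)
print(result.output)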
I am using the following Python script to run sam cli commands. This should work for you too.
import json
import sys
import os

try:
    LAMBDA_S3_BUCKET = "s3-bucket-name-in-same-region"
    AWS_REGION = "us-east-1"
    API_NAME = "YourAPIName"
    BASE_PATH = "/path/to/your/project/code/dir"
    STACK_NAME = "YourCloudFormationStackName"
    BUILD_DIR = "%s/%s" % (BASE_PATH, "build_artifact")

    if not os.path.exists(BUILD_DIR):
        os.mkdir(BUILD_DIR)

    os.system("cd %s && sam build --template template.yaml --build-dir %s" % (BASE_PATH, BUILD_DIR))
    os.system("cd %s && sam package --template-file %s/template.yaml --output-template-file packaged.yaml --s3-bucket %s" % (BASE_PATH, BUILD_DIR, LAMBDA_S3_BUCKET))
    os.system("cd %s && sam deploy --template-file packaged.yaml --stack-name %s --capabilities CAPABILITY_IAM --region %s" % (BASE_PATH, STACK_NAME, AWS_REGION))
except Exception as e:
    print(e)
    sys.exit(1)
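A hedged variant of the same three steps using subprocess.run instead of os.system; it reuses the variables defined above, check=True raises on a non-zero exit code, and cwd replaces the "cd ... &&" prefix:
import subprocess

# Each call runs one sam command inside BASE_PATH and fails loudly on error.
subprocess.run(["sam", "build", "--template", "template.yaml", "--build-dir", BUILD_DIR], cwd=BASE_PATH, check=True)
subprocess.run(["sam", "package", "--template-file", "%s/template.yaml" % BUILD_DIR, "--output-template-file", "packaged.yaml", "--s3-bucket", LAMBDA_S3_BUCKET], cwd=BASE_PATH, check=True)
subprocess.run(["sam", "deploy", "--template-file", "packaged.yaml", "--stack-name", STACK_NAME, "--capabilities", "CAPABILITY_IAM", "--region", AWS_REGION], cwd=BASE_PATH, check=True)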
I'm doing long-running computations in an Electron app. I'm "forking" the node process using child_process.fork to execute a script in a separate process, so as not to block the renderer process.
The app is working fine provided that the script I'm launching with child_process.fork is in the directory I'm launching it from. What I'd like to do instead is to ship the script inside the binary (I'm using electron-builder to build one).
I've found out that the builder packs everything (including the script I'm interested in) into an .asar archive - can it be accessed by the binary at runtime?
child_process.spawn and using a language other than JS is an option too, but the problem persists - I don't know how to embed the script into the binary.
I solved the problem thanks to this response from another question.
I've added
node: {
  __dirname: true,
},
to webpack.config.js and used process.resourcesPath to resolve the file path.
child_process.fork can utilize the asar library, if given a script path inside an asar archive. To achieve this I used roughly
import path from 'path';
import { fork } from 'child_process';
const scriptPath = path.join(process.resourcesPath!, 'app.asar', fileName);
const args = [];
const child = fork(scriptPath, args);
child_process.execFile, on the other hand, can utilize the asar library to spawn a process from a binary contained in an asar archive. Note that it can only resolve the binary, not its parameters. So if you wanted to embed, say, a Python interpreter and a Python script in your Electron app, you should either package them into a single binary, extract the script to the file system and execute it with the embedded Python binary, or load its content into a variable and execute it directly using python -c "print(44)" with your script content as the -c argument.
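As an illustration, here is a hedged sketch of the execFile case; my-helper is a hypothetical binary packaged into app.asar:
import path from 'path';
import { execFile } from 'child_process';

// Electron resolves the executable path inside app.asar (extracting it to a
// temporary location), but file-path arguments are not resolved the same way.
const helperPath = path.join(process.resourcesPath, 'app.asar', 'my-helper');
execFile(helperPath, ['--version'], (error, stdout) => {
  if (error) {
    console.error(error);
    return;
  }
  console.log(stdout);
});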
I have a simple dart project that has one executable in bin folder (test.dart). I have activated it with dart global activate and now I can run it directly with just typing the name of that executable file.
Inside that dart file I would like to know the path of that script. Basically for now I'm just printing something like this:
print('1: ' + Platform.script.toString());
print('2: ' + Platform.script.path);
print('3: ' + Platform.executable);
print('4: ' + Platform.packageRoot);
print('5: ' + Platform.resolvedExecutable);
When I run it directly:
test
or with pub:
pub global run test
or even with package name:
pub global run test:test
I always get the same result:
1: http://localhost:53783/test.dart
2: /test.dart
3: E:\apps\dart-sdk\bin\dart
4:
5: E:\apps\dart-sdk\bin\dart.exe
The issue here is that I can't get the absolute path for test.dart file.
When I run it like this:
dart /path/to/project/bin/test.dart
I get what I need:
1: file:///E:/projects/dart/test/bin/test.dart
2: /E:/projects/dart/test/bin/test.dart
3: dart
4:
5: E:\apps\dart-sdk\bin\dart.exe
Is there a way how to get absolute path for a script that is currently running, regardless of a way how it was executed?
tl;dr: There's not a great way of doing what you want, but it's in the works.
The notion of a "path for a script that is currently running" is more complicated than it might sound at first blush. There are a number of ways that the Dart ecosystem invokes main(). Off the top of my head, here are a few:
Manually running the file with dart.
Running the file in an isolate.
Compiling the file to a snapshot and either manually running it or running it in an isolate.
Automatically compiling the file to a cached executable and running that either in a subprocess or in an isolate.
Adding a wrapper script that imports the file and invokes main(), and either running that in a subprocess or in an isolate.
Serving the file over HTTP, and running it either in a subprocess or in an isolate.
In some of these cases, the "script that is running" is actually a wrapper, not the original file you authored. In others, it's a snapshot that may have no inherent knowledge of the file from which it was created. In others, the file has been modified by transformers and the actual Dart code that's running isn't on disk at all.
I suspect what you're actually looking for isn't the executable URL itself, but the location of your package's files. We're working on a collection of APIs that will make this possible, as well as a resource package that will provide a nice API for dealing with your packages' resources.
Until that lands, there is a dart:mirrors hack you can use. If you give one of your library files an explicit library tag, say library my.cool.package, you can write:
var libPath = currentMirrorSystem().findLibrary(#my.cool.package).uri.path;
This will give you the path to your library, which you can use to figure out where your package's lib/ files live.
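Put together, a minimal hedged sketch of that hack; the library name my.cool.package is a placeholder and would normally be the tag of one of your lib/ files:
library my.cool.package;

import 'dart:mirrors';

void main() {
  // findLibrary looks the library up by its tag; uri.path is the path of the
  // library's source file, from which the package's lib/ location follows.
  var libPath = currentMirrorSystem().findLibrary(#my.cool.package).uri.path;
  print(libPath);
}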
If you want a reliable way to access the current file while running in pub you'll need to use mirrors.
Here's a sample usage with dartdoc tests - https://github.com/dart-lang/dartdoc/blob/41a5e4d3f6e0084a9bc2af80546da331789f410d/test/compare_output_test.dart#L17
import 'dart:mirrors';
Uri get _currentFileUri =>
(reflect(main) as ClosureMirror).function.location.sourceUri;
void main() { ... }
Not particularly pretty, but it's easy to just put into a util file.