I don't quite know what to do. I use VSCode, Jupyter Notebook, and a conda env. I just downloaded Atom and it keeps saying "no kernel for grammar Python". I have a similar problem when I try using the conda command in Terminal: it doesn't recognize the conda command until I run:
export PATH=/Users/edgar/anaconda3/bin:$PATH
How do I make Atom run my Python code? Thank you very much.
To set up Atom as a Python IDE you need packages like:
Community Packages (14) /home/simone/.atom/packages
├── Hydrogen@2.14.1
├── atom-ide-ui@0.13.0
├── autocomplete-python@1.16.0
├── hydrogen-python@0.0.8
├── ide-python@1.5.0
├── intentions@1.1.5
├── linter@2.3.1 (disabled)
├── linter-flake8@2.4.0
├── linter-ui-default@1.8.1
└── python-autopep8@0.1.3
To run Atom in a conda / pyenv environment you just need to:
$ cd [path to project]
$ conda activate [env]
$ atom .
Atom will then use that Python environment to run your scripts.
The easiest way is to install the script package. Then open the Python script you want to run and go to the Packages menu in the menu bar. Under this menu you should see an entry named Script; one of its options runs the Python script. Select it and your Python file should run. You can also press F5, which will also run your file.
This assumes you have the package "language-python" installed in Atom. If you don't, you can get it from here.
I'm doing an internship (= yes I'm a newbie). My supervisor told me to create a conda environment. She passed me a log file containing many packages.
A quick qwant.com search shows me how to create envs via the
conda env create --file env_file.yaml
The file I was given is, however, NOT a YAML file; it is structured like so:
# packages in environment at /home/supervisors_name/.conda/envs/pancancer:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
bedtools 2.29.2 hc088bd4_0 bioconda
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
The file contains 41 packages = 44 lines including comments above. For simplicity I'm showing only the first 7.
Apart from adding the env name (see 2. below), is there a way to use the file as it is to generate an environment with the packages?
I ran the cmd using
conda env create --file supervisors.log.txt
SpecNotFound: Environment with requirements.txt file needs a name
Where in the file should I put the name?
Alright, so it seems they gave you the output of conda list rather than the .yml file produced by conda env export > myenv.yml. Therefore you have two solutions:
You ask for the proper file and then install the env with conda's built-in pipeline.
If you do not have access to the proper file, you could do one of the following:
i) Parse it with Python into a proper .yml file and then follow the conda procedure.
ii) Write a bash script that installs the packages listed in the file she gave you.
This is how I would proceed, personally :)
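For option (i), here is a minimal sketch of such a parser. It assumes the file follows the four-column conda list layout shown in the question (name, version, build, optional channel); the env name and the exact YAML layout are assumptions, not something from the original file.

```python
# Sketch: turn `conda list` output into a minimal environment.yml.
# Assumes the four-column layout shown in the question:
#   name  version  build  [channel]

def conda_list_to_yaml(text, env_name="pancancer"):
    """Return environment.yml content built from `conda list` output."""
    channels, deps = ["defaults"], []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blank lines and the comment header
        parts = line.split()
        name, version = parts[0], parts[1]
        # A fourth column, when present, names a non-default channel.
        if len(parts) >= 4 and parts[3] not in channels:
            channels.append(parts[3])
        deps.append(f"{name}={version}")
    out = [f"name: {env_name}", "channels:"]
    out += [f"  - {c}" for c in channels]
    out.append("dependencies:")
    out += [f"  - {d}" for d in deps]
    return "\n".join(out) + "\n"


sample = """\
# packages in environment at /home/supervisors_name/.conda/envs/pancancer:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
bedtools 2.29.2 hc088bd4_0 bioconda
blas 1.0 mkl
"""
print(conda_list_to_yaml(sample))
```

Write the result to a .yml file and feed it to conda env create --file; pinning only name=version (and dropping the build string) keeps the file more portable across platforms.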
Because there is no other SO post on this error, for people of the future: I got this error just because I named my file conda_environment.txt instead of conda_environment.yml. It looks like the .yml extension is mandatory.
Is it possible to tell pipenv where the venv is located? Perhaps there's something you can put in the pipfile, or something for the .env file?
I fairly frequently have to recreate my venv because pipenv seemingly loses track of where it is.
For example, I started a project using Pycharm to configure the file system and create my pipenv interpreter. It created the venv in ~/.local/share/virtualenvs/my-project-ZbEWMGNA and it was able to keep track of where that interpreter was located.
Switching to a terminal window & running pipenv commands then resulted in;
Warning: No virtualenv has been created for this project yet! Consider running pipenv install first to automatically generate one for you or see pipenv install --help for further instructions.
At which point I ran pipenv install from the terminal & pointed pycharm at that venv, so the path would become ~/apps/my-project-ZbEWMGNA (which sits alongside the project files ~/apps/my-project)
Now I've got venvs in both paths and pipenv still can't find them.
mwalker@Mac my-project % pipenv --where
/Users/mwalker/apps/my-project
mwalker@Mac my-project % pipenv --venv
No virtualenv has been created for this project yet!
Aborted!
mwalker@Mac my-project % ls ~/apps
my-project
my-project-ZbEWMGNA
mwalker@Mac my-project % ls ~/.local/share/virtualenvs
my-project-ZbEWMGNA
Yes, it is possible by setting environment variables. You can set a path for virtual environments via WORKON_HOME, or have the virtual environment created in the project directory with PIPENV_VENV_IN_PROJECT.
Pipenv automatically honors the WORKON_HOME environment variable, if you have it set — so you can tell pipenv to store your virtual environments wherever you want
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#custom-virtual-environment-location
or
PIPENV_VENV_IN_PROJECT
If set, creates .venv in your project directory.
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#pipenv.environments.PIPENV_VENV_IN_PROJECT
In my experience, PyCharm will use an existing venv created by Pipenv. Otherwise it will create the venv in the directory that PyCharm is configured to use.
I have this run step in my circle.yaml file with no checkout or working directory set:
- run:
name: Running dataloader tests
command: venv/bin/python3 -m unittest discover -t dataloader tests
The problem with this is that the working directory from the -t flag does not get set. I get ModuleNotFoundError when the tests try to find an assertions folder inside the dataloader package.
My tree:
├── dataloader
│ ├── Dockerfile
│ ├── Makefile
│ ├── README.md
│ ├── __pycache__
│ ├── assertions
But this works:
version: 2
defaults: &defaults
docker:
- image: circleci/python:3.6
jobs:
dataloader_tests:
working_directory: ~/dsys-2uid/dataloader
steps:
- checkout:
path: ~/dsys-2uid
...
- run:
name: Running dataloader tests
command: venv/bin/python3 -m unittest discover -t ~/app/dataloader tests
Any idea as to what might be going on?
Why doesn't the first one work with just using the -t flag?
What does working directory and checkout with a path actually do? I don't even know why my solution works.
The exact path to the tests folder from the top has to be specified for discovery to work, for example: python -m unittest discover src/main/python/tests. That must be why it's working in the second case.
It's most likely a bug in unittest discovery: discovery works when you explicitly specify a namespace package as the target, but it does not recurse into any namespace packages inside it. So when you simply run python3 -m unittest discover it doesn't go under all namespace packages (basically folders) in the cwd.
Some PRs are underway (for example issue35617) to fix this, but they are yet to be released.
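To see what the -t (top-level directory) flag controls, here is a small self-contained sketch; the throwaway project tree it builds is made up for illustration, and the call is the programmatic equivalent of python -m unittest discover -s tests -t <project root>:

```python
# Sketch: programmatic equivalent of `python -m unittest discover -s tests -t .`,
# run against a throwaway project tree (paths are made up for illustration).
import os
import tempfile
import unittest

root = tempfile.mkdtemp()
tests_dir = os.path.join(root, "tests")
os.makedirs(tests_dir)
# A regular package (with __init__.py) avoids the namespace-package caveat.
open(os.path.join(tests_dir, "__init__.py"), "w").close()
with open(os.path.join(tests_dir, "test_sample.py"), "w") as f:
    f.write(
        "import unittest\n"
        "class SampleTest(unittest.TestCase):\n"
        "    def test_ok(self):\n"
        "        self.assertTrue(True)\n"
    )

# -s tests  -> start_dir: where to look for test files
# -t root   -> top_level_dir: the import root; test modules are imported
#              relative to it (here as tests.test_sample)
suite = unittest.TestLoader().discover(start_dir=tests_dir, top_level_dir=root)
print(suite.countTestCases())  # prints 1
```

If top_level_dir does not match the directory your packages are imported relative to, the loader cannot import the test modules, which is one way to end up with the ModuleNotFoundError described in the question.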
checkout = Special step used to check out source code to the configured path (defaults to the working_directory). The reason this is a special step is because it is more of a helper function designed to make checking out code easy for you. If you require doing git over HTTPS you should not use this step as it configures git to checkout over ssh.
working_directory = In which directory to run the steps. Default: ~/project (where project is a literal string, not the name of your specific project). Processes run during the job can use the $CIRCLE_WORKING_DIRECTORY environment variable to refer to this directory. Note: Paths written in your YAML configuration file will not be expanded; if your store_test_results.path is $CIRCLE_WORKING_DIRECTORY/tests, then CircleCI will attempt to store the test subdirectory of the directory literally named $CIRCLE_WORKING_DIRECTORY, dollar sign $ and all.
Here is the code I found:
erlc -I ~/ejabberd-2.1.13/lib/ejabberd-2.1.13/include -pa ~/ejabberd-2.1.13/lib/ejabberd-2.1.13/ebin mod_my.erl
But it did not work.
Here are steps to add your custom module into ejabberd
Put your module into the ejabberd/src folder.
Go to the ejabberd directory in a terminal and run $ sudo make
It will show that your module is compiled. Now run $ sudo make install
Add your module to the config file at /etc/ejabberd/ejabberd.yml
Restart ejabberd and your custom module will be running.
These are the instructions based on the Ejabberd recommendation:
1) Form the folder structure like below (refer to any module from
https://github.com/processone/ejabberd-contrib).
sources
│
│───conf
│ └───modulename.yml
│───src
│ └───modulename.erl
│───README.txt
│───COPYING
│───modulename.spec
2) Add your module folder structure to ejabberd user home directory (check ejabberdctl.cfg for CONTRIB_MODULES_PATH param).
3) Type the command ejabberdctl modules_available; it will list your module.
4) Type the command ejabberdctl module_install module_name
For Reference https://docs.ejabberd.im/developer/extending-ejabberd/modules/
Just drop the module in ejabberd's src/ folder, then run "make". Nothing special is needed to compile it.
I am learning Erlang and I have a question about how to run and test Erlang applications.
There are a couple of ways to run and test Erlang programs:
We can run the Erlang shell and test our functions there.
We can compile some files with our Erlang code, then create a .app file, and then run the Erlang shell again and call application:start(AppName).
My question: can we make a binary executable file from Erlang code, like from C code? How can I run programs without the Erlang shell, so that I can run the program, pass it some command, and have it call Erlang functions for that command?
For example I have a module (test.erl) with three functions:
foo1() -> ...
foo2() -> ...
foo3() -> ...
Then I want to run the program in a terminal and pass an -a flag to call foo1, a -b flag for foo2, and so on.
Let me divide the answer into three parts:
1. Running Erlang Applications
Erlang source code (.erl files) is compiled to BEAM bytecode (.beam files) which runs on top of the Erlang virtual machine (BEAM), so there is no way to create a stand-alone binary that does not need the virtual machine. But there are ways of packing, building, porting, upgrading and running Erlang applications based on OTP, which is its formal platform.
A. Command-line flags
Imagine we developed an application with foo name, now we can start it with a set of flags just like this:
$ erl \
-pa path/to/foo \
-s foo \
-sname foo_node \
-setcookie foo_secret \
-noshell -noinput > /path/to/foo.log &
-pa adds the specified directory to the path
-s starts the foo application
-sname makes it distributed with a short name
-setcookie sets a cookie to provide a minimum level of security
-noshell starts Erlang without a shell
-noinput prevents reading any input from the shell
Then we can stop it with the following command:
$ erl \
-sname stop_foo_node \
-setcookie foo_secret \
-eval 'rpc:call(foo_node, foo, stop, []), init:stop()' \
-noshell -noinput > /path/to/foo.log &
-eval evaluates the given expressions
And also we can attach to the foo application shell with this command:
$ erl \
-sname debug_foo_node \
-setcookie foo_secret \
-remsh foo_node
-remsh opens a remote shell to the given node
We can put the above commands into a makefile or shell script to use them more simply.
Also, for finer-grained control over the start-up process of the system we can use a boot script file and specify it with the -boot flag. The boot file contains instructions on how to initiate the system and which modules and applications we depend on, and it also contains functions to restart, reboot and stop the system. The process of creating and using a boot script is well documented on the Erlang documentation website.
B. Release tool
The other way that automates and integrates most of the works for us is reltool which is a standard and fully featured release management tool. We can specify our application version, boot script, dependencies, and so on in reltool config file and create a portable release. This is a sample structure of an Erlang/OTP application compatible with reltool:
├── deps
│ └── ibrowse
├── ebin
│ ├── foo.app
│ ├── foo_app.beam
│ └── foo_sup.beam
├── rebar.config
├── rel
│ ├── files
│ ├── foo
│ └── reltool.config
└── src
├── foo_app.erl
├── foo.app.src
└── foo_sup.erl
We can use Rebar, an Erlang build tool, to make it even simpler to create Erlang applications and releases. There is a detailed tutorial on how to use Rebar for such a task.
2. Testing Erlang applications
There are two standard test frameworks for Erlang applications:
Eunit: It is a standard unit-testing framework of OTP which can test a function, a module, a process or even an application.
CommonTest: It is another standard test framework of OTP that provides structures for defining local or distributed test scenarios and manages running, logging and reporting the results.
It is a common practice to combine them together for both white-box and black-box testing. Also Rebar provides rebar eunit and rebar ct commands for automating their execution.
3. Passing command-line argument
Using init:get_argument/1 we can retrieve user-defined flags and decide what to do based on them, as follows:
$ erl -foo foo1 foo2 -bar bar1
1> init:get_argument(foo).
{ok,[["foo1","foo2"]]}
2> init:get_argument(bar).
{ok,[["bar1"]]}
No, you can't make a stand-alone binary. You can write a bash script or an escript to automatically run the startup / test code.
You should also check out EUnit, which can automate a lot of the hassle of running automated unit tests.