CircleCI: python -t flag when running tests does not work

I have this run step in my circle.yaml file with no checkout or working directory set:
- run:
    name: Running dataloader tests
    command: venv/bin/python3 -m unittest discover -t dataloader tests
The problem is that the top-level directory given by the -t flag does not seem to take effect: I get ModuleNotFoundError when the tests try to import an assertions folder inside the dataloader package.
My tree:
├── dataloader
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── __pycache__
│   ├── assertions
But this works:
version: 2
defaults: &defaults
  docker:
    - image: circleci/python:3.6
jobs:
  dataloader_tests:
    working_directory: ~/dsys-2uid/dataloader
    steps:
      - checkout:
          path: ~/dsys-2uid
      ...
      - run:
          name: Running dataloader tests
          command: venv/bin/python3 -m unittest discover -t ~/app/dataloader tests
Any idea as to what might be going on?
Why doesn't the first one work with just using the -t flag?
What does working directory and checkout with a path actually do? I don't even know why my solution works.

The exact path to the tests folder from the top has to be specified for discovery to work, for example: python -m unittest discover src/main/python/tests. That must be why it's working in the second case.
It's most likely a bug with unittest discovery: discovery works when you explicitly specify a namespace package as the target, but it does not recurse into any namespace packages inside that package. So when you simply run python3 -m unittest discover, it does not descend into the namespace packages (essentially plain folders without __init__.py) in the current working directory.
Some PRs are underway (for example, for issue 35617) to fix this, but they have yet to be released.
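To see the fix in action, here is a small self-contained sketch (the project layout and names are invented for the demo, not taken from the question): it builds a throwaway project whose test folder is a regular package, then points discovery at the tests folder explicitly with -s and sets the top-level directory with -t.

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway project: root/dataloader/tests/test_sample.py,
# with __init__.py files so discovery can import the tests as a
# regular (non-namespace) package.
root = tempfile.mkdtemp()
tests_dir = os.path.join(root, "dataloader", "tests")
os.makedirs(tests_dir)
for d in (os.path.join(root, "dataloader"), tests_dir):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(tests_dir, "test_sample.py"), "w") as f:
    f.write(
        "import unittest\n"
        "class Sample(unittest.TestCase):\n"
        "    def test_ok(self):\n"
        "        self.assertTrue(True)\n"
    )

# Point discovery at the tests folder explicitly (-s) and set the
# top-level directory (-t) to the project root.
result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover",
     "-s", os.path.join("dataloader", "tests"), "-t", "."],
    cwd=root, capture_output=True, text=True,
)
print(result.stderr)
```

With both flags given relative to the project root, discovery finds and runs the one test; if the folders were namespace packages (no __init__.py), behaviour varies across Python versions, which is exactly the pitfall described above.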

checkout = Special step used to check out source code to the configured path (defaults to the working_directory). The reason this is a special step is that it is more of a helper function designed to make checking out code easy for you. If you need to use git over HTTPS you should not use this step, as it configures git to check out over SSH.
working_directory = In which directory to run the steps. Default: ~/project (where project is a literal string, not the name of your specific project). Processes run during the job can use the $CIRCLE_WORKING_DIRECTORY environment variable to refer to this directory. Note: Paths written in your YAML configuration file will not be expanded; if your store_test_results.path is $CIRCLE_WORKING_DIRECTORY/tests, then CircleCI will attempt to store the test subdirectory of the directory literally named $CIRCLE_WORKING_DIRECTORY, dollar sign $ and all.
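For illustration, here is roughly how the two settings combine for a layout like the one in the question (paths mirror the question's config; treat this as a sketch, not a verified pipeline):

```yaml
version: 2
jobs:
  dataloader_tests:
    docker:
      - image: circleci/python:3.6
    # All steps run from this directory...
    working_directory: ~/dsys-2uid/dataloader
    steps:
      # ...but the repository is checked out one level up,
      # so ~/dsys-2uid/dataloader is the real project folder
      # and relative paths like "tests" resolve inside it.
      - checkout:
          path: ~/dsys-2uid
      - run:
          name: Running dataloader tests
          command: venv/bin/python3 -m unittest discover tests
```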

Related

Serverless - Service configuration is expected to be placed in a root of a service (working directory)

I have this warning on GitHub Action:
Serverless: Deprecation warning: "Service configuration is expected to be placed in a root of a service (working directory). All paths, function handlers in a configuration are resolved against service directory. Starting from next major Serverless will no longer permit configurations nested in sub directories."
Does it mean I have to put serverless.yml (Service configuration) in the working directory?
If yes, which one is the working directory?
.github/workflows/deploy.yml:
service: myservice
jobs:
  deploy:
    # other steps here #
    - name: Serverless
      uses: serverless/github-action@master
      with:
        args: deploy --config ./src/MyStuff/serverless.yml
I store the serverless.yml in that path because it is related to Stuff.
I want to use multiple serverless.yml.
For AnotherStuff I will create src/AnotherStuff/serverless.yml.
So, what is the error, and the right way to do it?
[edit 21/02/2022]
I'm using the following workaround.
In GitHub Actions I have this job step in my build:
- name: Serverless preparation
  run: |
    # --config wants the serverless files in the root, so I move them there
    echo move configuration file to the root folder
    mv ./serverless/serverless.fsharp.yml ./serverless.fsharp.yml
Essentially, they want the file in the root folder... I put the file in the root folder.

How do I run a python file in Atom? Conda env?

I don't quite know what to do. I use VSCode, Jupyter Notebook, and a conda env. I just downloaded Atom and it keeps saying there is no kernel for grammar Python. I have a similar problem with the conda command in Terminal, where it isn't recognized until I run:
export PATH=/Users/edgar/anaconda3/bin:$PATH
How do I make Atom run my Python code? Thank you very much.
To set up Atom as a Python IDE you need packages like:
Community Packages (14) /home/simone/.atom/packages
├── Hydrogen#2.14.1
├── atom-ide-ui#0.13.0
├── autocomplete-python#1.16.0
├── hydrogen-python#0.0.8
├── ide-python#1.5.0
├── intentions#1.1.5
├── linter#2.3.1 (disabled)
├── linter-flake8#2.4.0
├── linter-ui-default#1.8.1
└── python-autopep8#0.1.3
and to run Atom in a conda/pyenv environment you just need to:
$ cd [path to project]
$ conda activate [env]
$ atom .
so that Atom will use that Python env to run the scripts.
The easiest way is to install the Script package. Then open the Python script you want to run and go to the Packages menu in the menu bar; under it you should see a Script entry with an option to run the script. Select it and your Python file should run. You can also press F5, which runs the file as well.
This assumes you have the language-python package installed in Atom. If you don't, you can get it from here.

yarn workspaces and docker

I am trying to use yarn workspaces and then put my application into a Docker
image.
The folder structure looks like this:
root
├── Dockerfile
├── node_modules/
│   └── libA --> ../libA
├── libA/
│   └── ...
└── app/
    └── ...
Unfortunately Docker doesn't support symbolic links, so it is not possible to copy the node_modules folder in the root directory into a Docker image, even if the Dockerfile is in the root, as in my case.
One thing I could do would be to exclude the symlinks with .dockerignore and then copy the real directory to the image.
Another idea - which I would prefer - would be to have a tool that replaces the symlinks with the actual contents of the symlink. Do you know if there is such a tool (preferably a Javascript package)?
Thanks
Yarn is used for dependency management, and should be configured to run within the Docker container to install the necessary dependencies, rather than copying them from your local machine.
The major benefit of Docker is that it allows you to recreate your development environment without worrying about the machine that it is running on - the same thing applies to Yarn, by running yarn install it installs the right versions for the relevant architecture of the machine your Docker image is built upon.
In your Dockerfile include the following after configuring your work directory:
RUN yarn install
Then you should be all sorted!
Another thing you should do is include the node_modules directory in your .gitignore and .dockerignore files, so it is never included when distributing your code.
TL;DR: Don't copy node_modules directory from local machine, include RUN yarn install in Dockerfile
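As a sketch of that advice, a Dockerfile might look roughly like this (the base image, file names, and start command are assumptions based on the structure in the question, not a verified setup):

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Copy only the manifests first, so the install layer is cached
# until a dependency actually changes.
COPY package.json yarn.lock ./
COPY libA/package.json ./libA/
COPY app/package.json ./app/
RUN yarn install --frozen-lockfile

# Now copy the sources; the workspace symlinks in node_modules are
# recreated by yarn inside the image, not copied from the host.
COPY . .
CMD ["node", "app/index.js"]
```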

Project structure for golang, docker, gradle

I am just starting with Go (golang) and want to get a new project folder structure set up for a project that will be built with Gradle and deployed to a Docker image. I'm struggling to determine what this project structure might look like, primarily because of the GOPATH structure and the fact that the Go language tooling seems to be antithetical to using Gradle or to configuring a project that can be cloned (Git).
The project will eventually contain various server-side code written in Go, client side code written in HTML and JavaScript, so I need a project structure that works well for Gradle to build and package all of these kinds of pieces.
Does anyone have a good working structure and tooling recommendations for this?
When I started with Go, I fiddled with a rather wide variety of build tools, from maven to gulp.
It turned out that, at least for me, they were doing more harm than good, so I started to use Go's seemingly unimposing but really well-thought-out features. One of them is go generate. Add simple shell scripts, or occasionally Makefiles, for automation.
Sample project
I have put together a sample project to make this more clear
/Users/you/go/src/bitbucket.org/you/hello/
├── Dockerfile
├── Makefile
├── _templates
│   └── main.html
└── main.go
main.go
This is a simple web server which serves "Hello, World!" using a template which is embedded into the binary using the excellent go.rice tool:
//go:generate rice embed-go
package main

import (
	"html/template"
	"log"
	"net/http"

	rice "github.com/GeertJohan/go.rice"
)

func main() {
	templateBox, err := rice.FindBox("_templates")
	if err != nil {
		log.Fatal(err)
	}

	// get file contents as string
	templateString, err := templateBox.String("main.html")
	if err != nil {
		log.Fatal(err)
	}

	// parse and execute the template
	tmplMessage, err := template.New("message").Parse(templateString)
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if err := tmplMessage.Execute(w, map[string]string{"Greeting": "Hello, world!"}); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
	})

	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
Note the line
//go:generate rice embed-go
When you call go generate, your source files are scanned for such lines and the corresponding commands are executed. In this case, a file called rice-box.go will be generated and your directory will look like this:
/Users/you/go/src/bitbucket.org/you/hello/
├── Dockerfile
├── Makefile
├── _templates
│   └── main.html
├── main.go
└── rice-box.go
You could call webpack in a //go:generate directive, for example, to bundle your assets, and use another generate step to create a rice-box.go from the result. That way, all your assets would be embedded in your binary and deployment becomes a breeze.
Dockerfile
I have used a rather simple Dockerfile for this example:
FROM alpine:latest
MAINTAINER You <you@example.com>
COPY hello /usr/bin
EXPOSE 8080
CMD ["/usr/bin/hello"]
However, this brings us to a problem: we cannot use go:generate to produce the Docker image, because at the time we would call go generate, the new binary is not built yet. This would make us do ugly things like
go generate && go build && go generate
leading to the Docker image being built twice, and whatnot. So we need a different solution.
Solution A: A shell script
We could of course come up with something like:
#!/bin/bash
# Checks for existence omitted for brevity
GO=$(which go)
DOCKER=$(which docker)
$GO generate
$GO test
$GO build
$DOCKER build -t you/hello .
However, this comes with a problem: you will always do the whole sequence using the shell script. Even when you just want to run the tests, you would end up building the docker image. Over time, this adds up. In such situations I tend to use
Solution B: a Makefile
A Makefile is a configuration file for GNU make
CC = $(shell which go 2>/dev/null)
DOCKER = $(shell which docker 2>/dev/null)

ifeq ($(CC),)
$(error "go is not in your system PATH")
else
$(info "go found")
endif

ifeq ($(DOCKER),)
$(error "docker is not in your system PATH")
else
$(info "docker found")
endif

.PHONY: clean generate tests docker all

all: clean generate tests hello docker

clean:
	$(RM) hello rice-box.go cover.out

generate:
	$(CC) generate

tests: generate
	$(CC) test -coverprofile=cover.out

hello: tests
	$(CC) build

docker: hello
	$(DOCKER) build -t sosample/hello .
A full explanation is beyond the scope of this answer, but the gist is this: when you call make, the all target is built, so files from the old build are removed (clean), a new rice-box.go is generated (generate), and so on. But if you only want to run the tests, calling make tests would only execute the generate and tests targets.
You can take a look at my approach to structuring your project: https://github.com/alehano/gobootstrap
It's a web framework.

ejabberd how to compile new module

Here I found the code:
erlc -I ~/ejabberd-2.1.13/lib/ejabberd-2.1.13/include -pa ~/ejabberd-2.1.13/lib/ejabberd-2.1.13/ebin mod_my.erl
But it did not work.
Here are the steps to add your custom module to ejabberd:
Put your module into the ejabberd/src folder.
Go to the ejabberd directory in a terminal and run $ sudo make
It will show you that your module is compiled. Now run $ sudo make install
Add your module to the config file at /etc/ejabberd/ejabberd.yml
Restart ejabberd and your custom module will be running.
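For the configuration step, enabling a module means listing it under modules in ejabberd.yml. A minimal sketch, reusing the mod_my name from the question (the empty option map is a placeholder for your module's options):

```yaml
# /etc/ejabberd/ejabberd.yml
modules:
  mod_my: {}
```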
These are the instructions based on the ejabberd recommendation:
1) Create the folder structure shown below (refer to any module from
https://github.com/processone/ejabberd-contrib).
sources
├── conf
│   └── modulename.yml
├── src
│   └── modulename.erl
├── README.txt
├── COPYING
└── modulename.spec
2) Add your module folder structure to the ejabberd user home directory (check ejabberdctl.cfg for the CONTRIB_MODULES_PATH param).
3) Run ejabberdctl modules_available; it will list your module.
4) Run ejabberdctl module_install module_name.
For Reference https://docs.ejabberd.im/developer/extending-ejabberd/modules/
Just drop the module into ejabberd's src/ folder, then run make. Nothing special is needed to compile it.
