Contiki on STM32F4

I'm starting a development with contiki and STM32F4discovery.
I found a fork of contiki for STM32F4discovery from jensnielsen on github.
I've downloaded it and tried to build for the STM32F4discovery target using
make TARGET=stmf4discovery hello-world
The makefile compiles hello-world.c but then fails at the link step because it can't find the references to the Contiki core, sys, etc.
Building hello-world for the native target works fine.
What do I have to do to build for this target?
Best regards,
Pascal

This is a very narrow question about my personal code; you should probably have asked me directly rather than on Stack Overflow...
But anyway, the answer is that my port is incomplete and broken; you should go for another fork (perhaps https://github.com/ptasz3k/contiki-stm32f4discovery).

Related

"cannot find module providing for package" in go

I'm trying to install the official Go Docker client by importing
"github.com/docker/docker/client"
But I get the following error:
cannot load github.com/docker/distribution/reference: cannot find module providing package github.com/docker/distribution/reference
My Go version is 1.12 and my project is outside $GOPATH/src. My go.mod file looks like this:
module app

go 1.12

require (
    github.com/Microsoft/go-winio v0.4.14 // indirect
    github.com/docker/docker v1.13.1
    github.com/docker/go-connections v0.4.0 // indirect
    github.com/docker/go-units v0.4.0 // indirect
    golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 // indirect
)
I remember having similar problems 1.5-2 months ago.
My observations:
there are a lot of versions of the Docker API
it's hard to tell which version the "official" documentation describes
the API itself is flawed
the source code is quite easy to understand
I never found answers to "which version does the official documentation actually describe?" or "where is the actual Docker API repo?"
I gave up trying to solve it the "official" way and found it more practical to go "unofficial".
The "unofficial" way:
use docker/docker-ce (note that examples from the official documentation will not work without modification)
instead of the official documentation, just search the code in the docker-ce repo.
the best example of docker/client usage I found is components/engine/integration/internal/container/exec.go (note that it resides in internal and cannot be imported as a package)
I copied the code I needed into my own package and that solved my problem.
Maybe it's an "incorrect" solution, but it worked for me.
I suppose it may be more practical for you not to rely on the Docker documentation either.
Good luck!
According to the Docker documentation, you have to run go get github.com/docker/docker/client to download it. Once the library is retrieved, it should compile.

Getting started with Contiki 2.7

I am really new to Contiki OS. I downloaded the Contiki 2.7 zip from SourceForge. To understand how the system works, I first need to understand the make system that is used for building projects. In almost every directory inside Contiki there is a makefile, and the syntax used in those files is completely alien to me. My question is: what/where do I need to read about these makefiles in order to understand how to use them? Some links would be very useful. Please keep in mind that I am new to this.
Thank you in advance!
You can start with this link about the Contiki build system:
http://dak664.github.io/contiki-doxygen/a01670.html
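To make that concrete: a Contiki example project is usually just one C file plus a tiny project Makefile (roughly CONTIKI_PROJECT = hello-world, CONTIKI = ../.. and include $(CONTIKI)/Makefile.include, where that last line pulls in the whole build system described at the link above). As a rough sketch from memory, the bundled hello-world application looks approximately like this:

#include "contiki.h"
#include <stdio.h>

/* Declare the process and ask Contiki to start it automatically at boot. */
PROCESS(hello_world_process, "Hello world process");
AUTOSTART_PROCESSES(&hello_world_process);

/* The process body runs as a protothread between PROCESS_BEGIN and PROCESS_END. */
PROCESS_THREAD(hello_world_process, ev, data)
{
  PROCESS_BEGIN();

  printf("Hello, world\n");

  PROCESS_END();
}

Running make TARGET=native hello-world in such a directory builds it for the host, which is an easy way to watch what the included makefiles actually do before moving to a hardware target.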

Clang dynamic memory analyzer not referencing back to source code on Red Hat 6.3

We recently built the 3.3 release of clang/llvm using the Fedora 20 packaging process as a guide to unpacking, moving the different parts to the correct location and building the compiler tool chain. All seems to be working correctly except the dynamic memory analyzer is not referencing back to the source code. The same usage on the Fedora platform does reference back to the source code.
This is our first attempt to use the clang/llvm tool set. It is also the first question I have asked in this forum, which seems a bit different in its organization from the others I have participated in, so my apologies in advance if I have not figured out the nuances of posting a question here. It does seem odd that the main projects do not seem to have a way of asking questions.
We found a solution, though we do not quite know why we needed the extra environment setup. Compiling as follows:
PATH=/net/fas4045/home3/jq031c/llvm_sandbox/bin:$PATH make -j 16 DEPFILES= \
    CXX=clang++ CC=clang \
    CXXFLAGS="-fsanitize=memory -fsanitize-memory-track-origins -fno-omit-frame-pointer" \
    LDXFLAGS=-fsanitize=memory
Running as follows:
MSAN_SYMBOLIZER_PATH=/net/fas4045/home3/jq031c/llvm_sandbox/bin/llvm-symbolizer ./runtests.sh
We understand that we need to add the sanitizer option to the link flags, since we do a two-step build of compile followed by link. The discovery, after some searching, was the need to point to llvm-symbolizer with an environment variable, which none of the other dynamic analysis options seems to need.
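For anyone reproducing this, a minimal sketch to sanity-check the symbolizer setup (the file name msan_demo.c and the paths are made up for illustration): compile a tiny C program with the same sanitizer flags plus -g, run it with MSAN_SYMBOLIZER_PATH pointing at llvm-symbolizer, and the use-of-uninitialized-value report should reference the file and line of the bad read.

/* msan_demo.c - deliberately reads an uninitialized value so
 * MemorySanitizer has something to report.
 *
 * Build (mirroring the flags above):
 *   clang -g -O0 -fsanitize=memory -fsanitize-memory-track-origins \
 *         -fno-omit-frame-pointer msan_demo.c -o msan_demo
 * Run:
 *   MSAN_SYMBOLIZER_PATH=/path/to/llvm-symbolizer ./msan_demo
 */
#include <stdio.h>

int main(void)
{
  int values[4];           /* never initialized */
  if (values[2] > 0)       /* MSan reports a use-of-uninitialized-value here */
    printf("positive\n");
  return 0;
}

If the report shows raw addresses instead of file:line references, the symbolizer path (or a missing -g) is usually the culprit.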

HipHop for PHP, deploying apps

After Googling, I found a lot of HipHop documentation, but much of it was posted between 2011 and 2013.
Earlier this year a new version of HipHop was launched that even supports Drupal and includes a lot of improvements...
I've always used Zend Guard to deploy my commercial applications, but now I have started to seriously consider using HipHop in production, which brings me to the question:
Can we run an application using only the HHBC bytecode (without the .php source code)?
Here is the reference from my research:
https://github.com/facebook/hhvm/wiki/FAQ
The question may seem very obvious, but it is not so easy to find this answer in the project documentation.
Thanks in advance!
Well, yes and no.
HHVM has a so-called RepoAuthoritative mode, in which HHVM will no longer check whether the PHP files exist or how up-to-date they are; instead, it will retrieve the HHBC directly from its cache.
Theoretically, you can follow these steps:
pre-generate the HHBC for all your PHP files and insert that HHBC in HHVM's cache. This is the so-called pre-analysis phase (if you ever see it in HHVM documentation, this is what they mean by it)
turn on RepoAuthoritative mode (it's just 1 line in HHVM's config)
delete your PHP code
This way your PHP applications will run just fine without the source code being present. Doing a server restart won't change this since HHVM's bytecode cache lives on disk (it's implemented as an SQLite database).
However, it will be kind of a headache if you:
want to change something in your code. You would have to copy your code, make the change and repeat the pre-analysis phase.
want to upgrade HHVM to a newer version. HHVM uses its build ID as part of the cache key so, if you upgrade it, the bytecode cache becomes unreachable and, since you'll be running in RepoAuthoritative mode, your application will be reduced to a bunch of HTTP 404 errors. To fix this, you would have to repeat the pre-analysis phase as well.
Bottom line: no upside, big downside. There's just no point in doing it.
PS: I hope I answered your question. It's also possible that I misunderstood what you asked; if that's the case, please let me know in a comment.

Building cross-platform Delphi applications

I downloaded Lazarus, but have worked with Embarcadero Delphi IDE too. I have a question about building cross-platform Delphi applications.
How can I build them under a win32 environment? I read the wiki on the Lazarus site that explains how to do it, but I still do not understand it. Is it possible to build and compile applications under a win32 environment for Linux and MacOS? If it is possible, can someone explain step-by-step how to do it exactly?
EDIT:
I think now is also the time to mention the new XE2 version of the Delphi IDE :)
Thanks
What you're asking for already exists on the Lazarus wiki; you need to read these articles:
Multiplatform Programming Guide
Cross compiling
Cross compiling for Win32 under Linux
How to Write Portable Code (nice doc from Marco van de Voort)
Buildfaq
While cross-compiling to a non-Windows target is possible (and not that hard), getting used to FPC/Lazarus and cross-compiling in one first step is a bridge too far. This is because Linux is not a very homogeneous target, and dealing with this variation requires some understanding of how libraries and linking work on Linux. This defeats one-button downloadable cross-compile setups for "general" Linux. I know, such one-button things that work out of the box for everyone would be great, but it is just not going to happen (or only for very limited distribution-version combinations).
Cross-compiling with FPC is not extremely difficult or rocket science, but the amount of jargon and detail can flabbergast the uninitiated, and without background knowledge it is hard to diagnose problems that result from minor misconfigurations.
I recommend first familiarizing yourself with Lazarus/FPC, and only then making the cross-compilation leap (the already mentioned Buildfaq names some reasons).
Bottom line: install Lazarus on Windows and start porting your app. If that succeeds, start using a Linux install (or VM) to familiarize yourself with Linux and with Lazarus under it. You'll need a Linux install anyway for testing.
Only then start thinking about cross-compiling to speed up the process.
CodeTyphon is a powerful Lazarus/FPC one click easy installation package for cross platform native development. It already supports 4 CPU/OS hosts (Win32, Win64, Linux32, Linux64), and 16 CPU/OS targets (arm-Wince, arm-Linux, arm-Embedded, arm-gba, arm-nds, i386-Win32, i386-Linux, i386-FreeBSD, i386-Haiku, x86_64-Win64, x86_64-Linux, x86_64-FreeBSD, powerpc-Linux, powerpc64-Linux, sparc-Linux, sparc-Solaris). More are supported in Lazarus/FreePascal, but others are not yet integrated in CodeTyphon. Did I mention that it is free? One code to rule them all ;-)
The point is that you don't have to waste days for setting up your cross platform environment, since someone has already done the hard work for you.
