Recently, I've been facing problems related to mismatches between the actual image offsets in my QSPI flash and the offsets used by U-Boot via boot.scr. I saw that the offsets used in boot.scr can be updated through the petalinux-config menu (u-boot Configuration → u-boot script configuration → QSPI/OSPI image offsets).
However, even after I update those offsets, the boot.scr file doesn't get updated. So, how can I trigger the recreation of U-Boot's boot.scr?
I've tried clearing the U-Boot build by calling petalinux-build -c u-boot -x distclean, but it didn't work. I don't want to rebuild the whole project.
Thanks in advance.
Changing the image offset values as you described will trigger the boot.scr file to update the next time you run petalinux-build.
You can also modify the boot.scr file directly.
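For reference, if you do edit it directly: boot.scr is just a compiled U-Boot command script. A minimal sketch of the kind of commands it contains (the offsets, lengths, and load address below are hypothetical, not your project's values):

```
# Hypothetical QSPI boot commands; real values come from petalinux-config
sf probe 0 0 0
# read the kernel image from flash offset 0xF00000 into RAM at 0x10000000
sf read 0x10000000 0xF00000 0x1D00000
bootm 0x10000000
```

The compiled boot.scr wraps a script like this in a U-Boot image header, so after editing the source you need to repackage it (e.g. with mkimage -T script) rather than editing the .scr binary in place.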
I'm currently refactoring old Packer code, switching from JSON config files with a bunch of shell scripts to HCL files that use its features (like default variable values, expressions, and so on).
With my old code, I initialized most of the internal variables inside the shell script, and I also used it to display a message containing all those variables just before the Packer build itself.
Now I'm migrating to HCL (which I think will be easier to maintain than the previous shell scripts), but I miss being able to display a message with the user-defined or computed variables/locals at runtime.
I'm currently trying to do that with a shell or shell-local provisioner, but even if I succeed, the message will still appear in the middle or at the end of the build, not before it (in my case, booting the Debian ISO and installing the OS still happen first).
Does anyone have an idea how to do this?
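For context, a shell-local provisioner of the kind described above looks like this in HCL (the variable, local, and source names are illustrative only); as noted, it runs on the machine executing Packer, but only once the build reaches the provisioning stage:

```hcl
# Illustrative only: variable, local, and source names are made up.
variable "base_name" {
  type    = string
  default = "debian-12"
}

locals {
  image_name = "${var.base_name}-${formatdate("YYYYMMDD", timestamp())}"
}

build {
  sources = ["source.qemu.debian"]

  # Runs locally, but only after the boot/install stages have completed.
  provisioner "shell-local" {
    inline = ["echo 'Building ${local.image_name}'"]
  }
}
```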
I'm having trouble getting consistent benefit from ccache in my Jenkins pipeline builds. I'm setting CCACHE_BASEDIR to the parent directory of my current build directory (this works out to something like /opt/jenkins/workspace). Given this basedir, I would expect all PR/branch builds that share this common parent to be able to find hits in the cache, but alas they do not. I do see cache hits for subsequent builds in a given directory (if I manually rebuild a particular PR, for example), which implies that CCACHE_BASEDIR is not working as I would expect.
To further diagnose, I've tried setting CCACHE_LOGFILE and although that file is produced by the build, it is effectively empty (it contains only two lines indicating the version of ccache).
Can anyone suggest specific settings or techniques that have worked to get maximum benefit from ccache in Jenkins pipelines, or other things to try to diagnose the problem? What might cause the empty ccache log file?
I'm running ccache 3.3.4.
The solution to the first part of the question is probably to set hash_dir = false (CCACHE_NOHASHDIR=1 if using environment variables) or to set -fdebug-prefix-map=old=new to relocate debug info to a common prefix (e.g. -fdebug-prefix-map=$PWD=.). More details can be found in the "Compiling in different directories" section of the ccache manual.
Regarding CCACHE_LOGFILE: I've never heard about that problem before (I'm the ccache maintainer, BTW), but if you set CCACHE_LOGFILE to a relative file path, try setting it to an absolute path instead.
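Putting those suggestions together, a sketch of the environment a Jenkins build step might set (the basedir path comes from the question; option names are from the ccache manual):

```shell
# Sketch of ccache settings for sharing a cache across build directories.
export CCACHE_BASEDIR=/opt/jenkins/workspace   # rewrite absolute paths under this prefix
export CCACHE_NOHASHDIR=1                      # hash_dir = false: don't hash the cwd into results
export CCACHE_LOGFILE=/tmp/ccache.log          # use an absolute path for the log

# Relocate debug info to a common prefix so objects are identical across dirs:
CFLAGS="-g -fdebug-prefix-map=$PWD=."
echo "$CFLAGS"
```

With these set, two checkouts of the same sources under /opt/jenkins/workspace should produce cache hits for each other, at the cost of debug paths being recorded relative to the build directory.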
The goal of this Dockerfile is to provide a thrift-compiler Docker image.
I was just wondering: why does this image need to install Golang?
It appears to download the Golang binary package but only copies over gofmt. Looking at https://github.com/apache/thrift/blob/19baeefd8c38d62085891d7956349601f79448b3/compiler/cpp/src/thrift/generate/t_go_generator.cc, it seems that at one point they were running gofmt on the generated Go code.
The comment for that part of code links to https://issues.apache.org/jira/browse/THRIFT-3893 which references pull request https://github.com/apache/thrift/pull/1061 where the feature was actually removed.
The specific commit (https://github.com/apache/thrift/commit/2007783e874d524a46b818598a45078448ecc53e) appears to be in 0.10 but not 0.9. So, along with disabling gofmt, they probably either forgot to remove it from the image or decided it was worth leaving, since the feature could be fixed and re-enabled at a later date.
It might be worth opening an issue to ask the Thrift team about it and whether it can be removed.
We are using TFS 2010 Build to deliver libraries to a fixed location (\\server\product-R0\latest).
Other team projects reference the library from this location.
In my build process I check whether the build and unit tests passed; if so, I:
Transform web/app.config
Delete the latest folder using a "DeleteDirectory" activity
Create the latest folder using a "CreateDirectory" activity
Copy the binaries in the folder using "CopyDirectory" activity
I delete the folder first because, if we rename an assembly, the old one wouldn't otherwise be deleted.
The issue is random and happens about 40% of the time:
TF270002 : An error occurred copying files from
'D:\Builds\1\FooTeam\BarService\Binaries' to
'\\nas\Builds\BarService-R0\Latest'.
Details : Access to the path
'\\nas\Builds\BarService-R0\Latest\SomeFile.dll'
is denied.
If you launch the build several times, it eventually works.
I've tried the usual dumb idea of putting sleeps between steps to see what happens, but it doesn't solve the problem; it just seems to reduce the probability of it happening.
It's as if TFS tries to copy while the directory is still being deleted; sometimes it hangs on the directory-creation step.
Anyone? Thank you!
The most elegant solution is to create a link instead of copying, something like
mklink /J D:\Drops\MyBuild_LatestGood D:\Drops\MyBuild_2014-06-13
Pros: no copy involved, same ACLs.
Caveats: this command only works locally, when the drop share is located on the build server. There are options in the case of a NAS as well, as long as you are allowed to execute remote commands (e.g. via SSH).
Another option is to create a network share on the desired folder, even if the disk is remote, as long as it resides on a Windows server.
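A sketch of that approach, run once on the file server hosting the folder (share name, path, and permissions are hypothetical; adapt to your drop layout):

```
rem Expose the current "latest good" folder as a read-only share
net share LatestGood=D:\Drops\MyBuild_LatestGood /GRANT:Everyone,READ
```

Consumers then reference \\server\LatestGood, and repointing the share (or the junction it targets) replaces the delete/copy dance entirely, which avoids the race you are hitting.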
Background
I am developing an iOS app that connects to a server. We have a team of developers who run their own servers with unique addresses for debugging. Our source-control rule is to only check in the "production URL".
In Android we have a solution that works really well. This solution won't work in iOS.
What I've Tried
Setting a "Command line argument" or "Environment variable" in the build scheme. The problem is that those are put into the *.xcproject file, which gets checked in and causes merge conflicts. If they could be set at the user level it would be fine, because we .gitignore xcuserdata.
I also tried referencing a "MyConfig.h" file that does not get checked in. But if it does not exist, the project won't build.
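(One way around that missing-header build failure, sketched here as a Run Script build phase that generates the header with a production default whenever a developer hasn't supplied their own git-ignored copy; the file name, macro, and URL are hypothetical:)

```shell
# Hypothetical Xcode "Run Script" build phase.
# Creates MyConfig.h with the production URL if the developer has not
# provided a local, git-ignored copy of their own.
CONFIG_FILE="${SRCROOT:-.}/MyConfig.h"
if [ ! -f "$CONFIG_FILE" ]; then
  cat > "$CONFIG_FILE" <<'EOF'
// Auto-generated default; replace locally to point at a debug server.
#define MY_SERVER_URL @"http://example.com/api/"
EOF
fi
```

This keeps the production value out of developers' way while guaranteeing the project always builds, though it still relies on everyone remembering not to check the generated file in.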
What I want to do
If a developer wants to point at a different server, they would set an environment variable on their Mac, something like export MY_SERVER="http://domain.com/api/". In the project file we would add an environment or command line argument that is basically MY_SERVER=$(MY_SERVER).
Unfortunately, I can't figure out how to get Xcode to resolve the variable on my OS X machine. It seems environment variables are resolved on the device only; command line arguments seem to be taken literally.
Research I've done
http://qualitycoding.org/production-url/ - does not really address the real issue
http://nshipster.com/launch-arguments-and-environment-variables/
Google, Apple's developer forums, and Stack Overflow posts.
How do you do this in your projects?
Is the only solution to use a backdoor, or some file folks change and just try not to accidentally check in?
As an update, I found a solution that solves the problem for me. I am using https://github.com/xslim/mobileDeviceManager and a script that is checked in. The developer can create their custom configuration and copy it to the documents directory. Now we keep the production configuration checked in and have a runtime check for our custom configuration file.
Here is an example of the tool's usage:
$ mobileDeviceManager -o copy -app "com.domain.MyApp" -from ~/.myAppConfig/app_override.plist
This way the developer can keep their custom configuration in their home directory (out of source control) without fear of an accidental check-in. We already use a process like this for other desktop and Android apps, so it fits our workflow really well. It also has the added benefit that if a tester's device is failing, we can point it at a custom debug server with extra logging to simplify debugging, without needing to deploy a new binary to that device during internal testing.
I hope this can help someone else.