Kivy PyInstaller .spec mp4 type

I am working on compiling my Kivy app into a .exe and am currently changing the .spec file to include the necessary datas. When attaching .kv files, they are followed by 'DATA'. I've seen in the docs that .mp3 files are followed by 'sfx'. What is the type for .mp4? I haven't been able to find anything in the docs.
# -*- mode: python ; coding: utf-8 -*-
from kivy_deps import sdl2, glew

block_cipher = None

a = Analysis(
    ['preprocessing.py'],
    pathex=[],
    binaries=[],
    datas=[],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    win_no_prefer_redirects=False,
    win_private_assemblies=False,
    cipher=block_cipher,
    noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
a.datas += [('Code\\ScreenManager.kv', 'D:\\folder\\ScreenManager.kv', 'DATA'),
            ('Code\\Main.kv', 'D:\\folder\\Main.kv', 'DATA'),
            ('Code\\Begin.kv', 'D:\\folder\\Begin.kv', 'DATA'),
            ('Code\\my_video.mp4', 'D:\\folder\\my_video.mp4', '?????')]
...
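A note on the typecode (my understanding, not stated in the thread): PyInstaller's TOC typecodes ('DATA', 'BINARY', 'EXTENSION', ...) describe how a file is handled, not its media format, so an .mp4 is bundled as 'DATA' like any other data file; the 'sfx' seen alongside mp3 in the Kivy docs is likely a destination folder name rather than a typecode. A sketch of the entries under that assumption:

```python
# Sketch (paths taken from the question): the third field of a PyInstaller
# TOC entry is a typecode, and plain media files such as .mp4 use 'DATA'.
extra_datas = [
    ('Code\\ScreenManager.kv', 'D:\\folder\\ScreenManager.kv', 'DATA'),
    ('Code\\my_video.mp4', 'D:\\folder\\my_video.mp4', 'DATA'),
]
```

Under that reading, the '?????' entry in the spec above would simply become a 'DATA' entry.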

Related

Install Custom Dependency for KFP Op

I'm trying to setup a simple KubeFlow pipeline, and I'm having trouble packaging up dependencies in a way that works for KubeFlow.
The code simply downloads a config file and parses it, then passes back the parsed configuration.
However, in order to parse the config file, it needs to have access to another internal python package.
I have a .tar.gz archive of the package hosted on a bucket in the same project, and added the URL of the package as a dependency, but I get an error message saying tarfile.ReadError: not a gzip file.
I know the file is good, so it's some intermediate issue with hosting on a bucket or the way kubeflow installs dependencies.
Here is a minimal example:
from kfp import compiler
from kfp import dsl
from kfp.components import func_to_container_op
from google.protobuf import text_format
from google.cloud import storage
import os
import training_reader

# download_file is an internal helper (definition not shown)
def get_training_config(working_bucket: str,
                        working_directory: str,
                        config_file: str) -> training_reader.TrainEvalPipelineConfig:
    download_file(working_bucket, os.path.join(working_directory, config_file), "ssd.config")
    pipeline_config = training_reader.TrainEvalPipelineConfig()
    with open("ssd.config", 'r') as f:
        text_format.Merge(f.read(), pipeline_config)
    return pipeline_config

config_op_packages = ["https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz",
                      "google-cloud-storage",
                      "protobuf"]

training_config_op = func_to_container_op(get_training_config,
                                          base_image="tensorflow/tensorflow:1.15.2-py3",
                                          packages_to_install=config_op_packages)

def output_config(config: training_reader.TrainEvalPipelineConfig) -> None:
    print(config)

output_config_op = func_to_container_op(output_config)

@dsl.pipeline(
    name='Post Training Processing',
    description='Building the post-processing pipeline'
)
def ssd_postprocessing_pipeline(
        working_bucket: str,
        working_directory: str,
        config_file: str):
    config = training_config_op(working_bucket, working_directory, config_file)
    output_config_op(config.output)

pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
The https://storage.cloud.google.com/my_bucket/packages/training-reader-0.1.tar.gz URL requires authentication. Try to download it in Incognito mode and you'll see the login page instead of the file.
Changing the URL to https://storage.googleapis.com/my_bucket/packages/training-reader-0.1.tar.gz works for public objects, but your object is not public.
The only thing you can do (if you cannot make the package public) is to use the google.cloud.storage library or the gsutil program to download the file from the bucket, and then manually install it using subprocess.run([sys.executable, '-m', 'pip', 'install', ...])
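A sketch of that fallback (bucket and object names below are placeholders, not from the question): fetch the private sdist with the storage client, then shell out to pip.

```python
# Sketch: pip cannot authenticate against storage.cloud.google.com, so
# download the private sdist with the storage client, then install it.
# Bucket/object names are placeholders.
import subprocess
import sys

def local_sdist_path(blob_name):
    # /tmp is writable inside the component container
    return "/tmp/" + blob_name.split("/")[-1]

def install_private_package(bucket_name, blob_name):
    from google.cloud import storage  # imported lazily, as in KFP components
    path = local_sdist_path(blob_name)
    storage.Client().bucket(bucket_name).blob(blob_name).download_to_filename(path)
    subprocess.run([sys.executable, "-m", "pip", "install", path], check=True)
```

local_sdist_path is just a hypothetical helper to keep the path logic separate.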
Where are you downloading the data from?
What's the purpose of
pipeline_config = training_reader.TrainEvalPipelineConfig()
with open("ssd.config", 'r') as f:
    text_format.Merge(f.read(), pipeline_config)
return pipeline_config
Why not just do the following:
def get_training_config(
    working_bucket: str,
    working_directory: str,
    config_file: str,
    output_config_path: OutputFile('TrainEvalPipelineConfig'),
):
    download_file(working_bucket, os.path.join(working_directory, config_file), output_config_path)
the way kubeflow installs dependencies.
Export your component to a loadable component.yaml and you'll see how KFP Lightweight components install dependencies:
training_config_op = func_to_container_op(
    get_training_config,
    base_image="tensorflow/tensorflow:1.15.2-py3",
    packages_to_install=config_op_packages,
    output_component_file='component.yaml',
)
P.S. Some small pieces of info:
@dsl.pipeline(
Not required unless you want to use the dsl-compile command-line program
pipeline_name = ssd_postprocessing_pipeline.__name__ + '.zip'
compiler.Compiler().compile(ssd_postprocessing_pipeline, pipeline_name)
Did you know that you can just call kfp.Client(host=...).create_run_from_pipeline_func(ssd_postprocessing_pipeline, arguments={}) to run the pipeline right away?

VnPy - ImportError: No module named vnctpmd

I just cloned VnPy, and I am trying to run VnTrader on a Ubuntu 16.04 Machine, as mentioned in the VnPy Starter Guide. I followed step by step, but when I run
python vnpy/examples/VnTrader/run.py
I get the following Import Error. What is the problem?
Traceback (most recent call last):
File "run.py", line 28, in <module>
from vnpy.trader.gateway import (ctpGateway, ibGateway)
File "/home/alessandro/anaconda2/lib/python2.7/site-packages/vnpy-1.9.0-py2.7.egg/vnpy/trader/gateway/ctpGateway/__init__.py", line 5, in <module>
from .ctpGateway import CtpGateway
File "/home/alessandro/anaconda2/lib/python2.7/site-packages/vnpy-1.9.0-py2.7.egg/vnpy/trader/gateway/ctpGateway/ctpGateway.py", line 16, in <module>
from vnpy.api.ctp import MdApi, TdApi, defineDict
File "/home/alessandro/anaconda2/lib/python2.7/site-packages/vnpy-1.9.0-py2.7.egg/vnpy/api/ctp/__init__.py", line 4, in <module>
from .vnctpmd import MdApi
ImportError: No module named vnctpmd
ImportError: No module named vnctpmd
The vnctpmd module is the API interface for the CTP broker in the VnPy package. As with every other API interface, you need to build it first, and then import it.
In your case, you probably didn't build the CTP interface when prompted during the installation of VnPy, so now run.py can't import the module.
Do you need 'CTP' interface? No
Solution A: I don't need CTP interface
If you don't need the CTP interface, you can open run.py and comment out the parts relative to CTP (and to any other interfaces you didn't build):
# /examples/VnTrader/run.py script
# this version runs with only the IB interface built
# encoding: UTF-8

# Reload the sys module and set the default string encoding to utf8
try:
    reload  # Python 2
except NameError:  # Python 3
    from importlib import reload

import sys
reload(sys)
try:
    sys.setdefaultencoding('utf8')
except AttributeError:
    pass

# Detect the operating system
import platform
system = platform.system()

# vn.trader modules
from vnpy.event import EventEngine
from vnpy.trader.vtEngine import MainEngine
from vnpy.trader.uiQt import createQApp
from vnpy.trader.uiMainWindow import MainWindow

# Load the low-level gateway interfaces
from vnpy.trader.gateway import ibGateway
# ### here comment out the interfaces you don't need
# from vnpy.trader.gateway import (ctpGateway, ibGateway)
if system == 'Linux':
    # from vnpy.trader.gateway import xtpGateway
    pass
elif system == 'Windows':
    from vnpy.trader.gateway import (femasGateway, xspeedGateway,
                                     secGateway)

# Load the upper-level applications
from vnpy.trader.app import (riskManager, ctaStrategy,
                             spreadTrading, algoTrading)

#----------------------------------------------------------------------
def main():
    """Main program entry point"""
    # Create the Qt application object
    qApp = createQApp()
    # Create the event engine
    ee = EventEngine()
    # Create the main engine
    me = MainEngine(ee)
    # Add trading gateways
    # me.addGateway(ctpGateway)
    me.addGateway(ibGateway)
    if system == 'Windows':
        me.addGateway(femasGateway)
        me.addGateway(xspeedGateway)
        me.addGateway(secGateway)
    if system == 'Linux':
        # me.addGateway(xtpGateway)
        pass
    # Add upper-level apps
    me.addApp(riskManager)
    me.addApp(ctaStrategy)
    me.addApp(spreadTrading)
    me.addApp(algoTrading)
    # Create the main window
    mw = MainWindow(me, ee)
    mw.showMaximized()
    # Start the Qt event loop in the main thread
    sys.exit(qApp.exec_())

if __name__ == '__main__':
    main()
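Before commenting imports out, a small probe (a sketch, not part of VnPy) can tell you which optional compiled API modules actually built:

```python
# Sketch: probe for optional compiled API modules so you know which
# gateway imports to comment out in run.py.
import importlib

def has_module(name):
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# e.g. has_module('vnpy.api.ctp') returning False suggests the CTP build is missing
```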
Solution B: I need CTP interface
In case you need CTP, you can just reinstall vnpy with the command
bash install.sh
and when prompted 'do you need CTP' answer Yes

Google Cloud Dataflow cryptic message when downloading file from gcp to local system

I am writing a Dataflow pipeline that processes videos from a Google Cloud bucket. My pipeline downloads each work item to the local system and then re-uploads results back to the GCP bucket, following a previous question.
The pipeline works with the local DirectRunner; I'm having trouble debugging on the DataflowRunner.
The error reads
File "run_clouddataflow.py", line 41, in process
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/blob.py", line 464, in download_to_file
  self._do_download(transport, file_obj, download_url, headers)
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/blob.py", line 418, in _do_download
  download.consume(transport)
File "/usr/local/lib/python2.7/dist-packages/google/resumable_media/requests/download.py", line 101, in consume
  self._write_to_stream(result)
File "/usr/local/lib/python2.7/dist-packages/google/resumable_media/requests/download.py", line 62, in _write_to_stream
  with response:
AttributeError: __exit__ [while running 'Run DeepMeerkat']
When trying to execute blob.download_to_file(file_obj) within:
storage_client = storage.Client()
bucket = storage_client.get_bucket(parsed.hostname)
blob = storage.Blob(parsed.path[1:], bucket)
# store local path
local_path = "/tmp/" + parsed.path.split("/")[-1]
print('local path: ' + local_path)
with open(local_path, 'wb') as file_obj:
    blob.download_to_file(file_obj)
print("Downloaded " + local_path)
I'm guessing that the workers don't have permission to write locally? Or perhaps there is no /tmp folder in the Dataflow container. Where should I write objects? It's hard to debug without access to the environment. Is it possible to access stdout from workers for debugging purposes (a serial console?)
EDIT #1
I've tried explicitly passing credentials:
try:
    credentials, project = google.auth.default()
except:
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = known_args.authtoken
    credentials, project = google.auth.default()
as well as writing to cwd(), instead of /tmp/
local_path = parsed.path.split("/")[-1]
print('local path: ' + local_path)
with open(local_path, 'wb') as file_obj:
    blob.download_to_file(file_obj)
Still getting the cryptic error on blob downloads from gcp.
Full Pipeline script is below, setup.py is here.
import argparse
import csv
import json
import logging
import os

import apache_beam as beam
from urlparse import urlparse
from google.cloud import storage

## The namespaces inside of clouddataflow workers are not inherited,
## please see https://cloud.google.com/dataflow/faq#how-do-i-handle-nameerrors;
## better to write ugly import statements than to miss a namespace
class PredictDoFn(beam.DoFn):
    def process(self, element):
        import csv
        import os
        from google.cloud import storage
        from DeepMeerkat import DeepMeerkat
        from urlparse import urlparse
        import google.auth

        DM = DeepMeerkat.DeepMeerkat()
        print(os.getcwd())
        print(element)

        # set credentials, inherited from the worker
        credentials, project = google.auth.default()

        # download element locally: parse the gcp path
        parsed = urlparse(element[0])
        storage_client = storage.Client(credentials=credentials)
        bucket = storage_client.get_bucket(parsed.hostname)
        blob = storage.Blob(parsed.path[1:], bucket)

        # store local path
        local_path = parsed.path.split("/")[-1]
        print('local path: ' + local_path)
        with open(local_path, 'wb') as file_obj:
            blob.download_to_file(file_obj)
        print("Downloaded " + local_path)

        # Assign input from DataFlow/manifest
        DM.process_args(video=local_path)
        DM.args.output = "Frames"

        # Run DeepMeerkat
        DM.run()

        # upload results back to GCS
        found_frames = []
        for root, dirs, files in os.walk("Frames/"):
            for name in files:
                if name.upper().endswith(".JPG"):
                    found_frames.append(os.path.join(root, name))
        for frame in found_frames:
            # create the GCS path
            path = "DeepMeerkat/" + parsed.path.split("/")[-1] + "/" + frame.split("/")[-1]
            blob = storage.Blob(path, bucket)
            blob.upload_from_filename(frame)

def run():
    import argparse
    import csv
    import logging
    import os
    import apache_beam as beam
    import google.auth

    parser = argparse.ArgumentParser()
    parser.add_argument('--input', dest='input',
                        default="gs://api-project-773889352370-testing/DataFlow/manifest.csv",
                        help='Input file to process.')
    parser.add_argument('--authtoken',
                        default="/Users/Ben/Dropbox/Google/MeerkatReader-9fbf10d1e30c.json",
                        help='Service-account credentials file to fall back on.')
    known_args, pipeline_args = parser.parse_known_args()

    # set credentials, inherited from the worker
    try:
        credentials, project = google.auth.default()
    except:
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = known_args.authtoken
        credentials, project = google.auth.default()

    p = beam.Pipeline(argv=pipeline_args)
    vids = (p | 'Read input' >> beam.io.ReadFromText(known_args.input)
              | 'Parse input' >> beam.Map(lambda line: csv.reader([line]).next())
              | 'Run DeepMeerkat' >> beam.ParDo(PredictDoFn()))
    logging.getLogger().setLevel(logging.INFO)
    p.run()

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()
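For reference, the bucket/blob split used in the DoFn relies on urlparse handling gs:// URIs; a minimal sketch of that logic (using the Python 3 module name, while the pipeline above uses Python 2's urlparse):

```python
# Sketch: how a gs:// URI splits into bucket name and blob name,
# mirroring parsed.hostname and parsed.path[1:] in the pipeline above.
from urllib.parse import urlparse

def split_gcs_uri(uri):
    parsed = urlparse(uri)
    return parsed.hostname, parsed.path[1:]  # (bucket, blob name)
```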
I spoke to the google-cloud-storage package maintainer; this was a known issue. Updating to specific versions in my setup.py,
REQUIRED_PACKAGES = ["google-cloud-storage==1.3.2","google-auth","requests>=2.18.0"]
fixed the issue.
https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3836

PyInstaller Kivy ClockApp Example Issue

===Edit End===
Whew!
Thanks to the suggestions below & (8) hours of compiling & recompiling I was able to fix my issues doing a combination of things:
Understanding how Pyinstaller hooks are required in order to call out the modules correctly.
Ensuring that all input and output file locations exist on the same drive. At my company, our system environment variables are set to a mapped network drive by default; I had to ensure all user environment variables were mapped to 'C:\' by default so that pyinstaller would pick them up. This included setting my cmd.exe to the correct drive & destination file output.
Ensuring all necessary files (.JPG, .kv, .PNG) were in the /dist folder AFTER I built the correct .spec file
I was able to package the PongGame example on the kivy website as well as the Paint_App example for the Book/Chapter 2 mentioned below.
Thanks again.
===LAST Edit End===
===EDIT Again===
Following the suggestion below, I've attempted to run the program so it shows the error log.
Running from cmd.exe doesn't even print the error log; however, .kivy keeps all the error logs.
[INFO ] Logger: Record log in H:\.kivy\logs\kivy_16-08-30_29.txt
[INFO ] Kivy: v1.9.1
[INFO ] Python: v3.4.4 (v3.4.4:737efcadf5a6, Dec 20 2015, 19:28:18) [MSC v.1600 32 bit (Intel)]
[INFO ] Factory: 179 symbols loaded
[INFO ] Image: Providers: img_tex, img_dds, img_gif, img_sdl2 (img_pil, img_ffpyplayer ignored)
[INFO ] OSC: using <thread> for socket
[INFO ] Window: Provider: sdl2
[INFO ] GL: GLEW initialization succeeded
[INFO ] GL: OpenGL version <b'4.2.0'>
[INFO ] GL: OpenGL vendor <b'NVIDIA Corporation'>
[INFO ] GL: OpenGL renderer <b'Quadro K3000M/PCIe/SSE2'>
[INFO ] GL: OpenGL parsed version: 4, 2
[INFO ] GL: Shading version <b'4.20 NVIDIA via Cg compiler'>
[INFO ] GL: Texture max size <16384>
[INFO ] GL: Texture max units <32>
[INFO ] Window: auto add sdl2 input provider
[INFO ] Window: virtual keyboard not allowed, single mode, not docked
[INFO ] Base: Start application main loop
[INFO ] Base: Leaving application in progress...
[WARNING ] stderr: Traceback (most recent call last):
[WARNING ] stderr: File "kivy\properties.pyx", line 754, in kivy.properties.ObservableDict.__getattr__ (kivy\properties.c:11776)
[WARNING ] stderr: KeyError: 'time'
[WARNING ] stderr:
[WARNING ] stderr: During handling of the above exception, another exception occurred:
[WARNING ] stderr:
[WARNING ] stderr: Traceback (most recent call last):
[WARNING ] stderr: File "mainClock.py", line 40, in <module>
[WARNING ] stderr: File "site-packages\kivy\app.py", line 828, in run
[WARNING ] stderr: File "site-packages\kivy\base.py", line 487, in runTouchApp
[WARNING ] stderr: File "site-packages\kivy\core\window\window_sdl2.py", line 619, in mainloop
[WARNING ] stderr: File "site-packages\kivy\core\window\window_sdl2.py", line 362, in _mainloop
[WARNING ] stderr: File "site-packages\kivy\base.py", line 327, in idle
[WARNING ] stderr: File "site-packages\kivy\clock.py", line 515, in tick
[WARNING ] stderr: File "site-packages\kivy\clock.py", line 647, in _process_events
[WARNING ] stderr: File "site-packages\kivy\clock.py", line 406, in tick
[WARNING ] stderr: File "mainClock.py", line 32, in update
[WARNING ] stderr: File "kivy\properties.pyx", line 757, in kivy.properties.ObservableDict.__getattr__ (kivy\properties.c:11882)
[WARNING ] stderr: AttributeError: 'super' object has no attribute '__getattr__'
===EDIT AGAIN End===
EDIT:
Looking in the build folder, I see a warnmainClock.txt with a ridiculously LARGE number of lines saying I'm missing modules.
Does this mean I'm not linking to my Python library folder correctly?
For example:
missing module named _collections.deque - imported by _collections, collections, threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named _thread.stack_size - imported by _thread, threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named _thread._local - imported by _thread, threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named threading.current_thread - imported by threading, _threading_local, C:\Python34x86\Examples\kivy\mainClock.py
missing module named threading.RLock - imported by threading, _threading_local, bz2, multiprocessing.dummy, C:\Python34x86\Examples\kivy\mainClock.py
missing module named sys.modules - imported by sys, dummy_threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named _dummy_threading - imported by dummy_threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named _dummy_threading.__all__ - imported by _dummy_threading, dummy_threading, C:\Python34x86\Examples\kivy\mainClock.py
missing module named dummy_threading.RLock - imported by dummy_threading, bz2, C:\Python34x86\Examples\kivy\mainClock.py
missing module named dummy_threading.local - imported by dummy_threading, numpy.distutils.misc_util, C:\Python34x86\Examples\kivy\mainClock.py
missing module named time.monotonic - imported by time, subprocess, threading, queue, C:\Python34x86\Examples\kivy\mainClock.py
missing module named time.time - imported by time, subprocess, threading, queue, kivy.uix.behaviors.button, kivy.uix.behaviors.compoundselection, kivy.effects.kinetic, kivy.uix.filechooser, kivy.input.motionevent, kivy.input.postproc.doubletap, kivy.input.postproc.tripletap, multiprocessing.managers, multiprocessing.synchronize, C:\Python34x86\Examples\kivy\mainClock.py
missing module named Queue - imported by kivy.compat, C:\Python34x86\Examples\kivy\mainClock.py
missing module named time.strftime - imported by time, kivy.logger, C:\Python34x86\Examples\kivy\mainClock.py
missing module named kivy.app.App - imported by kivy.app, kivy.uix.scrollview, kivy.uix.screenmanager, kivy.uix.filechooser, kivy.uix.textinput, kivy.uix.settings, kivy.lang, kivy.support, kivy.core.window.window_sdl2, kivy.uix.video, kivy.uix.carousel, kivy.uix.treeview, kivy.uix.colorpicker, kivy.uix.splitter, kivy.uix.slider, kivy.uix.codeinput, C:\Python34x86\Examples\kivy\mainClock.py
EDIT END
I am currently learning Kivy from the book "Kivy Blueprints" by Mark Vasilkov.
I am attempting to package the ClockApp from chapter 1 as an exercise. The code works perfectly from python command line. However, I am running into an issue using pyinstaller to create a windows package.
The source code & .kv file is found here https://github.com/mvasilkov/kb/tree/master/1_Clock
I have followed the example https://kivy.org/docs/guide/packaging-windows.html
I created a windows package from the command prompt using:
C:\Python34x86\Scripts\pyinstaller C:\Python34x86\Examples\kivy\mainClock.py --noconsole
Then I build from the spec file using:
C:\Python34x86\Scripts\pyinstaller H:\mainClock.spec
Both commands create the /dist file in my H drive as specified.
When I access the EXE file after just building from the \mainClock.py, I get the error:
When I run the EXE file after building the .spec file I get the following:
The .spec file looks like this:
# -*- mode: python -*-
block_cipher = None

a = Analysis(['C:\\Python34x86\\Examples\\kivy\\mainClock.py'],
             pathex=['H:\\'],
             binaries=None,
             datas=None,
             hiddenimports=[],
             hookspath=[],
             runtime_hooks=[],
             excludes=[],
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data,
          cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='mainClock',
          debug=True,
          strip=False,
          upx=True,
          console=False)
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='mainClock')
I don't think the issue stems from the source; can someone please point out what I am doing incorrectly?
Maybe the following code can help you further:
# -*- mode: python -*-
import os
from os.path import join
from kivy import kivy_data_dir
from kivy.deps import sdl2, glew
from kivy.tools.packaging import pyinstaller_hooks as hooks
block_cipher = None
kivy_deps_all = hooks.get_deps_all()
kivy_factory_modules = hooks.get_factory_modules()
datas = []
# list of modules to exclude from analysis
excludes = ['Tkinter', '_tkinter', 'twisted', 'pygments']
# list of hiddenimports
hiddenimports = kivy_deps_all['hiddenimports'] + kivy_factory_modules
# binary data
sdl2_bin_tocs = [Tree(p) for p in sdl2.dep_bins]
glew_bin_tocs = [Tree(p) for p in glew.dep_bins]
bin_tocs = sdl2_bin_tocs + glew_bin_tocs
# assets
kivy_assets_toc = Tree(kivy_data_dir, prefix=join('kivy_install', 'data'))
source_assets_toc = []
assets_toc = [kivy_assets_toc, source_assets_toc]
tocs = bin_tocs + assets_toc
a = Analysis(['C:\\Python34x86\\Examples\\kivy\\mainClock.py'],
             pathex=['H:\\'],
             binaries=None,
             datas=datas,
             hiddenimports=hiddenimports,
             hookspath=[],
             runtime_hooks=[],
             excludes=excludes,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          name='mainClock',
          exclude_binaries=True,
          debug=False,
          strip=False,
          upx=True,
          console=False)
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               *tocs,
               strip=False,
               upx=True,
               name='mainClock')
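If the app's own .kv and image files also need to end up in dist/, one option (a sketch with an assumed source directory, not from the original answer) is to replace the empty source_assets_toc = [] with a Tree(), which the existing assets_toc line then picks up:

```python
# .spec fragment (sketch): bundle the app's own .kv/.png assets.
# The source directory is an assumption; Tree() is a global that
# PyInstaller provides inside .spec files.
source_assets_toc = Tree('C:\\Python34x86\\Examples\\kivy',
                         excludes=['*.py', '*.pyc'])
```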

How to compile a Delphi 7 project group file (.bpg) using the command line?

I've grouped a lot of projects in a project group. All the info is in the project.bpg. Now I'd like to automatically build them all.
How do I build all the projects using the command line?
I'm still using Delphi 7.
I never tried it myself, but there is a German article describing that you can use make -f ProjectGroup.bpg, because *.bpg files essentially are makefiles.
You can also run Delphi from the command line or a batch file, passing the .bpg file name as a parameter.
Edit: Example (for D2007, but can be adjusted for D7):
=== rebuild.cmd (excerpt) ===
@echo off
set DelphiPath=C:\Program Files\CodeGear\RAD Studio\5.0\bin
set DelphiExe=bds.exe
set LibPath=V:\Library
set LibBpg=Library.groupproj
set LibErr=Library.err
set RegSubkey=BDSClean
:buildlib
echo Rebuilding %LibBpg%...
if exist "%LibPath%\%LibErr%" del /q "%LibPath%\%LibErr%"
"%DelphiPath%\%DelphiExe%" -pDelphi -r%RegSubkey% -b "%LibPath%\%LibBpg%"
if %errorlevel% == 0 goto buildlibok
As I said in a comment to Ulrich Gerhardt's answer, make project_group.bpg is useless if your projects are in subdirectories: make won't use relative paths and the projects won't compile correctly.
I've made a Python script to compile all the DPRs in every subdirectory. This is what I really wanted to do, but I'll leave the above answer as marked; although it didn't work for me, it really answered my question.
Here is my script, compile_all.py. I believe it may help somebody:
# -*- coding: utf-8 -*-
import os.path
import subprocess
import sys

# put this file in your root dir
BASE_PATH = os.path.dirname(os.path.realpath(__file__))
os.chdir(BASE_PATH)
# your delphi compiler path (note the ';' PATH separator)
os.environ['PATH'] += ";C:\\Program Files\\Borland\\Delphi7\\Bin"
DELPHI = "DCC32.exe"
DELPHI_PARAMS = ['-B', '-Q', '-$D+', '-$L+']

for root, dirs, files in os.walk(BASE_PATH):
    projects = [project for project in files if project.lower().endswith('.dpr')]
    if 'FastMM' in root:  # put here projects you don't want to compile
        continue
    os.chdir(os.path.join(BASE_PATH, root))
    for project in projects:
        print
        print '*** Compiling:', os.path.join(root, project)
        result = subprocess.call([DELPHI] + DELPHI_PARAMS + [project])
        if result != 0:
            print 'Failed for', project, result
            sys.exit(result)
Another advantage of this approach is that you don't need to add new projects to your .bpg file: if a project is in a subdirectory, it will compile.