Is there a way to shift left without losing bits in Delphi?

Here's the deal: I'm developing a security system and I'm doing some bit scrambling using bitwise operations. Using 4 bits just to illustrate, suppose I have 1001 and I wish to shift left. This would leave me with 0010, since the left-most bit would be lost. What I wanted to do was to shift left and right without losing any bits.

You might choose to use rotate rather than shift. That preserves all the bits. If you wish to use an intermediate value that is the result of a shift, perform both a rotate and a shift. Keep track of the value returned by the rotate, but use the value returned by the shift. This question provides various implementations of rotate operations: RolDWord Implementation in Delphi (both 32 and 64 bit)?
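If your Delphi version does not provide RolDWord, a plain shl/shr combination gives the same effect. Here is a minimal sketch of a 32-bit rotate-left (the name RolCardinal is mine); bits that fall off the left re-enter on the right, so nothing is lost:
function RolCardinal(Value: Cardinal; Shift: Byte): Cardinal;
begin
  Shift := Shift and 31; // keep the shift count in 0..31
  if Shift = 0 then
    Result := Value
  else
    Result := (Value shl Shift) or (Value shr (32 - Shift));
end;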
Another option is to never modify the original value. Instead just keep track of the cumulative shift, and when a value is required, return it.
type
TLosslessShifter = record
private
FData: Cardinal;
FShift: Integer;
function GetValue: Cardinal;
public
class function New(Data: Cardinal): TLosslessShifter; static;
procedure Shift(ShiftIncrement: Integer);
property Value: Cardinal read GetValue;
end;
class function TLosslessShifter.New(Data: Cardinal): TLosslessShifter;
begin
Result.FData := Data;
Result.FShift := 0;
end;
procedure TLosslessShifter.Shift(ShiftIncrement: Integer);
begin
inc(FShift, ShiftIncrement);
end;
function TLosslessShifter.GetValue: Cardinal;
begin
if FShift > 0 then
Result := FData shr FShift
else
Result := FData shl -FShift;
end;
Some example usage and output:
var
Shifter: TLosslessShifter;
....
Shifter := TLosslessShifter.New(8);
Shifter.Shift(-1);
Writeln(Shifter.Value);
Shifter.Shift(5);
Writeln(Shifter.Value);
Shifter.Shift(-4);
Writeln(Shifter.Value);
Output:
16
0
8

Related

How do I determine the number of references to a dynamic array?

Following on from this question (Dynamic arrays and memory management in Delphi), if I create a dynamic array in Delphi, how do I access the reference count?
SetLength(a1, 100);
a2 := a1;
// The reference count for the array pointed to by both
// a1 and a2 should be 2. How do I retrieve this?
Additionally, if the reference count can be accessed, can it also be modified manually? This latter question is mainly theoretical rather than for practical use (unlike the first question above).
You can see how the reference count is managed by inspecting the code in the System unit. Here are the pertinent parts from the XE3 source:
type
PDynArrayRec = ^TDynArrayRec;
TDynArrayRec = packed record
{$IFDEF CPUX64}
_Padding: LongInt; // Make 16 byte align for payload..
{$ENDIF}
RefCnt: LongInt;
Length: NativeInt;
end;
....
procedure _DynArrayAddRef(P: Pointer);
begin
if P <> nil then
AtomicIncrement(PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.RefCnt);
end;
function _DynArrayRelease(P: Pointer): LongInt;
begin
Result := AtomicDecrement(PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.RefCnt);
end;
A dynamic array variable holds a pointer. If the array is empty, then the pointer is nil. Otherwise the pointer contains the address of the first element of the array. Immediately before the first element of the array is stored the metadata for the array. The TDynArrayRec type describes that metadata.
So, if you wish to read the reference count you can use the exact same technique as does the RTL. For instance:
function DynArrayRefCount(P: Pointer): LongInt;
begin
if P <> nil then
Result := PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.RefCnt
else
Result := 0;
end;
If you want to modify the reference count then you can do so by exposing the functions in System:
procedure DynArrayAddRef(P: Pointer);
asm
JMP System.@DynArrayAddRef
end;
function DynArrayRelease(P: Pointer): LongInt;
asm
JMP System.@DynArrayRelease
end;
Note that the name DynArrayRelease, as chosen by the RTL designers, is a little misleading, because it merely reduces the reference count. It does not release memory when the count reaches zero.
I'm not sure why you would want to do this, mind you. Bear in mind that once you start modifying the reference count, you have to take full responsibility for getting it right. For instance, this program leaks:
{$APPTYPE CONSOLE}
var
a, b: array of Integer;
type
PDynArrayRec = ^TDynArrayRec;
TDynArrayRec = packed record
{$IFDEF CPUX64}
_Padding: LongInt; // Make 16 byte align for payload..
{$ENDIF}
RefCnt: LongInt;
Length: NativeInt;
end;
function DynArrayRefCount(P: Pointer): LongInt;
begin
if P <> nil then
Result := PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.RefCnt
else
Result := 0;
end;
procedure DynArrayAddRef(P: Pointer);
asm
JMP System.@DynArrayAddRef
end;
function DynArrayRelease(P: Pointer): LongInt;
asm
JMP System.@DynArrayRelease
end;
begin
ReportMemoryLeaksOnShutdown := True;
SetLength(a, 1);
Writeln(DynArrayRefCount(a));
b := a;
Writeln(DynArrayRefCount(a));
DynArrayAddRef(a);
Writeln(DynArrayRefCount(a));
a := nil;
Writeln(DynArrayRefCount(b));
b := nil;
Writeln(DynArrayRefCount(b));
end.
And if you make a call to DynArrayRelease that takes the reference count to zero then you would also need to dispose of the array, for reasons discussed above. I've never encountered a problem that would require manipulation of the reference count, and strongly suggest that you avoid doing so.
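For completeness, here is a sketch of how the manual reference in the program above could be balanced so that it no longer leaks (reusing the declarations and thunks from that program): every manual AddRef needs a matching Release before the last variable lets go of the array.
begin
  ReportMemoryLeaksOnShutdown := True;
  SetLength(a, 1);
  b := a;               // reference count = 2
  DynArrayAddRef(a);    // reference count = 3 (manual)
  DynArrayRelease(a);   // back to 2: the manual reference is balanced
  a := nil;             // 1
  b := nil;             // 0, the RTL frees the array; no leak is reported
end.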
One final point. The RTL does not offer this functionality through its public interface. Which means that all of the above is private implementation detail. And so is subject to change in a future release. If you do attempt to read or modify the reference count then you must recognise that doing so relies on such implementation detail.
After some googling, I found an excellent article by Rudy Velthuis; I highly recommend reading it. Quoting the dynamic arrays part from http://rvelthuis.de/articles/articles-pointers.html#dynarrays:
At the memory location below the address to which the pointer points, there are two more fields, the number of elements allocated, and the reference count.
If, as in the diagram above, N is the address in the dynamic array variable, then the reference count is at address N-8, and the number of allocated elements (the length indicator) at N-4. The first element is at address N.
How to access these:
SetLength(a1, 100);
a2 := a1;
// Reference Count = 2
refCount := PInteger(NativeUInt(@a1[0]) - SizeOf(NativeInt) - SizeOf(Integer))^;
// Array Length = 100
arrLength := PNativeInt(NativeUInt(@a1[0]) - SizeOf(NativeInt))^;
The trick in computing proper offsets is to account for the differences between 32-bit and 64-bit platforms. The field sizes in bytes are as follows:
           32-bit   64-bit
RefCount      4        4
Length        4        8
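A sketch of how those sizes can be folded into a single helper that works on both platforms, assuming the TDynArrayRec layout quoted from the XE3 source earlier in this thread (the names here are mine):
type
  PDynArrayRec = ^TDynArrayRec;
  TDynArrayRec = packed record
{$IFDEF CPUX64}
    _Padding: LongInt; // brings the payload to 16-byte alignment on x64
{$ENDIF}
    RefCnt: LongInt;
    Length: NativeInt;
  end;

procedure GetDynArrayInfo(P: Pointer; out RefCount: LongInt; out Len: NativeInt);
begin
  if P <> nil then
  begin
    RefCount := PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.RefCnt;
    Len := PDynArrayRec(PByte(P) - SizeOf(TDynArrayRec))^.Length;
  end
  else
  begin
    RefCount := 0;
    Len := 0;
  end;
end;
Calling GetDynArrayInfo(a1, refCount, arrLength) after the assignments above should report a reference count of 2 and a length of 100.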

Do dynamic arrays support a non-zero lower bound (for VarArrayCreate compatibility)?

I'm going maintain and port to Delphi XE2 a bunch of very old Delphi code that is full of VarArrayCreate constructs to fake dynamic arrays having a lower bound that is not zero.
Drawbacks of using Variant types are:
quite a bit slower than native arrays (the code does a lot of complex financial calculations, so speed is important)
not type safe (especially when by accident a wrong var... constant is used, and the Variant system starts to do unwanted conversions or rounding)
Both could become moot if I could use dynamic arrays.
Good thing about variant arrays is that they can have non-zero lower bounds.
What I recollect is that dynamic arrays used to always start at a lower bound of zero.
Is this still true? In other words: Is it possible to have dynamic arrays start at a different bound than zero?
As an illustration a before/after example for a specific case (single dimensional, but the code is full of multi-dimensional arrays, and besides varDouble, the code also uses various other varXXX data types that TVarData allows to use):
function CalculateVector(aSV: TStrings): Variant;
var
I: Integer;
begin
Result := VarArrayCreate([1,aSV.Count-1],varDouble);
for I := 1 to aSV.Count-1 do
Result[I] := CalculateItem(aSV, I);
end;
The CalculateItem function returns Double. Bounds are from 1 to aSV.Count-1.
The current replacement is like this, trading the space of the zeroth element of Result for improved compile-time checking:
type
TVector = array of Double;
function CalculateVector(aSV: TStrings): TVector;
var
I: Integer;
begin
SetLength(Result, aSV.Count); // lower bound is zero, we start at 1 so we ignore the zeroth element
for I := 1 to aSV.Count-1 do
Result[I] := CalculateItem(aSV, I);
end;
Dynamic arrays always have a lower bound of 0. So, low(A) equals 0 for all dynamic arrays. This is even true for empty dynamic arrays, i.e. nil.
From the documentation:
Dynamic arrays are always integer-indexed, always starting from 0.
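A quick illustration of that rule in a console program (sketch); note that for the empty, i.e. nil, array High returns -1:
var
  A: array of Integer;
begin
  Writeln(Low(A), ' ', High(A), ' ', Length(A)); // 0 -1 0
  SetLength(A, 5);
  Writeln(Low(A), ' ', High(A), ' ', Length(A)); // 0 4 5
end.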
Having answered your direct question already, I also offer you the beginnings of a generic class that you can use in your porting.
type
TSpecifiedBoundsArray<T> = class
private
FValues: TArray<T>;
FLow: Integer;
function GetHigh: Integer;
procedure SetHigh(Value: Integer);
function GetLength: Integer;
procedure SetLength(Value: Integer);
function GetItem(Index: Integer): T;
procedure SetItem(Index: Integer; const Value: T);
public
property Low: Integer read FLow write FLow;
property High: Integer read GetHigh write SetHigh;
property Length: Integer read GetLength write SetLength;
property Items[Index: Integer]: T read GetItem write SetItem; default;
end;
{ TSpecifiedBoundsArray<T> }
function TSpecifiedBoundsArray<T>.GetHigh: Integer;
begin
Result := FLow+System.High(FValues);
end;
procedure TSpecifiedBoundsArray<T>.SetHigh(Value: Integer);
begin
SetLength(FValues, 1+Value-FLow);
end;
function TSpecifiedBoundsArray<T>.GetLength: Integer;
begin
Result := System.Length(FValues);
end;
procedure TSpecifiedBoundsArray<T>.SetLength(Value: Integer);
begin
System.SetLength(FValues, Value);
end;
function TSpecifiedBoundsArray<T>.GetItem(Index: Integer): T;
begin
Result := FValues[Index-FLow];
end;
procedure TSpecifiedBoundsArray<T>.SetItem(Index: Integer; const Value: T);
begin
FValues[Index-FLow] := Value;
end;
I think it's pretty obvious how this works. I contemplated using a record but I consider that to be unworkable. That's down to the mix between value type semantics for FLow and reference type semantics for FValues. So, I think a class is best here.
It also behaves rather weirdly when you modify Low.
No doubt you'd want to extend this. You'd add a SetBounds, a copy to, a copy from and so on. But I think you may find it useful. It certainly shows how you can make an object that looks very much like an array with non-zero lower bound.
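As a sketch of how it could slot into the code from the question (CalculateItem and aSV are the question's own; note that, being a class, the result has to be freed by the caller):
function CalculateVector(aSV: TStrings): TSpecifiedBoundsArray<Double>;
var
  I: Integer;
begin
  Result := TSpecifiedBoundsArray<Double>.Create;
  Result.Low := 1;
  Result.Length := aSV.Count - 1; // elements indexed 1 .. aSV.Count-1
  for I := Result.Low to Result.High do
    Result[I] := CalculateItem(aSV, I);
end;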

Why compiler skips assigning variable

I have the following procedure :
procedure GetDegree(const num : DWORD ; var degree : DWORD ; min ,sec : Extended);
begin
degree := num div (500*60*60);
min := num div (500*60) - degree *60;
sec := num/500 - min *60 - degree *60*60;
end;
After the degree variable gets assigned, the debugger skips to the end of the procedure. Why is that?
It's an optimisation. The variables min and sec are passed by value. That means that modifications to them are not seen by the caller and are private to this procedure. Hence the compiler can work out that assigning to them is pointless. The values assigned to the variables can never be read. So the compiler elects to save time and skip the assignments.
I expect that you meant to declare the procedure like this:
procedure GetDegree(const num: DWORD; var degree: DWORD; var min, sec: Extended);
As I said in your previous question, there's not really much point in using Extended. You would be better off with one of the standard floating point types, Single or Double. Or even using the generic Real which maps to Double.
Also, you have declared min to be of floating point type, but the calculation computes an integer. My answer to your previous question is quite precise in this regard.
I would recommend that you create a record to hold these values. Passing three separate variables around makes your function interfaces very messy and breaks encapsulation. These three values only have meaning when considered as a whole.
type
TGlobalCoordinate = record
Degrees: Integer;
Minutes: Integer;
Seconds: Real;
end;
function LongLatToGlobalCoordinate(const LongLat: DWORD): TGlobalCoordinate;
begin
Result.Degrees := LongLat div (500*60*60);
Result.Minutes := LongLat div (500*60) - Result.Degrees*60;
Result.Seconds := LongLat/500 - Result.Minutes*60 - Result.Degrees*60*60;
end;
function GlobalCoordinateToLongLat(const Coord: TGlobalCoordinate): DWORD;
begin
Result := Round(500*(Coord.Seconds + Coord.Minutes*60 + Coord.Degrees*60*60));
end;
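A small usage sketch (the raw value 123456789 is just an arbitrary example):
var
  C: TGlobalCoordinate;
begin
  C := LongLatToGlobalCoordinate(123456789);
  Writeln(C.Degrees, ' deg ', C.Minutes, ' min ', C.Seconds:0:3, ' sec'); // 68 deg 35 min 13.578 sec
  Writeln(GlobalCoordinateToLongLat(C)); // round-trips back to 123456789
end.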

Is the use of `const` dogmatic or rational?

In Delphi you can speed up your code by passing parameters as const, e.g.
function A(const AStr: string): integer;
//or
function B(AStr: string): integer;
Suppose both functions have the same code inside; the speed difference between them is negligible, and I doubt it can even be measured with a cycle counter like:
function RDTSC: comp;
var
TimeStamp: record case byte of
1: (Whole: comp);
2: (Lo, Hi: Longint);
end;
begin
asm
db $0F; db $31;
mov [TimeStamp.Lo], eax
mov [TimeStamp.Hi], edx
end;
Result := TimeStamp.Whole;
end;
The reason for this is that all the const does in function A is prevent the reference count of AStr from being incremented.
But the increment only takes one cycle of one core of my multicore CPU, so...
Why should I bother with const?
If there is no other reason for the function to contain an implicit try/finally, and the function itself is not doing much work, the use of const can result in a significant speedup (I once got one function that was using >10% of total runtime in a profiling run down to <2% just by adding a const in the right place).
Also, the reference counting takes much, much more than one cycle because it has to be performed with the lock prefix for thread-safety reasons, so we are talking more like 50-100 cycles. More if something in the same cache line has been modified by another core in between.
As for not being able to measure it:
program Project;
{$APPTYPE CONSOLE}
uses
Windows,
SysUtils,
Math;
function GetThreadTime: Int64;
var
CreationTime, ExitTime, KernelTime, UserTime: TFileTime;
begin
GetThreadTimes(GetCurrentThread, CreationTime, ExitTime, KernelTime, UserTime);
Result := PInt64(@UserTime)^;
end;
function ConstLength(const s: string): Integer;
begin
Result := Length(s);
end;
function NoConstLength(s: string): Integer;
begin
Result := Length(s);
end;
var
s : string;
i : Integer;
j : Integer;
ConstTime, NoConstTime: Int64;
begin
try
// make sure we get a heap-allocated string
s := 'abc';
s := s + '123';
//make sure we minimize thread context switches during the timing
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_TIME_CRITICAL);
j := 0;
ConstTime := GetThreadTime;
for i := 0 to 100000000 do
Inc(j, ConstLength(s));
ConstTime := GetThreadTime - ConstTime;
j := 0;
NoConstTime := GetThreadTime;
for i := 0 to 100000000 do
Inc(j, NoConstLength(s));
NoConstTime := GetThreadTime - NoConstTime;
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_NORMAL);
WriteLn('Const: ', ConstTime);
WriteLn('NoConst: ', NoConstTime);
WriteLn('Const is ', (NoConstTime/ConstTime):2:2, ' times faster.');
except
on E: Exception do
Writeln(E.ClassName, ': ', E.Message);
end;
if DebugHook <> 0 then
ReadLn;
end.
Produces this output on my system:
Const: 6084039
NoConst: 36192232
Const is 5.95 times faster.
EDIT: it gets a bit more interesting if we add some thread contention:
program Project;
{$APPTYPE CONSOLE}
uses
Windows,
SysUtils,
Classes,
Math;
function GetThreadTime: Int64;
var
CreationTime, ExitTime, KernelTime, UserTime: TFileTime;
begin
GetThreadTimes(GetCurrentThread, CreationTime, ExitTime, KernelTime, UserTime);
Result := PInt64(@UserTime)^;
end;
function ConstLength(const s: string): Integer;
begin
Result := Length(s);
end;
function NoConstLength(s: string): Integer;
begin
Result := Length(s);
end;
function LockedAdd(var Target: Integer; Value: Integer): Integer; register;
asm
mov ecx, eax
mov eax, edx
lock xadd [ecx], eax
add eax, edx
end;
var
x : Integer;
s : string;
ConstTime, NoConstTime: Integer;
StartEvent: THandle;
ActiveCount: Integer;
begin
try
// make sure we get a heap-allocated string
s := 'abc';
s := s + '123';
ConstTime := 0;
NoConstTime := 0;
StartEvent := CreateEvent(nil, True, False, '');
ActiveCount := 0;
for x := 0 to 2 do
TThread.CreateAnonymousThread(procedure
var
i : Integer;
j : Integer;
ThreadConstTime: Int64;
begin
//make sure we minimize thread context switches during the timing
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_HIGHEST);
InterlockedIncrement(ActiveCount);
WaitForSingleObject(StartEvent, INFINITE);
j := 0;
ThreadConstTime := GetThreadTime;
for i := 0 to 100000000 do
Inc(j, ConstLength(s));
ThreadConstTime := GetThreadTime - ThreadConstTime;
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_NORMAL);
LockedAdd(ConstTime, ThreadConstTime);
InterlockedDecrement(ActiveCount);
end).Start;
while ActiveCount < 3 do
Sleep(100);
SetEvent(StartEvent);
while ActiveCount > 0 do
Sleep(100);
WriteLn('Const: ', ConstTime);
ResetEvent(StartEvent);
for x := 0 to 2 do
TThread.CreateAnonymousThread(procedure
var
i : Integer;
j : Integer;
ThreadNoConstTime: Int64;
begin
//make sure we minimize thread context switches during the timing
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_HIGHEST);
InterlockedIncrement(ActiveCount);
WaitForSingleObject(StartEvent, INFINITE);
j := 0;
ThreadNoConstTime := GetThreadTime;
for i := 0 to 100000000 do
Inc(j, NoConstLength(s));
ThreadNoConstTime := GetThreadTime - ThreadNoConstTime;
SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_NORMAL);
LockedAdd(NoConstTime, ThreadNoConstTime);
InterlockedDecrement(ActiveCount);
end).Start;
while ActiveCount < 3 do
Sleep(100);
SetEvent(StartEvent);
while ActiveCount > 0 do
Sleep(100);
WriteLn('NoConst: ', NoConstTime);
WriteLn('Const is ', (NoConstTime/ConstTime):2:2, ' times faster.');
except
on E: Exception do
Writeln(E.ClassName, ': ', E.Message);
end;
if DebugHook <> 0 then
ReadLn;
end.
On a 6 core machine, this gives me:
Const: 19968128
NoConst: 1313528420
Const is 65.78 times faster.
EDIT2: replacing the call to Length with a call to Pos (I picked the worst case, search for something not contained in the string):
function ConstLength(const s: string): Integer;
begin
Result := Pos('x', s);
end;
function NoConstLength(s: string): Integer;
begin
Result := Pos('x', s);
end;
results in:
Const: 51792332
NoConst: 1377644831
Const is 26.60 times faster.
for the threaded case, and:
Const: 15912102
NoConst: 44616286
Const is 2.80 times faster.
for the non-threaded case.
Don't forget that const isn't only there to provide those tiny performance improvements.
Using const explains to anybody reading or maintaining the code that the value shouldn't be updated, and allows the compiler to catch any accidental attempts to do so.
So making your code more readable and maintainable can also make it marginally faster. What good reasons are there for not using const?
Using const prevents an implicit try/finally block which on x86 is rather more expensive than reference counting. That's really a separate issue to the semantic meaning of const. It's a shame that performance and semantics are mixed up in this way.
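Roughly speaking, a by-value string parameter turns the callee into something like the following (a sketch of the compiler-generated housekeeping written out as comments; the exact RTL helpers involved are an implementation detail):
function NoConstLength(s: string): Integer;
begin
  // implicit: increment the reference count of s (a locked operation)
  // implicit: try
  Result := Length(s);
  // implicit: finally release the local reference to s
  // implicit: end
end;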
The type String is a special case because it is managed by Delphi (copy on demand), and is therefore not ideal for answering your question.
If you test your function with other types that are bigger than a pointer (records or arrays, for example), you should see a bigger time difference, because with const only a pointer is passed, whereas without const the record would be copied before being passed to the function.
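A minimal sketch of that difference with a record (TBigRec is a made-up type): with const the compiler is free to pass just a reference; without it, the whole record is copied on every call.
type
  TBigRec = record
    Data: array[0..1023] of Double; // 8 KB payload
  end;

function SumConst(const R: TBigRec): Double; // passed by reference under the hood
var
  I: Integer;
begin
  Result := 0;
  for I := Low(R.Data) to High(R.Data) do
    Result := Result + R.Data[I];
end;

function SumByValue(R: TBigRec): Double; // the 8 KB record is copied on every call
var
  I: Integer;
begin
  Result := 0;
  for I := Low(R.Data) to High(R.Data) do
    Result := Result + R.Data[I];
end;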
Using the keyword const, you can leave the decision of optimization to the compiler.
The documentation says:
Using const allows the compiler to optimize code for structured- and string-type parameters.
So, it is better, thus rational, to use const for string parameters, simply because the manual says so. ;)
Now, this may be well enough an answer for the questioner, but it is even more interesting to look at the general question whether to use const parameters or not.
Again, the documentation says at just one click away from the Delphi Language Guide Index:
Value and constant (const) parameters are passed by value or by reference, depending on the type and size of the parameter:
Note the apparent equality of value and constant parameters in this sentence. From this we can conclude that using const for parameters that are not string- or structured-typed makes no difference in performance or code size. (A short test, derived from Thorsten Engler's test code, indeed shows on average no difference between with and without const for parameters of ordinal and real types.)
So it turns out that whether or not using const only makes a difference to the programmer, not the executable.
As follow-up, and as LukeH already asked: What good reasons are there for not using const?
To follow Delphi's own syntax:
function FindDragTarget(const Pos: TPoint; AllowDisabled: Boolean): TControl;
function UpperCase(const S: string): string;
function UpCase(Ch: Char): Char;
function EncodeDate(Year, Month, Day: Word): TDateTime;
To produce more compact and therefore possibly slightly more readable code. For instance: using constant parameters in property setters really is superfluous, and leaving them out surprisingly often leads to single-line declarations instead of double, if you like to honour a line-length limit.
To comfortably provide variables to virtual methods and event handlers. Note that none of the VCL event handler types use const parameters (for other than string- or record-typed members). It is just a nice service for the users of your code or your components.
Of course, there also may be fine reasons for using const:
As LukeH already answered, if there is really no need at all to change the value of the parameter.
For (personal) protection, like the documentation says:
Using const also provides a safeguard against unintentionally passing a parameter by reference to another routine.
Partial origin of this answer: http://www.nldelphi.com.
Generally, I would avoid any optimizations (in any language) that don't solve real problems that you can measure. Profile your code, and fix the problems that you can actually see. Optimizing for theoretical issues is just a waste of your time.
If you suspect that something is wrong, and this somehow fixes it/speeds it up, then great, but implementing these kinds of micro optimizations by default are rarely worth the time.
One important fact that people have omitted: interlocked instructions are very costly on multicore x86 CPUs (read the Intel manuals). The cost comes when the reference-counter variable is not in the CPU cache: the other cores must be stalled while the locked instruction is carried out.
Cheers

Does Delphi have isqrt?

I'm doing some heavy work on large integer numbers in UInt64 values, and was wondering if Delphi has an integer square root function.
For now I'm using Trunc(Sqrt(x*1.0)), but I guess there must be a more performant way, perhaps with a snippet of inline assembler? (Sqrt(x) with x: UInt64 throws an invalid-type compiler error in D7, hence the *1.0 bit.)
I am very far from an expert on assembly, so this answer is just me fooling around.
However, this seems to work:
function isqrt(const X: Extended): integer;
asm
fld X
fsqrt
fistp @Result
fwait
end;
as long as you set the FPU control word's rounding setting to "truncate" prior to calling isqrt. The easiest way might be to define the helper function
function SetupRoundModeForSqrti: word;
begin
result := Get8087CW;
Set8087CW(result or $600);
end;
and then you can do
procedure TForm1.FormCreate(Sender: TObject);
var
oldCW: word;
begin
oldCW := SetupRoundModeForSqrti; // setup CW
// Compute a few million integer square roots using isqrt here
Set8087CW(oldCW); // restore CW
end;
Test
Does this really improve performance? Well, I tested
procedure TForm1.FormCreate(Sender: TObject);
var
oldCW: word;
p1, p2: Int64;
i: Integer;
s1, s2: string;
const
N = 10000000;
begin
oldCW := SetupRoundModeForSqrti;
QueryPerformanceCounter(p1);
for i := 0 to N do
Tag := isqrt(i);
QueryPerformanceCounter(p2);
s1 := inttostr(p2-p1);
QueryPerformanceCounter(p1);
for i := 0 to N do
Tag := trunc(Sqrt(i));
QueryPerformanceCounter(p2);
s2 := inttostr(p2-p1);
Set8087CW(oldCW);
ShowMessage(s1 + #13#10 + s2);
end;
and got the result
371802
371774.
Hence, it is simply not worth it. The naive approach trunc(sqrt(x)) is far easier to read and maintain, has superior future and backward compatibility, and is less prone to errors.
I believe that the answer is no, it does not have an integer square root function, and that your solution is reasonable.
I'm a bit surprised at the need to multiply by 1.0 to convert to a floating point value. I think that must be a Delphi bug, and more recent versions certainly behave as you would wish.
This is the code I ended up using, based on one of the algorithms listed on Wikipedia:
type
baseint=UInt64;//or cardinal for the 32-bit version
function isqrt(x:baseint):baseint;
var
p,q:baseint;
begin
//get highest power of four
p:=0;
q:=4;
while (q<>0) and (q<=x) do
begin
p:=q;
q:=q shl 2;
end;
//
q:=0;
while p<>0 do
begin
if x>=p+q then
begin
dec(x,p);
dec(x,q);
q:=(q shr 1)+p;
end
else
q:=q shr 1;
p:=p shr 2;
end;
Result:=q;
end;
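A quick sanity check of the routine against the floating-point version (console sketch; the upper limit is arbitrary):
var
  I: Cardinal;
begin
  for I := 0 to 1000000 do
    if isqrt(I) <> Trunc(Sqrt(I * 1.0)) then
      Writeln('mismatch at ', I);
  Writeln('done');
end.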
