Sunday, 30 June 2019

An interesting comment buried in a discussion:

“in many garbage collected languages, it becomes orders of magnitude slower as the size of objects increases”
We can test your hypothesis. The following F# code allocates arrays of different lengths and measures the time taken:
  let rec test n =
    if n < 100000000 then
      let timer = System.Diagnostics.Stopwatch.StartNew()
      let mutable c = 0L
      for _ in 1..100000000 / n do
        c <- c + int64 (Array.init n byte).Length
      c <- c / int64 n
      printfn "%d, %g" n (timer.Elapsed.TotalSeconds / float c)
      test (n+1 + (n >>> 3))
The results show that the time taken to allocate objects up to 100MiB on 64-bit .NET 4.7.2 is linear in object size as expected. No "orders of magnitude" slowdown.
I'm not surprised because I have never seen this "orders of magnitude" slowdown that you refer to. The only GC'd language you referred to is Java so I assume you are familiar with Java. Are you saying that the behaviour is different with Java? Can you port my 9-line F# program to Java and see what graph you get?
"Overall new+delete is always faster than GC for all sizes."
Are you really saying that the same program written with new+delete will always be faster than with new+GC?
If so, there are many counter examples. Consider implementations of Hans Boehm's binary trees memory allocation benchmark written in C++ using the default new+delete and in F# using the tracing GC.
In my results, the tracing GC is 7x faster than new+delete on average. Furthermore, the first thing you would do to optimise the C++ is to replace the individual allocations with a pool allocator, precisely because the default new and delete are so slow.
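The F# side of that benchmark can be sketched along the following lines (a minimal illustrative version of the binary trees benchmark, not the exact code behind the measurements above; the depths, iteration counts and the `run` driver are my own choices):

```fsharp
// Minimal sketch of the binary trees benchmark: build and traverse
// perfect binary trees, leaving all deallocation to the tracing GC.
type Tree =
  | Leaf
  | Node of Tree * Tree

// Build a perfect binary tree of the given depth.
let rec make depth =
  if depth = 0 then Leaf
  else Node(make (depth - 1), make (depth - 1))

// Traverse the tree, counting its nodes (forces every Node to be visited).
let rec check tree =
  match tree with
  | Leaf -> 1
  | Node(l, r) -> 1 + check l + check r

// Allocate many short-lived trees while one long-lived tree stays reachable.
let run maxDepth =
  let longLived = make maxDepth
  for depth in 4 .. 2 .. maxDepth do
    let iterations = 1 <<< (maxDepth - depth + 4)
    let mutable sum = 0
    for _ in 1 .. iterations do
      sum <- sum + check (make depth)
    printfn "%d trees of depth %d, check: %d" iterations depth sum
  printfn "long lived tree of depth %d, check: %d" maxDepth (check longLived)
```

The point is that every `Node` is heap allocated and reclaimed by the GC with no explicit deallocation anywhere, which is exactly the allocation pattern this benchmark stresses.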
Raymond Chen blogged about developing a Chinese/English dictionary in C++. Rico Mariani ported it to C# and noted that the unoptimised C# code was faster than the first few optimised C++ implementations. The C++ was eventually optimised to beat the C# by removing all OOP, all RAII and all new+delete, precisely because they are so inefficient. The final C++ code was effectively just C.
"As the number of objects increases, GC becomes slower and slower making the system unusable, and its really a no competition."
That is theoretically true for tracing GCs but not reference counting GCs. However, the effect is so small that GCs are used with heap sizes up to 8TB (see Java Heap Size - Azul Systems, Inc.). For example, the GC'd language OCaml running on a supercomputer once held the record for the largest symbolic computation ever performed (Archives of the Caml mailing list > Message from Thomas Fischbacher). I’ve used OCaml on supercomputers myself.
"Most GC based systems managing large amount of memory/objects today use off-heap memory for that reason."
Firstly, you've written "memory/objects", so note that your statement is true for large numbers of objects but not for large amounts of memory. Secondly, I've worked on large systems for decades (my background is in HPC) and have never seen anyone move to off-heap memory for that reason. Were they using Java?
"With most compilers RAII injects exactly as much code as necessary for cleanup and nothing more."
Firstly, you don't need to inject any code at the end of scope for cleanup; after all, tracing GCs don't. Secondly, virtual destructors are an obvious counter example: to avoid undefined behaviour when deleting a derived class via a pointer to its base class, the base class is given a virtual destructor, culminating in many expensive dynamic jumps to no-ops.
"Most implementations use some form of DFA not code generation."
From the .NET docs: "If a Regex object is constructed with the RegexOptions.Compiled option, it compiles the regular expression to explicit MSIL code instead of high-level regular expression internal instructions. This allows .NET's just-in-time (JIT) compiler to convert the expression to native machine code for higher performance." (Compilation and Reuse in Regular Expressions)
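In F# the difference is just a flag on the `Regex` constructor (a minimal sketch; the pattern and input are illustrative):

```fsharp
open System.Text.RegularExpressions

// RegexOptions.Compiled compiles the pattern to MSIL, which the JIT then
// turns into native code; omitting it uses the interpreted engine.
let compiled = Regex(@"\b\w+\b", RegexOptions.Compiled)
let interpreted = Regex(@"\b\w+\b")

// Count the matches of a regex in a string.
let count (re: Regex) (s: string) = re.Matches(s).Count
```

Both engines return the same matches, e.g. `count compiled "the quick brown fox"` gives 4; the compiled engine buys higher matching throughput at the cost of a one-off compilation step.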
"Also note that any kind of code generation implementation has a very high initialization cost (compilation + JIT etc)."
This quick test shows the C++ regex running 17x slower than .NET: C++ vs .NET regex performance
"The boost/pcre2 regex implementations are faster than Java, for example."
I'm sure your observation is accurate but the conclusion to draw is that Java is slow.
"Resource leaks (including memory) in GC based systems are extremely common."
Not in my experience but I suspect you're talking specifically about Java.
"In general my experience has been that programmers who have had good experience in C++ are usually much better off even when dealing with GC languages because they are overall more careful."
Experienced people are generally better.
