The best thing about .NET over Java is the ability to define custom struct types.
In Java, every object is heap allocated - unless you are lucky enough for that to be optimized away. If you absolutely need the performance, you must resort to a weird kind of place-oriented programming, where you pass around mutable references to primitive types that get operated on by static functions. It kind of resembles C, but without the struct keyword!
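For anyone who hasn't seen it, a minimal sketch of that place-oriented style (all names here are mine, purely illustrative):

```java
// N 2D points stored in one flat double array (x0, y0, x1, y1, ...)
// instead of N heap-allocated Point objects.
final class Points {
    // Static function operating on a "place" identified by index,
    // mutating it in place - C without the struct keyword.
    static void translate(double[] pts, int i, double dx, double dy) {
        pts[2 * i]     += dx;
        pts[2 * i + 1] += dy;
    }
}
```

One flat array means one allocation and contiguous memory, which is exactly the cache behavior the one-object-per-point layout loses.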
Both .NET and Java made major mistakes in their v1.0 by ignoring what other GC-based languages were doing:
- Proper value types (Even .NET is only catching up since C# 7.0)
- AOT compilation alongside JIT (NGEN was just for startup, without optimizations; on the Java side, only third parties offered it)
- Seamless FFI with host OS (P/Invoke did better, JNI is finally getting a replacement)
Eiffel, CLU, Mesa/Cedar, the Oberon lineage, Modula-2+, and Modula-3 were inspirations, but those capabilities were apparently considered not relevant enough, go figure.
Regarding seamless FFI, I suspect that part of the JNI friction was intentional, out of fear that programmers would use FFI as a crutch.
Erlang BEAM's NIF[0] interface is the best FFI I've seen in mainstream runtimes. You write an Erlang/Elixir/etc. function, and loading a native library may swap out the function body. This allows for workarounds in Erlang/Elixir/etc. in case of version skew in your native libraries.
In my day job, I work with a domain-specific language with a similar FFI (actually, 3 different FFIs, one similar to nifs), and it's very handy to have a fallback that does things with /proc/, calling external command line binaries, slower versions of FFI functions, etc. (This allows the native code release schedule to be decoupled from the domain-specific language code changes.)
The syntax isn't the same, but a Java-like syntax would be
// getpid is built in, but it could be done something like this:
// look for XNativeGetPID() in mylib.so or mylib.dll depending on OS
@native_replace("XNativeGetPID", "mylib")
public static int getpid() {
    // Throw an exception here instead if you want to force the
    // native implementation. (Runtime.system is hypothetical
    // shorthand for capturing a shell command's output.)
    return Integer.parseInt(Runtime.system("/bin/grep '^Pid:' /proc/self/status | /bin/sed 's/..."));
}
Do you have any idea how easy it is to provide a Java fallback on platforms where a native implementation isn't available? (I find it's often possible to call out to a command line tool as a workaround for things I'm trying to do in native code.)
> ideally software should be “installed” in an ideal environment.
Native implementations with non-native fallbacks can be an important decoupling mechanism.
I used to be a developer for a Linux desktop application with a peak of something like 60 million unique users over a trailing 3 month window. We had Windows, OS X, RPM, and dpkg installers with native libraries, but people also ran our software on FreeBSD, Gentoo, etc.
It's handy to have gradual enhancement/graceful degradation in non-ideal installs where the native APIs you'd like either don't exist or are buggy and shouldn't be used.
In my day job, we sometimes have a business need to iterate new features at a faster tempo than our native code release schedule, so it's nice to have a similar sort of fallback implementation.
I'm not very familiar with Java's native interface, but you could potentially implement the native and non-native approaches as separate classes with a common interface, and choose to inject the correct dependency into the calling code at runtime by detecting OS features?
It clearly can be done, perhaps by using reflection and catching a class loading exception, but it's pretty common boilerplate that should optionally be handled by the language runtime's FFI.
Let's say you want to have a native-looking prompt window via wxWidgets, and fall back to Swing if wx isn't installed. Or, you want to use some particular optimized native BLAS library if it's installed, or fall back to doing the linear algebra in Java if it's not available.
In these cases, and many others, "detecting OS features" involves checking LD_LIBRARY_PATH (or PATH on Win32) for libraries, checking they're for the correct architecture, etc. The OS already provides dlopen() or its equivalent for this purpose. Properly detecting the OS and emulating the OS's algorithm for finding native libraries is just wasteful when the OS already exposes the functionality.
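To make the boilerplate concrete, here's roughly what that runtime fallback selection looks like in Java (the interface and class names are made up; "NativeBlas" stands in for a reflection-loaded, native-backed implementation that may not exist on a given install):

```java
// Pick a native-backed implementation if its class (and underlying
// native library) loads; otherwise fall back to pure Java.
interface Blas {
    double dot(double[] a, double[] b);
}

final class JavaBlas implements Blas {
    public double dot(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }
}

final class BlasLoader {
    static Blas load() {
        try {
            // Reflection keeps "NativeBlas" from being a hard link-time
            // dependency; this throws if the class or its library is absent.
            return (Blas) Class.forName("NativeBlas")
                              .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException | UnsatisfiedLinkError e) {
            return new JavaBlas(); // graceful degradation
        }
    }
}
```

It clearly works, but every project ends up hand-rolling this try/catch dance - which is the boilerplate a runtime-level FFI could absorb.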
As an added bonus, an FFI that replaces a method body allows someone who doesn't have the C/C++/Rust/FORTRAN handy, or can't read it, to see roughly what the native code should be doing.
> Proper value types (Even .NET is only catching up since C# 7.0)
In C# structs are value types, and they've been with .Net since the beginning IIRC. I assume maybe you're referring to System.ValueTuple? It was added in C# 7 and allowed tuples to be value types along with significantly better syntax for working with them.
If this is what you mean by "proper value types" then I think the term "value type" is being stretched. For me, value types mean value semantics, and what you describe are mostly optimizations. I like Eric Lippert's point that "value type" is not the same as "should live on the stack", as a lot of people think.
I would agree that things like
> - using without IDisposable for structs without implicit boxing
and
> - generics for blittable types
would make value types more "proper" in the C# language.
I don't think those things are necessarily mistakes. Maybe JNI was, but most languages took the same approach of exposing interpreter APIs instead of providing FFIs to C.
AOT compilation of a dynamic language like Java is hard to do without losing performance. You can AOT Java now with the same compiler used also for JITC (Graal) and you lose, I think, about 20% of the peak performance unless you do C++ style profile collection and double compilation. It's not clear that lack of AOT compilation held Java back any - although SubstrateVM/native-image are getting popular at the moment, that's taking place in the context of hype around "serverless functions". If you aren't using those (or writing small CLI apps, or doing other unusual things) the benefit is less clear.
With respect to proper value types, trying to do that in 1.0 would have just killed the projects. Biting off too much. Look at the complexity of Valhalla - only a small part of it comes from backwards compatibility. Most of it is trying to get the benefits of generics over value types via specialisation without the horrible consequences suffered by other languages. It's not about rectifying past errors but rather, doing something new.
"Static" in itself doesn't have a strict definition. You are right that, for example, Java is statically (and strongly) typed, but it has many dynamic elements like class (re)loading and reflection.
Go (and D, I think) have reflection, yet they're AOT-compiled languages. The JIT does a tremendous job of yielding strong performance, and in many cases being dynamic in this sense carries a negligible penalty. These use cases can end up with Java being faster than AOT languages.
You are right that Java can easily surpass Go, for example, in performance, especially in garbage-heavy programs - but I didn't read this thread as if that were its point. My only remark was that Java, while statically typed, can be quite dynamic (and, what is actually more meaningful, the JVM itself can often swallow the performance impact of even really dynamic code constructs).
There's this incredible 'aha' moment captured about 45 minutes in between Lars Bak (V8) and Erik Meijer (C#) with regards to JIT/AOT. The whole conversation is really interesting and you can see how brilliant people work: they are talking about really complicated things in simple terms, and their minds are wide open to what the other person is saying.
True. Project Valhalla (which would add this to the JVM) has been years in the making; not sure why it's taking so long. But at long last it seems to be coming to fruition: https://wiki.openjdk.java.net/display/valhalla/Main
When was this true? I remember the "DON'T UPGRADE JVM ON THIS SERVER" notices since forever, since the Java 5-6 days. Hell, I've seen a case where a minor update would break a production app.
And I don't even do Java development.
Not to mention android and gradle being broken and randomly failing to work between different JVM versions - depending on the gradle flavour of the day.
I don't really see JVM as a pillar of stability and backwards compatibility.
That's just false. There have been some minor changes over the last 25 years, but the JVM will run a jar file compiled with a really old javac just fine, and I think you would be hard-pressed to find any platform with even a remotely similar level of stability and backward compatibility.
The "don't upgrade" thing on the server is about a generally wanted stable environment - or do you upgrade libc in prod without testing it first? Also, it happens mostly in banks and other not-too-technically-up-to-date places that often depend on bugs themselves. Trailing the latest OpenJDK is the best thing one can do.
What are you talking about? I had a problem with Gradle failing to work with OpenJDK a while ago [1] and the solution was to downgrade.
We literally had software in production that wouldn't work when you upgraded the JVM; their official documentation said it wouldn't work if you updated, and you were unsupported (and IIRC downgrading was also a PITA). This was in the JVM 5 -> JVM 6 days, which AFAIK was a big change (TBH I don't know - back then I didn't do app development, I did system integration).
Second, I didn’t say it is in every single case backward compatible, since that is impossible in a continuously evolving platform. But there are really few “user”-facing changes in the JVM. And so I still state that calling the JVM non-stable (for which you provided no evidence), or non-backward compatible is simply false.
Please tell me a platform which is stable/backward compatible according to you to at least this degree.
(Also, we are at Java 16! Java 5 and 6 are so old that they aren't particularly telling about anything.)
I see limited value in that when JVM ships with frameworks which break compatibility regularly - I doubt Python packages which had no dependencies on core libraries had any problems using 2to3 to migrate.
What's the difference? I still can't run my existing jar with a new JVM package, so the new JVM package isn't compatible with old binaries. And I have to hoard old installers if I want specific programs to work (thankfully, I only have one or maybe two Java programs I'd like to run).
Another benefit of structs: since enums can't be inherited, you can achieve something similar with a struct and operator-overload magic, as I did with OpResult [0], to allow applications consuming ASPSecurityKit to extend the possible OpResult values with their app-specific failure codes, without the ugly explicit casting that an enum would require.
As a Java-lander programming C# for fun: while I like structs as a concept, they are horrible in practice.
What do I mean by that? Here's my idea: make a Fluent parser. I have a few shallow structs that can be used interchangeably. E.g. you want to have the following structure:
Entry:
- Message
- Term
- Comment
How do I add polymorphism to my entries?
Option 1) Go with interfaces. Yay, I've incurred the boxing penalty. I might as well just write a class.
Option 2) Go with [FieldOffset]. Only do this if you are a masochist and you REEEEEEEEEEEEAAAAAAAALLY need that performance.
In Rust, for instance, I could replace that stupid struct with an enum, which is impossible in C#. Hopefully in the future there will be discriminated unions or some equivalent of Rust enums.
Interfaces don’t always have boxing penalty. Write generic code, use the interface as generic type constraint, and the interface will work without virtual calls, boxing, or any other runtime overhead.
There's a few challenges with doing something like this.
- arrays (which most collections use under the covers) will need to allocate `elements*size_of_struct`. Unless the structs are the same size, you'd have to do some level of thunking/offsetting.
- Even if they are the same size, when you're iterating the list, how would the runtime know whether it's `A`, `B`, or `C`? Structs don't have header information that can be used to infer the type, it's only whatever fields you've defined (with padding potentially). Part of 'boxing' is essentially thunking the struct by wrapping a methodtable for the struct into the object [0].
If you need something like polymorphism with structs, your best bet is to make the struct itself quasi-polymorphic (i.e. with enums) and/or start abusing generics in some fashion I haven't yet figured out.
> Structs don't have header information that can be used to infer the type, it's only whatever fields you've defined (with padding potentially).
Sure, so let's make union or "struct enum" and do the same thing Rust does. Add a discriminator byte to the structure and use it to figure out type.
> If you need something like polymorphism with structs, your best bet is to make the struct itself quasi-polymorphic (i.e. with enums) and/or start abusing generics in some fashion I haven't yet figured out.
Well. I think there is the FieldOffset hack. Basically discriminated unions where programmer plays the role of compiler.
// The Rust-style enum we're emulating:
// enum {
//     V1(byte a, byte b),
//     V2(sbyte a, sbyte b),
// }
enum EnumType : byte { V1, V2 }

[StructLayout(LayoutKind.Explicit)]
struct DiscriminatedUnion
{
    // FieldOffset gives the offset in bytes at which each field starts.
    // The one-byte tag sits at offset 0; the payloads overlap after it.
    // sizeof(EnumType) = 1, sizeof(byte) = sizeof(sbyte) = 1
    [FieldOffset(0)] public EnumType TypeFlag;
    [FieldOffset(1)] public byte V1a;
    [FieldOffset(2)] public byte V1b;
    [FieldOffset(1)] public sbyte V2a;
    [FieldOffset(2)] public sbyte V2b;
}
> Structs are often horrible in practice if you need even a bit of polymorphism
I think that's what he is saying: that it's quite a leap from "structs are horrible in practice" to "structs are horrible if you need polymorphism in conjunction with structs", which is in my experience a pretty rare situation. If you approach structs from Rust or C++ you might expect to be able to form hierarchies of them, but approaching them from Java you are very happy to just be able to create a Point(X, Y) struct.
Even the rust type of enum structs with clever overlapping layouts I find pretty rare use for. The common use case is homogeneous data. If I have polymorphism, I'm likely going to incur some pretty expensive virtcalls etc anyway. At that point I just use classes.
Interfaces don't have a compile-time size, so I don't see how you can avoid boxing if you want a collection of IFoo.
(Optimized) discriminated unions have a size that is the max size of the possible types, plus a flag to indicate the type. This is the best you can hope for without boxing.
In C++ struct and class are the same except for default visibility rules. You could drop either keyword from the language and it wouldn't make any practical difference outside of C compatibility.
In GCed languages the default seems to be that every object is referred to by pointer. Java took the "ugly" step of adding primitive types so you could calculate 1 + 1 without causing dozens of cache misses. Struct types take this hack a step further. It generally makes the definition of the language's virtual machine a lot uglier; for example, most JVM instructions operate on primitive types.
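That primitive/boxed split even leaks into semantics; a small sketch of where the heap objects become visible:

```java
public class BoxingDemo {
    public static void main(String[] args) {
        // Integer.valueOf caches boxes for -128..127, so small autoboxed
        // values share one heap object; larger values typically get
        // distinct heap objects. Primitive ints have no identity at all.
        Integer a = 127, b = 127; // both refer to the cached box
        Integer c = 128, d = 128; // typically two distinct boxes
        System.out.println(a == b);      // true: same cached object
        System.out.println(c.equals(d)); // true: compare boxed values
                                         // with equals(), not ==
    }
}
```

With proper value types, there would be no observable box identity to trip over in the first place.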
As a C++-lander, I find it surprising that object allocation is tied to the object's type. IMO, the boxed/unboxed axis should be orthogonal to the type itself. It should be a property of the object instance, not of the object's type.
If the GC language properly supported interior pointers (and first-class pointers) for the GC, there wouldn't be a need to force boxing in all cases. You would be able to make the decision at the point of instantiation of the object. It would make the GC more complex, but I think these days most GCs support interior pointers for other reasons anyway, right?
Are you talking about go-style explicit pointers? I don't know if it is worth it to add them with a different semantics and dealing with nulls when you can have value types with an attribute. Pointers as an abstraction also leak implementation detail.
I would look at adding LMAX Disruptor to this list. It can run circles around stuff scattered across TPL usages. It doesn't fit every use case, but it's really incredible when it does. I was able to build a toy project that handles millions of user events per second on a single thread using it.
Getting your application aligned with the NUMA model makes way more difference in performance than anything else.
Considering that databases often end up being the bottleneck, it may not always be wise.
So if it's a small amount of data but a large computation, it might be tempting to do it in the application. But if it's a large amount of data, I/O may be a problem in doing it in the app.
That's why I hedged my bets with "where appropriate" there :).
You're of course right, and it could be that DB being a compute bottleneck is the most common case - but I've seen or been involved in projects where the bottleneck was DB I/O and processing unnecessary queries, because the application was doing most of the calculations.
In this article, with some optimizations, the author was able to achieve on the order of ~70m operations per second using the .NET port. I think .NET might actually be faster than Java in some places here.
Can you give me a .NET assembly (DLL), that I then load, run in a completely safe sandbox with practically zero risk, and then unload?
Is .NET sandboxing effective yet? It's my belief that if I want to do something like this, I'm way better off using V8 or some other sandbox to host the code. Or use Deno or something.
Or is it still not possible to run someone else's .NET code really safely?
Asking for a friend who wants to make an online game in the style of C-ROBOTS, but is afraid of running other people's code.
Personally, I would still consider it. For a game, the risks are not that large, it’s not a bank nor a nuclear silo.
You can use the Mono.Cecil library to reflect over assemblies without loading them as code or executing any part of them. You're going to need to whitelist allowed imports (allow stuff like String/List/Dictionary but not much else), blacklist pointer types everywhere (function arguments, return values, and local variables; CIL+metadata do have the types of things), blacklist DllImportAttribute, and probably a few other things I forgot.
The performance should be awesome that way; however, security-wise it's not 100% failsafe. It's also easy to DoS your server by consuming too much CPU or using all the memory, so you might need workarounds to address these issues. Mono.Cecil can patch code, making new assemblies, not just inspect/reflect.
Probably for these reasons MS discontinued app domains / code access security, and recommends processes, containers, or virtual machines instead. You can run another instance of the .NET runtime inside these things; the startup/warmup time is not that bad - it's not Java.
P.S. If you only allow to edit code in your editor i.e. don’t need support for visual studio + Microsoft’s compiler, you can make your own language where only safe things are expressible, and compile that one into CIL. Simplifies many things, you only need to worry about CPU time and RAM usage. You don’t need to emit DLLs nor mess with CIL directly, can compile scripts into delegates using System.Linq.Expressions or probably some third-party libraries.
Have you ever looked into Terrarium? It was created by the .NET team a long time ago to demonstrate the capabilities of .NET. You program a creature and simulate an ecosystem which consists of other people's "creatures", which if I recall correctly were essentially just DLLs of their code.
Yes! You can do this using a WebAssembly runtime. For example, if you look at your browser's network activity on this page you can see a bunch of standard .NET assemblies being downloaded: https://try.dot.net/
.NET has had AoT compilation since v1. What, exactly, are you talking about?
Sure, rely on JIT for stuff like webapps and the like, but a CLI tool that is designed to run once would be a good candidate for AoT compilation, which, as I stated before, .NET supports.
I don't have much experience with .NET, but I believe the current production AOT option is the ReadyToRun publish option[1], which is actually a combination of AOT and JIT. The repo you linked is their work on a complete, native AOT implementation.
I'm getting downvoted, but either people don't understand the difference between R2R and AOT or they're just pissy today; from your link:
>R2R binaries improve startup performance by reducing the amount of work the just-in-time (JIT) compiler needs to do as your application loads.
and
>However, R2R binaries are larger because they contain both intermediate language (IL) code, which is still needed for some scenarios
and
>Ahead-of-time generated code is not as highly optimized as code produced by the JIT. To address this issue, tiered compilation will replace commonly used ReadyToRun methods with JIT-generated methods.
> You can AOT compile .NET, that has been true for a while but now it is a first class option
That's not true at all. If you refer to R2R, it's only partial: your code is still JIT'd at runtime; it only helps speed up the cold startup time a little bit.
AOT compilation? I'll believe it when they'll release it, until then, it's all speculation
> AOT compilation? I'll believe it when they'll release it, until then, it's all speculation
Devil's in the details, but there -is- AOT compilation[0]. While it hasn't been released as an official product, it has been used for a few projects including a commercial game [1]. And yes, they're looking into the next steps to make it a 'released' thing.[2]
There are situations where startup overhead matters, and in those cases dotnet is not a good fit at this moment. But there are a lot of areas where it doesn't matter and those are where dotnet and Java are typically used.
Running the same code a lot is certainly not cheating, a lot of code where performance really matters is actually doing the same thing over and over.
.NET 5 added source generators, which are one part that is necessary to get to a point where ahead of time compilation would be feasible for typical .NET projects. So this is an aspect that Microsoft seems to be looking into now.
Adding on to this, the work done to make .NET linker friendly will also help make it more amenable to AOT.
For people who have not been following .NET closely, there is a thing called Blazor that runs in the browser using web assembly. To get the download size down, a linker is used to remove classes and methods that are not being used. You can also use this when deploying other types of apps.
Your comment is half right. While your app is JIT compiled, the runtime and its libraries are AOT compiled using a technology called cross-gen. As others have pointed out, the .NET team has a NativeAOT experiment (previously called CoreRT) that lets you AOT compile your entire app.
Also, Java takes longer to reach steady state as it starts execution in an interpreter, then JIT compiles hot methods. .NET skips the interpreter and goes straight to JIT compilation.
So if you're targeting .Net 1.0 this "PSA" is impactful, otherwise, it isn't really accurate. It could hypothetically apply if you created a new .Net process each execution (thus invalidating the GAC) but even then you could just add ngen to your build/deployment process.
> as far as I've heard from HFT people Java's relatively popular in HFT
Java's popularity depends on how HF the HFT is and the threshold for pain developers are willing to endure to write Java in a way that avoids GC and reduces memory use (and hence cache misses).
Deploying to Linux boxen in colocation facilities is the norm and there is little/no culture of using dotNet on Linux. We all know it exists and it's open source now, but most Linux developers don't reach for dotNet.
>and there is little/no culture of using dotNet on Linux. We all know it exists and it's open source now, but most Linux developers don't reach for dotNet.
"Most Linux developers don't use dotNet" is a pretty weird sentence, but on the other hand, dotNet (Core) developers definitely do use Linux - I have had dotNet things on Linux in prod for 2 years now.
Just because some people do a thing doesn’t mean most people do that thing. Most Linux devs don’t use .NET. Maybe they have technical reasons or maybe they just don’t want to use a Microsoft product
Deployed a small internal .NET Core app on Linux recently. So far it's worked great! I've done a few major version package updates as well without issue.
> but most Linux developers don't reach for dotNet
On that note, how bad is .NET experience on Linux vs. on Windows? Besides the IDE (MonoDevelop is no Visual Studio).
I was about to dig into this topic soon - I'm particularly interested in how useful PowerShell is on Linux these days. On Windows, I appreciate the deep .NET interop it offers; I think it's closest experience to Lisp Machines that you can find in mainstream computing.
EDIT: Big thanks to the commenters who mentioned Rider, I didn't realize JetBrains had an IDE for .NET!
That said, to clarify my comment above, I'm more interested in the experience with the platform itself - how does .NET Core work on Linux in terms of performance, fragility, access to first-party and third-party libraries? Can I expect non-UI .NET code to port well to Linux? And, in PowerShell, do I get to do things like Add-Type "<insert bunch of C# code here>", or $foo = [Some.dotNET.Type]::new()?
>Can I expect non-UI .NET code to port well to Linux
If it wasn't written with tons of pinvoke to win32 or using Windows-only services (like WCF), then sure.
I've been building .NET Core applications for Linux, and exclusively on Linux, for a few years now, mostly using Rider as the IDE. No complaints. I haven't seen any Windows-exclusive libraries worthy of any attention in all that time. Even libraries with tons of native code (like ImageMagick) have Linux support.
The official MS tooling is behind what they offer on Windows, but it's getting there. You can collect GC and memory dumps.
I mainly use macOS with VS Code (it's no match for the real VS) and deploy dotnet apps in Linux containers (Alpine). It's a small app with more than a hundred users and it's run fine for a year. I never had issues with libraries only available for Windows.
It's first-class these days. You have native Linux PowerShell, SQL Server, .NET 5 (which is the 4th Linux/Core version), VS Code/Rider, or Visual Studio with the Windows Subsystem for Linux.
Also, .NET 5 is now here, and 6 is already in preview, which are much, much more backwards compatible than .NET Core 3. And unlike Python, we had .NET Standard to start migrating code between Framework and Core. A lot of extant code on NuGet has already migrated and there is really very little you can't do these days.
Unless one is using third-party controls, a CMS like Sitecore, SharePoint, or Dynamics, writing custom .NET code to be called by SQL Server stored procedures, having a working WCF infrastructure, using an RDBMS other than SQL Server with EF 6, in-house stuff that isn't fully covered by the Windows Compatibility Pack, and a couple of other little things.
Also, ASP.NET Web Forms is not supported; web services/SOAP are 'supported' but the tooling is broken, and I don't think you could extend a system with it in modern VS. The WinForms editor works but is buggier with each new version, ASP.NET MVC projects are similar but changed, and I'm sure the list of issues continues.
Also from my experience coaching and consulting there's a significant learning curve for .NET framework developers to work in the new core/.NET 5 systems.
So keep using .NET Framework. v4.8 is only 3 years old and there is no end date set on the support yet. Considering the last version of the v3 era will be supported until 2029, I'm sure the last version of the v4 era will get quite a long period of support as well.
I bit the bullet and transitioned from WebForms to ASP.NET Core. I think it's easier than WebForms ever was. You can replicate 90% of what WebForms did with Razor Pages, but you also get much easier support for building RESTful APIs. And with DI built-in, I don't have to worry about junior devs forgetting to close/dispose database connections ever again.
In my world those decisions come with a project budget; if no department puts up the money to hire the consultants that would do the job, there is no transition.
What do you want me to say? Because your organization is stuck in the past, that means .NET can't move forward?
If you've followed the .NET blogs at all, they have very good reasons they haven't prioritized support for these legacy systems. There isn't some cabal of platform designers in Redmond scheming on how to screw over developers at ossified orgs.
Not sure why I'm bothering. Every time someone mentions .NET on here, someone comes out of the woodwork to complain about their pet features not being supported. Actually, come to think of it, it's often you in particular. We're going from .NET Framework being fundamentally tied to Windows to a new .NET that runs on everything. ASP.NET was built on HTTP.sys, a Windows driver for a web server. I don't know, is it terribly surprising that trying to migrate .NET off of Windows-only dependencies is going to break things downstream?
But if you still need this stuff that only runs on Windows, well, you've got at least a decade of support to look forward to.
It is not me that is asking, rather the Fortune 500 companies where software development is a cost center, that hire us to keep their production systems and factory lines rolling.
MS SQL Server started out as a port (to OS/2!) of Sybase SQL Server, which ran on Xenix (among other unixes). History doesn't repeat, but it does rhyme.
Actually it's not that easy, depending on which parts of the framework you used.
It's certainly reasonable, but will take re-work for many applications beyond the project structure migration. I think in .NET 6/7/8 the .NET framework migration situation will improve to be mostly pure migration.
For example, I migrated about 75% of a moderately large WinForms system directly, but 25% needed rewrites due to missing Core implementations of framework libraries - e.g. the MS chart component not being implemented in Core yet, SOAP services changing to gRPC, and various others. And this system was originally built to minimise third-party dependencies and use the core framework or Microsoft-provided libraries wherever possible.
Sure, but comparing to python is disingenuous. Python3 was a complete break from Python2. The first version of .net core (5ish? years ago) broke the standard library down to only libs that work cross platform.
Is there anything in .NET 5 you're missing from .net 4.x?
And on the other hand, Bing is running on the first preview version of .NET 6.
MS is a big company, they aren't going to immediately migrate everything just like Google didn't migrate every single project to Go when they were heading it.
Options. You are talking about the very tiny subset of Java users who actively worry about the relative differences in performance tweakability of different JVM implementations. Chances are that they routinely run multiple VMs and VM configuration sets in parallel on real world inputs to find the fastest configuration.
"Look at this nice VM, trust us, it's very fast, rewrite all your code and see for yourselves!" just won't be very convincing to those. With that crowd, .NET isn't competing against the JVM, it's competing against an entire market of competing JVMs.
I don't know of any, but I have been building prototypes in .NET Core 3+ that give me confidence in using this platform as a basis for an ultra-low-latency business system.
The biggest issue would be avoiding heap alloc, but that is actually not too difficult if you manage your data well enough. Most transactions in this business can be defined in terms of structs that are <1kb in size.
One other approach I have started to look at is sacrificing an entire high-priority thread to handling well-batched transactions (or extremely important timers) so that I never have to yield to the OS. In HFT, you always get an entire physical server to yourself, so eating 1 out of 32 threads is not a huge deal. In testing of this idea, I have found that I am able to reliably execute timers with accuracy of around 1/10th of a microsecond.
Java probably has more optimizations around GC suppression, but I feel like avoiding allocation in the first place is the most important bit. I believe there are already some ways to trick .NET into not running GC.
For now, I have been using this to test a toy/custom 2d client-server framework. All client events are piped to the server and processed in 1 gigantic ring buffer. This allows for ridiculous amounts of throughput due to batching effect. I am also using recurring, high-precision timers scheduled per client for purposes of triggering redraws and other important events as appropriate. This allows for complete decoupling of the event handling and client rendering pipelines. The breakdown is something like:
1 thread for ultra-low-latency timer execution
1 thread for processing the actual client event ring buffer, producing a consistent snapshot after each microbatch execution.
14+ threads for servicing HTTP requests (i.e. enqueuing client events), timers and redrawing client views using the near-real-time snapshots of business state.
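For what it's worth, the sacrificial spin-loop timer is simple to sketch; here's the shape of it (in Java for illustration, though the system above is .NET - everything below is a sketch, not the actual code):

```java
// Busy-spin timer: burn a core to fire a callback at a precise
// deadline, never yielding to the OS scheduler (no wakeup latency).
final class SpinTimer {
    static void runOnce(long delayNanos, Runnable callback) {
        final long deadline = System.nanoTime() + delayNanos;
        while (System.nanoTime() < deadline) {
            Thread.onSpinWait(); // CPU pause hint (Java 9+)
        }
        callback.run();
    }
}
```

In a real system this loop would live on a pinned, high-priority thread pulling deadlines from a queue; accuracy then comes down to timer resolution and cache behavior rather than the scheduler's quantum.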
My thinking is that if I can build something in this domain that is satisfactory, I could consider bringing it to an HFT firm as well.
I still don’t understand what you mean, please correct me where I’m wrong.
Branch prediction is a CPU-level thing, and mispredictions have a cost. You can do things like partitioning data beforehand so that a given if condition takes the same branch each time within a partition, but I don't see how it applies to templates at all.
You can't avoid branches that depend on runtime information, and those branches that depend on compile-time-known information can be elided either automatically by the compiler (if it can prove a condition is constant, for example) or by things like constexpr and templates, as you say. But it's nothing too fancy; the dumb C preprocessor macro can do similar things.
JIT-compiled languages can sometimes elide branches based on runtime data, so there is that.