Yes. By design (1), RISC-V removes much of what makes OoO harder. Most instructions have very regular, clean semantics, communicating only through registers, with a straightforward [up to] two register source operands and at most one destination. The default memory model is the same as Arm's, a fairly weak model. Basically, code that works on Arm will port over trivially (vector and special functions excluded).
(1) Unfortunately, compressed instructions were included in the Unix profile, which causes a lot of pain and makes it mostly impossible to do any partial predecode at I$ fill time. Ironically, Arm64 wised up and removed the Thumb variants. (Preempting the code-density crowd: there are other ways to deal with this.)
EDIT: The pain includes dealing with instructions that cross cacheline or even page boundaries (hello, double fault), but worse, not being able to tell what might be an instruction in the I$. It was a sad day when RISC-V, with essentially no input from higher-end implementation concerns, decided to force it on us.
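To make the boundary problem concrete, here's a small sketch (mine, not from the thread): with the C extension, instruction length is encoded in the low two bits of the first 16-bit parcel, and any 16-bit-aligned address may start an instruction, so a 32-bit instruction beginning in the last two bytes of a page spills into the next one. The function names and 4 KiB page size are illustrative assumptions.

```python
PAGE = 4096  # assumed 4 KiB base page size

def insn_len(parcel: int) -> int:
    """Length in bytes from the first 16-bit parcel: low two bits != 0b11
    means 16-bit compressed; == 0b11 means 32-bit (longer encodings
    are reserved and ignored here)."""
    return 2 if parcel & 0b11 != 0b11 else 4

def straddles_page(addr: int, parcel: int) -> bool:
    """True if the instruction starting at addr crosses a page boundary,
    forcing two translations per fetch -- and two possible faults."""
    return (addr % PAGE) + insn_len(parcel) > PAGE

# A 32-bit instruction starting 2 bytes before the end of a page straddles:
# straddles_page(4094, 0x0003) -> True
# A compressed one at the same address does not:
# straddles_page(4094, 0x0001) -> False
```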
That only applies if you want to run binary distros and applications such as Fedora or Ubuntu. If you're building a supercomputer and don't want to implement the C extension, you can build your own kernel and programs. If you do that, anything from RV32IA and up can be used.
You can also use a QEMU or Valgrind-like thing to JIT the C away.
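The core of such a translator is mechanical: every compressed instruction expands to exactly one base-ISA instruction. Here's a sketch (mine, not from the thread) of one such expansion, C.ADDI into its 32-bit ADDI equivalent, per the standard RVC encoding:

```python
def expand_c_addi(parcel: int) -> int:
    """Expand C.ADDI rd, nzimm (16-bit) -> ADDI rd, rd, imm (32-bit).
    C.ADDI: quadrant 01, funct3 000, rd=rs1 in [11:7],
    imm[5] in bit 12, imm[4:0] in bits [6:2], sign-extended."""
    assert parcel & 0b11 == 0b01 and (parcel >> 13) == 0b000, "not C.ADDI"
    rd = (parcel >> 7) & 0x1F
    imm = ((parcel >> 12) & 1) << 5 | (parcel >> 2) & 0x1F
    if imm & 0x20:                       # sign-extend the 6-bit immediate
        imm -= 64
    imm12 = imm & 0xFFF                  # 12-bit two's-complement field
    # ADDI encoding: imm[11:0] | rs1 | funct3=000 | rd | opcode 0010011
    return (imm12 << 20) | (rd << 15) | (0b000 << 12) | (rd << 7) | 0x13

# c.addi a0, 1 (0x0505) expands to addi a0, a0, 1 (0x00150513)
```

A real JIT would do this for the whole RVC opcode space and rewrite branch offsets, but each expansion is this shape: pure bit shuffling, no semantic change.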
Or you can do what modern x86 does and annotate your cache lines the first time they are actually decoded, instead of at cache fill time. That's just a little slower for cold code but the same for hot code. And the C extension fits roughly 40% more of your code in the cache at any given cache size, keeping more of it hot.
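As a sketch of that lazy-annotation idea (my illustration, assuming a 64-byte line and a per-line bitmap of instruction-start parcels), the first decode walks the line once and records where instructions begin; every later fetch of the hot line just reuses the bitmap:

```python
LINE_BYTES = 64
PARCELS = LINE_BYTES // 2   # 32 possible 16-bit instruction-start slots

def mark_boundaries(parcels: list[int], entry_parcel: int = 0) -> int:
    """Return a bitmap with bit i set iff parcel i starts an instruction,
    walking forward from the first known entry point into the line.
    (A 32-bit instruction in the last parcel straddles into the next
    line -- the cross-line case complained about upthread.)"""
    bitmap, i = 0, entry_parcel
    while i < PARCELS:
        bitmap |= 1 << i
        # low two bits != 0b11 -> 16-bit compressed, else 32-bit
        i += 1 if parcels[i] & 0b11 != 0b11 else 2
    return bitmap

# All-compressed line: every parcel is a start -> 0xFFFFFFFF
# All-32-bit line: every other parcel is a start -> 0x55555555
```

In hardware this would be a few bits of metadata per I$ line plus a valid flag, invalidated on line fill; the point is just that the marking cost is paid once per cold line, not on every fetch.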
There might be some implementation point at which AArch64 wins by having bigger but fixed-length code, but it's not at the low end, and I don't think it's at the very high end either.
If it does, would it be a pain to program for (see Cell)?