>Variable length instruction coding for instance, which means a surprising amount of power is dedicated to circuitry which is just to find where the instruction boundaries are for speculative execution.

This does apply to x86 and m68k, as "variable" there means 1-16 bytes, and dealing with that means brute-forcing decode at every possible starting point. Intel and AMD have both thus found 4-wide decode to be a practical limit. It does not apply to RISC-V, where an instruction is either 32 bits or 2x 16 bits. The added complexity of supporting the C extension is negligible, to the point where if a chip has any cache or ROM in it, using C becomes a net benefit in area and power. Therefore, ARMv8 AArch64 made a critical mistake in adopting a fixed 32-bit opcode size, a mistake we can see in practice in the L1 cache size the Apple M1 needed to compensate for poor code density.

>sadly this partly applies to RISC-V too.

>Not as big of a problem as on x86, but still a fundamental limitation.

There is a large difference between instructions being any size from 1 to 16 bytes (x86) and instructions being either 16 or 32 bits long (RISC-V). As with everything else in RISC-V, the architects did the weighing and found that the advantage in code size overwhelms the (negligible by design) added decoding cost for anything but the tiniest of implementations (no on-die cache and no built-in ROM). As it turns out, it would be difficult to even find a use for such a core, but in any event it is still possible to make such a very specialized chip and simply not use the C extension. Such a use would be deeply embedded, and the vendor would be in control of the full stack, so there would be no compatibility concerns. They would still get the ecosystem benefits: they'd be able to use the open-source toolchains, which support even a naked RV32E with no extensions.
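To make the contrast concrete, here is a small sketch (not from the original comment) of why finding RISC-V instruction boundaries is cheap: per the RISC-V spec, an instruction whose two low bits are 0b11 is 32 bits long, and anything else is a 16-bit C-extension instruction, so a fetch unit needs only a 2-bit test per 16-bit parcel instead of attempting a decode at every byte offset as on x86.

```python
def rv_insn_len(parcel: int) -> int:
    """Length in bytes of the RISC-V instruction starting at this
    16-bit parcel: low bits 0b11 means 32-bit, otherwise 16-bit
    (compressed). Longer encodings reserved by the spec are ignored
    in this sketch."""
    return 4 if (parcel & 0b11) == 0b11 else 2

def boundaries(parcels: list[int]) -> list[int]:
    """Byte offsets of instruction starts in a stream of 16-bit parcels,
    found with one constant-time test per instruction."""
    offsets, i = [], 0
    while i < len(parcels):
        offsets.append(2 * i)
        i += rv_insn_len(parcels[i]) // 2
    return offsets

# c.addi a0,1 (16-bit), addi a0,a0,5 (32-bit, little-endian parcels),
# then c.nop (16-bit):
stream = [0x0505, 0x0513, 0x0055, 0x0001]
print(boundaries(stream))  # [0, 2, 6]
```

On x86 there is no such length-in-the-first-bits rule: instruction length depends on prefixes, opcode, ModRM, SIB and displacement bytes, which is why wide decoders must speculatively start decoding at many byte positions in parallel.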