Add a Rust WASM template #1977

Merged
merged 22 commits into nesbox:main on Jul 13, 2022

Conversation

@soxfox42 (Contributor) commented Jul 9, 2022

Based on the WASM-4 Rust template, this should allow WASM TIC-80 cartridges to be built using Rust.

It should be pretty much as functional as the Zig template, including safe wrappers for the TIC-80 API as well as the raw bindings and raw memory access.

One issue still remains, and while it doesn't prevent the use of this template, it does limit it. TIC-80's memory layout uses address 0, which is essentially impossible to access from Rust, since the compiler treats address 0 as a null pointer and assumes no valid reference can point there. Currently, the template defaults to building without any optimisations, which does allow access to the FRAMEBUFFER array, but also significantly increases cart size and reduces performance.
One possible way I can see to fix this would be to implement an alternate memory layout. It would require an extra pointer or offset so that memcpy, memset, peek, and poke could access the correct region of memory even if the MMIO wasn't at the beginning of memory, but since extra data about the memory needs to be stored to solve #1956 anyway, I think this would be okay. I'm happy to implement this, probably along with a fix for #1956. Would the extra logic to handle this be acceptable?

Anyway, whatever happens with the memory situation, this template should be in a perfectly usable state, even if it doesn't have ideal performance.

@nesbox (Owner) left a comment

Thank you for the Rust template 👍

@nesbox merged commit b56dc40 into nesbox:main on Jul 13, 2022
@joshgoebel (Collaborator)

This is really a Rust problem IMHO... we're effectively "legacy" hardware, and Rust has to deal with us as-is... it doesn't make sense to add complexity for a single compiled language in a single runtime. Rust could add a FRAMEBUFFER1 pointer and just avoid ever writing to the first byte of the display... yeah, it sucks, but...

One possible way I can see to fix this would be to implement an alternate memory layout.

So I think we've (@nesbox) been pretty against this in the past - I'd guess because of the complexity and edge cases this introduces. If you're really chomping at the bit AND the change was small and non-invasive (a few lines somewhere) I'd go to bat for you, but I fear it is likely to be neither. How were you planning to solve this?

along with a fix to #1956

A fix for this would be a great PR though.

@soxfox42 (Contributor, Author) commented Jul 13, 2022

Yeah, I do see that it's a language-specific issue, and Rust is now usable here, so it's not too big an issue if it can't be fixed.

How were you planning to solve this?

Each of the memory access APIs currently accesses RAM by obtaining a pointer with something like u8* ram = (u8*)memory->ram;. I was thinking I could change this to instead store a second pointer inside tic_mem (and also the memory size, for #1956), then use that pointer instead. Then initWasm could detect a specific metatag to decide the value of core->memory.ram. The changes would be spread across each of the functions that access RAM, but I think it's still a fairly minor change if implemented that way.
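
For concreteness, the struct change would look roughly like this (just a sketch; the field names are placeholders, not final code):

// Sketch only: tic_mem keeps two views of the same linear memory.
typedef struct
{
    tic_ram* ram;  // MMIO view; offset when a cart opts into the alternate layout
    u8* full_ram;  // always the true start of RAM
    u32 ram_size;  // total size in bytes (also useful for #1956)
    // ...existing fields...
} tic_mem;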

@joshgoebel (Collaborator)

(and also the memory size for #1956),

It might be better to track this in the per-runtime configuration, though I'm forgetting how easy that is to access during API calls.

was thinking I could change this to instead store a second pointer inside tic_mem

A pointer to where? We're not even in charge of that allocation... WASM allocates it... so it isn't possible to over-allocate... you'd really need to subtract from the pointer (point earlier) to move the MMIO higher, wouldn't you? And if we're not in charge of the allocation - then we can't front pad it with the extra RAM we'd need. Am I missing something?

@joshgoebel (Collaborator)

For this to work properly you'd need all the RAM shifted all the time, would you not? Just fixing the memory APIs would be only half the problem... pointers to RAM would also need to know about the shift which means that the WASM VM would have to be aware as well, which is a whole other ballpark.

@soxfox42 (Contributor, Author)

A pointer to where?

To wasm_ram (from m3_GetMemory), exactly as it is now. It's actually the pointer to tic_ram that I would have to offset.

pointers to RAM would also need to know about the shift

See above: core->memory.ram would be updated, and all the other code is already using that pointer.

It might be better to track this in the per-runtime configuration

Fair point, I'll look into that, thanks.
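
Putting that together, the initWasm side would be something like this (a sketch; runtime and mmio_offset are placeholder names, though m3_GetMemory is the actual wasm3 call):

// Sketch: the wasm3 linear memory stays exactly where m3_GetMemory says it is;
// only the tic_ram (MMIO) view gets offset.
u32 mem_size = 0;
u8* wasm_ram = m3_GetMemory(runtime, &mem_size, 0);
core->memory.full_ram = wasm_ram;
core->memory.ram_size = mem_size;
core->memory.ram = (tic_ram*)(wasm_ram + mmio_offset); // offset is 0 by default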

@joshgoebel (Collaborator)

It's actually the pointer to tic_ram that I would have to offset.

Yes, but then wouldn't that expose internal data structure and require extra bounds checks on the low side? And how does that solve pointers inside the VM runtime, which isn't using the tic_ram pointer at all?

@joshgoebel (Collaborator) commented Jul 13, 2022

If you think it's trivial and want to spec it and write it (with no promise it'll be merged) I would happily review and go to bat for you IF I thought it was low enough complexity. @nesbox is the real gatekeeper here though. My worry is that to do it fully correctly it's more complex than you think - but it's possible I'm mistaken. :-)

The compiler would need to know about this RAM shift as well, wouldn't it, so that it knows where to find any globals it placed in RAM?

@soxfox42 (Contributor, Author)

.-----------------------------+------.
| General Memory              | MMIO |
'-----------------------------+------'
^                             ^
|                             |
|                             |
'-----------------------.     |
                        |     |
.------------------.    |     |
| tic_mem          |    |     |
+--------------+---+    |     |
| u8* full_ram | o-+----'     |
| tic_ram* ram | o-+----------'
| ...          |   |
'--------------+---'

wouldn't that expose internal data structure and require extra bounds checks on the low side?

I still don't see how this exposes anything that didn't already exist. I'm simply decoupling the MMIO region from the start of RAM so it can be moved, as shown above. Most API functions read the exact same ram pointer, which simply points at a slightly different region of memory (if a cart has the metatag that requests it). memcpy, memset, peek, and poke would instead use full_ram, which is functionally equivalent to the pointer they currently obtain by casting memory->ram.
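
Concretely, the byte accessors would end up looking roughly like this (a sketch with simplified signatures, not the exact TIC-80 functions):

// Sketch: accessors index full_ram rather than casting memory->ram,
// so they keep working when ram no longer points at offset 0.
u8 ram_peek(tic_mem* memory, s32 address)
{
    return (address >= 0 && (u32)address < memory->ram_size)
        ? memory->full_ram[address] : 0;
}

void ram_poke(tic_mem* memory, s32 address, u8 value)
{
    if (address >= 0 && (u32)address < memory->ram_size)
        memory->full_ram[address] = value;
}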

And how does that solve pointers inside the VM runtime

I'm not sure there is anything to be solved, other than tweaking the template to use the alternate MMIO addresses. The VM still sees memory as starting at 0, and all pointers should work fine and match up with the full_ram addresses.

The compiler would need to know about this RAM shift as well, wouldn't it

Yeah, but since it's mostly relevant to the Rust template, I'll just tweak its link-args. If for any reason another template later wants to use the other layout, that can be handled then.

My worry is that to do it fully correctly it's more complex than you think

Quite possible, but I'm happy to give it a shot anyway.

@joshgoebel (Collaborator)

Quite possible, but I'm happy to give it a shot anyway.

All right then... :-) Good luck.
