Integrate the wikitext task in the webapp #675
Conversation
tharvik left a comment:
Woohoo, GPT in the browser, well done!
Testing the model is a bit weird, as there is no specific handler for it (I got 5000% accuracy, such an impressive model). But that's alright for now IMO; we can add it in a later iteration.
Did you really get 5000% accuracy? If so, that's alarming; I don't see how it could happen.
Yep, it's fluctuating around 5000% ± 1000%, and it uses a huge amount of memory (before crashing my tab). To reproduce: train wikitext once with "wiki.train.tokens", then test it with "wiki.test.tokens". I'm using Firefox, FWIW. Before crashing, it shows the following line in the console:
Indeed, I'll try to fix that before merging the PR; testing the model takes up more than 50 GB of memory.
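As an aside (not from the PR itself): an accuracy above 100% usually means the numerator and denominator are counted at different granularities, e.g. correct *token* predictions divided by the number of *sequences* rather than the total token count. A minimal sketch with hypothetical names, illustrating how a 50%-accurate model on 100-token sequences can report "5000%":

```javascript
// Hypothetical illustration: why a mis-scaled accuracy metric can exceed 100%.
// Each batch holds one sequence of tokens; the buggy version counts correct
// tokens but divides by the number of sequences, inflating accuracy ~100x here.
function buggyAccuracy(batches) {
  let correct = 0;
  for (const { predictions, labels } of batches) {
    for (let i = 0; i < labels.length; i++) {
      if (predictions[i] === labels[i]) correct++;
    }
  }
  return correct / batches.length; // BUG: should divide by total token count
}

function fixedAccuracy(batches) {
  let correct = 0;
  let total = 0;
  for (const { predictions, labels } of batches) {
    total += labels.length;
    for (let i = 0; i < labels.length; i++) {
      if (predictions[i] === labels[i]) correct++;
    }
  }
  return correct / total; // correct tokens over total tokens
}

// One sequence of 100 tokens, half predicted correctly:
const batch = {
  predictions: Array.from({ length: 100 }, (_, i) => (i < 50 ? i : -1)),
  labels: Array.from({ length: 100 }, (_, i) => i),
};
console.log(buggyAccuracy([batch])); // 50, i.e. displayed as "5000%"
console.log(fixedAccuracy([batch])); // 0.5
```

Whether this is the actual cause in the PR is speculation; it is just the most common way such a number appears.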
@tharvik all fixed, I reworked the validator:
Until an LLM UI is implemented, running inference is useless, as it doesn't display anything.
Superb, it tests nicely now 🥳 (tfjs is shitty indeed)
Hmm, no, sadly you have to have a while loop, as
Yep, I'm trying to draft up something basic in my PR; let's see what comes out of it :)


Fixes "Can't load tokenizer from the web-client" #669: fixed by preventing local caching.
Fixes "WebGL prevents training wikitext on Firefox" #673: added an error message suggesting changing browser.
Fixes "Saving and loading GPT-tfjs models with IndexedDB fails with error" #674: extended memory caching and saving for gpt-tfjs.
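The first fix (preventing local caching of the tokenizer) could look like the following sketch. The function name and URL are hypothetical, not from the PR; the `cache: 'no-store'` option is the standard Fetch API way to bypass the browser's HTTP cache so a stale or corrupt cached copy can't break loading:

```javascript
// Hypothetical sketch: fetch the tokenizer config while skipping the
// browser's HTTP cache entirely, so every load gets a fresh copy.
async function loadTokenizerConfig(url) {
  const response = await fetch(url, { cache: 'no-store' }); // bypass local cache
  if (!response.ok) {
    throw new Error(`Failed to load tokenizer config: HTTP ${response.status}`);
  }
  return response.json(); // parsed tokenizer configuration
}
```

The trade-off is that every session re-downloads the tokenizer; an alternative would be `cache: 'reload'`, which also fetches fresh bytes but updates the cache for other consumers.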