-
I'm actually working on something to solve this... I haven't released this video publicly yet, but it's relevant to this conversation, so here it is; I'd love some testing and feedback. I recorded it this morning, and I'm already having it create and use a zip file with a SQL .db to continually build a database of our conversation history, which I can then download and re-upload, all using the data analysis function. https://www.youtube.com/watch?v=O5K_ck0p2uE&ab_channel=SynapticLabs
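For anyone curious what that looks like in practice, here is a minimal sketch of the kind of script a data analysis (Code Interpreter) session could run; the table schema, file names, and sample messages are my own assumptions, not taken from the video.

```python
# Minimal sketch: persist conversation turns in a SQLite .db, then zip it
# so it can be downloaded and re-uploaded in a later session.
# Schema, file names, and the messages list below are assumptions.
import sqlite3
import zipfile

DB_PATH = "conversation_history.db"
ZIP_PATH = "conversation_history.zip"

# Hypothetical turns captured from the current conversation
messages = [
    ("user", "How do I persist our chat history?"),
    ("assistant", "We can store each turn in a SQLite database and zip it for download."),
]

# Create (or extend) the database of conversation turns
conn = sqlite3.connect(DB_PATH)
conn.execute(
    """CREATE TABLE IF NOT EXISTS history (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           role TEXT NOT NULL,
           content TEXT NOT NULL,
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)
conn.executemany("INSERT INTO history (role, content) VALUES (?, ?)", messages)
conn.commit()
conn.close()

# Package the .db so it survives as a single downloadable file
with zipfile.ZipFile(ZIP_PATH, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(DB_PATH)

print(f"Wrote {ZIP_PATH} containing {DB_PATH}")
```

Re-uploading the zip at the start of a new session and pointing the model at it would let the database keep growing across conversations.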
-
Synapse_CoR is verbose, which makes it possible to get relevant answers. And that's great :)
The downside is that context is lost once the conversation (quickly) reaches the 8,192-token limit allowed by GPT.
I thought of querying GPT from time to time to find out how many intermediate tokens have been used since the start of the conversation.
Here's the prompt: "Total number of tokens used so far, including this reply?"
That's pretty handy.
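If you want an estimate you can check outside the chat, counting tokens locally with tiktoken (OpenAI's tokenizer library) is one option. This is only a sketch of that alternative, not the prompt-based approach above; the conversation list here is a placeholder you would accumulate yourself as the chat grows.

```python
# Rough local token count for a conversation, using OpenAI's tiktoken.
# The conversation list is an assumption; build it up as you chat.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

conversation = [
    {"role": "system", "content": "You are Synapse_CoR..."},
    {"role": "user", "content": "Summarize our discussion so far."},
    {"role": "assistant", "content": "Here is a summary..."},
]

# Approximate count: sum the tokens of every message's content
total = sum(len(enc.encode(msg["content"])) for msg in conversation)
print(f"Approximate tokens used so far: {total}")
```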
I couldn't include this instruction in the custom instructions because of the 1,500-character limit; the few rules I've already added to customize Synapse_CoR leave me no margin :(
If you have any other solutions, I'd love to hear from you :)