feat: showcase thinking steps #98
Conversation
@@ -6,7 +6,7 @@ Models from [OpenAI](https://openai.com) are used and can be customized for prom
## Setup
note: still deciding how to restructure README.md to make setup easier. I think the most obvious point of confusion is deciding whether to use the CLI or the terminal to set up — I think a toggle like the collapsed example could help.
@srtaalej Thanks for calling this out! 📚 ✨ I agree it's confusing and hope we can encourage the CLI most.
IIRC @lukegalbraithrussell had thoughts on related changes as well, and we should match those in slack-samples/bolt-python-assistant-template#42 too!
listeners/assistant/message.js
input: `System: ${DEFAULT_SYSTEM_CONTENT}\n\nUser: ${llmPrompt}`,
stream: true,
// This first example shows a generated text response for the provided prompt
if (message.text !== 'Wonder a few deep thoughts.') {
📝 note: This `!=` and the cases it contains were switched in the final commits of slack-samples/bolt-python-assistant-template#37, which we might want to match?
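A minimal sketch of the flipped branch order described above — the function name and return values here are illustrative only, not the sample's actual code:

```javascript
// Hypothetical helper sketching the branch order from the Python sample:
// the scripted prompt takes the first branch, the tool-call flow the second.
function routeExample(text) {
  if (text === 'Wonder a few deep thoughts.') {
    return 'thinking-steps-example';
  }
  return 'tool-call-example';
}
```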
zimeg
left a comment
@srtaalej Sweet, I'm liking the results of a roll I prompted 🏆
I'm requesting a few changes in hopes of focusing most on tool calls with this example, and also want to showcase the buffer in text streaming 🤖
Tests related to type checking might need a small change too, but once we have CI passing with a prerelease I think we should look toward merging this!
/**
 * Tool definition for OpenAI API
 *
 * @see {@link https://platform.openai.com/docs/guides/function-calling}
 */
export const rollDiceDefinition = {
🪬 note: Using JSDoc to match the expected type of the openai.responses.create arguments might be required since we're exporting from this package? I'm unsure what that type might be, but am hoping it'd address the error above!
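A hedged sketch of what that JSDoc typing could look like — the `@type` import path and the `roll_dice` parameter schema below are assumptions, not the sample's actual code:

```javascript
/**
 * Tool definition for the OpenAI Responses API.
 *
 * The @type import path below is an assumption and may differ between
 * openai-node versions; being a comment, it has no runtime cost either way.
 *
 * @type {import('openai/resources/responses/responses').Tool}
 * @see {@link https://platform.openai.com/docs/guides/function-calling}
 */
const rollDiceDefinition = {
  type: 'function',
  name: 'roll_dice',
  description: 'Roll a die with the given number of sides and return the result',
  parameters: {
    type: 'object',
    properties: {
      sides: { type: 'number', description: 'Number of sides on the die' },
    },
    required: ['sides'],
    additionalProperties: false,
  },
  strict: true,
};
```

In the sample this would stay an `export const`; note the flattened `name`/`parameters` shape is the Responses API variant rather than the nested Chat Completions one.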
Co-authored-by: Eden Zimbelman <eden.zimbelman@salesforce.com>
mwbrooks
left a comment
🙌🏻 Thanks for getting the thinking steps demo together! It's looking closely aligned with the Python demo 👌🏻
✏️ I've left a few suggestions and I'll begin manual testing next!
listeners/assistant/message.js
const { channel, thread_ts } = message;
const { userId, teamId } = context;

// The first example shows detailed thinking steps similar to tool calls
question: Should this comment be swapped with the else clause? The if seems to handle the scripted "Wonder a thought" example while the else handles the roll-the-dice tool call.
@mwbrooks We should reword this to be clearer in both samples IMHO, but I think the order is correct. I'm curious about other changes too, but might suggest:
The first example shows a message with thinking steps that has different chunks to construct and update a plan alongside text outputs.
model: 'gpt-4o-mini',
input: `System: ${DEFAULT_SYSTEM_CONTENT}\n\nUser: ${llmPrompt}`,
stream: true,
await streamer.stop({
suggestion: I think we should add the feedback blocks to this .stop (and add it to Python if it's missing).
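A sketch of what that might look like, assuming `stop()` accepts trailing `blocks` and using the feedback-buttons element from Slack's AI apps Block Kit docs — both the option name and the exact block shape are assumptions here:

```javascript
// Hypothetical feedback blocks, shaped after Slack's feedback_buttons
// element for AI apps; the action_id and button values are placeholders.
const feedbackBlocks = [
  {
    type: 'context_actions',
    elements: [
      {
        type: 'feedback_buttons',
        action_id: 'feedback', // hypothetical action_id
        positive_button: {
          text: { type: 'plain_text', text: 'Good response' },
          value: 'good',
        },
        negative_button: {
          text: { type: 'plain_text', text: 'Bad response' },
          value: 'bad',
        },
      },
    ],
  },
];

// Then, assuming stop() accepts final blocks to append:
// await streamer.stop({ blocks: feedbackBlocks });
```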
agent/llm_caller.js
 * @see {@link https://platform.openai.com/docs/guides/streaming-responses}
 * @see {@link https://platform.openai.com/docs/guides/function-calling}
 */
export async function callLlm(streamer, prompts) {
🐣 note: I'm open to that! But I'm unsure of best practices for this. As long as it's not the following: `callllm`
manifest.json
"assistant_thread_started",
"message.im"
]
"bot_events": ["app_mention", "assistant_thread_context_changed", "assistant_thread_started", "message.im"]
suggestion: I'd suggest putting each item of the array on a newline to match the JSON file format.
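That is, matching the multi-line array formatting used elsewhere in manifest.json:

```json
"bot_events": [
  "app_mention",
  "assistant_thread_context_changed",
  "assistant_thread_started",
  "message.im"
]
```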
mwbrooks
left a comment
🧪 Manually testing the sample works g-g-g-g-g-great! 👏🏻
zimeg
left a comment
💡 Praises to these changes so far! Thanks! ✨
Responses are generating well for me, but I left a few more comments with changes for CI checks. Some of these can be caught with this command too:
$ npm run check
But I realize that's not wired up as a proper "test" script 🧪
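One possible wiring, assuming a `test` script that simply aliases the existing check — a hypothetical package.json fragment, not the template's current contents:

```json
{
  "scripts": {
    "test": "npm run check"
  }
}
```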
The comments @mwbrooks shared are all solid finds too! Once these are finalized, let's mirror similar updates to the Python implementation 🙏
// OpenAI LLM client
export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
🌚 question: Is the openai client still used outside of this package, or could we remove the export here?
Good catch! It is not.
Co-authored-by: Michael Brooks <mbrooks@slack-corp.com>
Co-authored-by: Eden Zimbelman <eden.zimbelman@salesforce.com>
… message.js example 1
mwbrooks
left a comment
✅ Thanks a bunch for addressing all of our feedback @srtaalej!
🧪 Manual testing works great on my side! 🕹️
📝 I've left one final suggestion to rename callLLm to callLLM (or back to callLlm). Once you've updated that, I think we're good to merge it!
zimeg
left a comment
🗣️ Nice! I'm leaving a few more polish comments but will approve this for now.
Let's make sure our checks are passing with development builds linked before merging this. One comment I left might require an upstream change beforehand, but I'm so glad this is caught in this PR!
Some file casing conventions might be nice to update here as well, but this can be saved for follow-up too 👾
Co-authored-by: Eden Zimbelman <eden.zimbelman@salesforce.com>
Thank you @zimeg for the ChatStreamer export PR! I'll update here once we get that merged ⭐ ⭐ ⭐
@srtaalej I've merged slackapi/node-slack-sdk#2481, so the change can be tried with a linked development build:
git pull origin feat-ai-apps-thinking-steps
cd web-api
npm run build
npm link
cd bolt-js-assistant-template
npm link @slack/web-api
Type of change
Summary
This PR showcases "chunks" in chat streams. Related: slackapi/node-slack-sdk#2467
Demo videos 📹
dice-demo.mov
wonder-demo.mov
Requirements