Before submitting your bug report
- I believe this is a bug. I'll try to join the Continue Discord for questions
- I'm not able to find an open issue that reports the same bug
- I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS: Windows 11 Pro for Workstations, 24H2, OS build 26100.3194
- Continue version: 0.8.68
- IDE version: VSCode 1.96.2 (modified by Cursor AI, version 0.45.11)
- Model: llama3.1
- config.yaml:
```yaml
models:
  - model: claude-3-5-sonnet-latest
    provider: anthropic
    api_key: ""
    title: Claude 3.5 Sonnet
  - model: deepseek-r1:8b
    title: deepseek-r1:8b
    provider: ollama
  - model: deepseek-r1:1.5b
    title: deepseek-r1:1.5b
    provider: ollama
  - model: llama3.1
    title: llama3.1
    provider: ollama
  - model: qwen2.5-coder
    title: qwen2.5-coder
    provider: ollama
  - model: qwen2.5-coder:3b
    title: qwen2.5-coder:3b
    provider: ollama
tabAutocompleteModel:
  title: qwen2.5-coder:3b
  provider: ollama
  model: qwen2.5-coder:3b
contextProviders:
  - name: code
    params: {}
  - name: docs
    params: {}
  - name: diff
    params: {}
  - name: terminal
    params: {}
  - name: problems
    params: {}
  - name: folder
    params: {}
  - name: codebase
    params: {}
slashCommands:
  - name: share
    description: Export the current chat session to markdown
  - name: cmd
    description: Generate a shell command, explain after showing the command, not before.
  - name: commit
    description: Generate a git commit message
```
Description
Issue with Displaying HTML Code
When I asked the chatbot to give its response as HTML code, it failed to render the HTML in its interface. However, when I exported the session with the /share command or copied the response, the HTML code was present in the text; it simply was not rendered in the UI.
Model response exported via the /share command:
Analysis of the Issue
I suspect the chat interface receives the HTML but that rendering it has been disabled somehow. This suggests there may be a configuration issue or a bug preventing HTML code from being rendered correctly.
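I am not certain which renderer the Continue GUI uses, but many chat UIs display model output through a markdown component that refuses raw HTML by default for safety. The following minimal TypeScript/React sketch shows that pattern using react-markdown; the `reply` string and the rehype-raw opt-in are illustrative assumptions, not taken from the Continue codebase:

```tsx
import React from "react";
import ReactMarkdown from "react-markdown";
import rehypeRaw from "rehype-raw";

// Illustrative model reply containing raw HTML, as in this report.
const reply =
  "Here is the markdown table converted to HTML:\n\n" +
  "<table><tr><th>Short Name</th></tr></table>";

// By default, react-markdown does not render raw HTML (depending on the
// version and options it may be skipped entirely or shown as literal
// text) -- which matches the symptom described above.
export const Default = () => <ReactMarkdown>{reply}</ReactMarkdown>;

// Only with an explicit opt-in such as rehype-raw is the HTML actually
// parsed and rendered as a table.
export const WithRawHtml = () => (
  <ReactMarkdown rehypePlugins={[rehypeRaw]}>{reply}</ReactMarkdown>
);
```

If the Continue UI applies a similar safety default, the behavior I am seeing may be intentional sanitization rather than a rendering bug.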
Attempted Solutions
To work around the issue, I tried restarting the IDE, but the problem persisted.
Additional Information
I have attached the chat session history in the _session.md file, which contains more details about this issue.
Here is the information about my IDE:
```
VSCode Version: 1.96.2
Commit: f5f18731406b73244e0558ee7716d77c8096d150
Date: 2025-02-07T09:43:58.555Z
Electron: 32.2.6
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.26100
```
Operating system details:
```
Edition: Windows 11 Pro for Workstations
Version: 24H2
Installed on: 04/01/2025
OS build: 26100.3194
Experience: Windows Feature Experience Pack 1000.26100.48.0
```
Please let me know if you have any questions.
To reproduce
To reproduce the issue using the Llama 3.1 model via Ollama:
- Ask the chatbot to display a table in Markdown format.
- Request it to render the same table in HTML format.
- Observe that the HTML table appears as raw code but is not rendered in the chat UI (a direct-API check is sketched below the screenshot).
Screenshot:
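To confirm that the model itself returns valid HTML and that the problem is limited to the UI, the same request can be sent to Ollama directly. A minimal TypeScript sketch, assuming Ollama is running on its default port and abbreviating the prompt from the log below:

```ts
// Query Ollama's chat API directly, bypassing the Continue UI, to verify
// that llama3.1 returns the HTML table in its raw completion text.
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1",
    stream: false,
    messages: [
      // Abbreviated stand-in for the full two-turn prompt in the log.
      { role: "user", content: "convert the table in html" },
    ],
  }),
});
const data = await res.json();
// The <table>...</table> markup is present here, so the model output
// is fine; the rendering problem is on the UI side.
console.log(data.message.content);
```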
Log output
```
==========================================================================
==========================================================================
##### Completion options #####
{
"contextLength": 8096,
"model": "llama3.1",
"maxTokens": 4096,
"stop": [
"<|start_header_id|>",
"<|end_header_id|>",
"<|eot_id|>"
]
}
##### Request options #####
{}
##### Prompt #####
<user>
convert to markdown table:
SuperGLUE Tasks / Benchmarks:
Short NameFull NameDescriptionBoolQBoolean QuestionsInvolves answering a yes/no question based on a short passage.CBCommitmentBankTests understanding of entailment and contradiction in a three-sentence format.COPAChoice of Plausible AlternativesMeasures causal reasoning by asking for the cause/effect of a given sentence.MultiRCMulti-Sentence Reading ComprehensionInvolves answering questions about a paragraph where each question may have multiple correct answers.ReCoRDReading Comprehension with Commonsense ReasoningRequires selecting the correct named entity from a passage to fill in the blank of a question.RTERecognizing Textual EntailmentInvolves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.WiCWords in ContextTests understanding of word sense disambiguation in different contexts.WSCWinograd Schema ChallengeFocuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning.AX-bBroad Coverage DiagnosticA diagnostic set to evaluate model performance on a broad range of linguistic phenomena.AX-gWinogender Schema DiagnosticsTests for the presence of gender bias in automated coreference resolution systems.
<assistant>
Here is the text converted to a markdown table:
| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |
<user>
convert the table in html
==========================================================================
==========================================================================
##### Completion options #####
{
"contextLength": 8096,
"model": "llama3.1",
"maxTokens": 12,
"stop": [
"<|start_header_id|>",
"<|end_header_id|>",
"<|eot_id|>"
]
}
##### Request options #####
{}
##### Prompt #####
<user>
Given the following... please reply with a title for the chat that is 3-4 words in length, all words used should be directly related to the content of the chat, avoid using verbs unless they are directly related to the content of the chat, no additional text or explanation, you don't need ending punctuation.
Here is the text converted to a markdown table:
| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |
==========================================================================
==========================================================================
Completion:
Natural Language Tasks
==========================================================================
==========================================================================
##### Completion options #####
{
"contextLength": 8096,
"model": "llama3.1",
"maxTokens": 4096,
"stop": [
"<|start_header_id|>",
"<|end_header_id|>",
"<|eot_id|>"
]
}
##### Request options #####
{}
##### Prompt #####
<user>
convert to markdown table:
SuperGLUE Tasks / Benchmarks:
Short NameFull NameDescriptionBoolQBoolean QuestionsInvolves answering a yes/no question based on a short passage.CBCommitmentBankTests understanding of entailment and contradiction in a three-sentence format.COPAChoice of Plausible AlternativesMeasures causal reasoning by asking for the cause/effect of a given sentence.MultiRCMulti-Sentence Reading ComprehensionInvolves answering questions about a paragraph where each question may have multiple correct answers.ReCoRDReading Comprehension with Commonsense ReasoningRequires selecting the correct named entity from a passage to fill in the blank of a question.RTERecognizing Textual EntailmentInvolves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.WiCWords in ContextTests understanding of word sense disambiguation in different contexts.WSCWinograd Schema ChallengeFocuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning.AX-bBroad Coverage DiagnosticA diagnostic set to evaluate model performance on a broad range of linguistic phenomena.AX-gWinogender Schema DiagnosticsTests for the presence of gender bias in automated coreference resolution systems.
<assistant>
Here is the text converted to a markdown table:
| **Short Name** | **Full Name** | **Description** |
| --- | --- | --- |
| BoolQ | Boolean Questions | Involves answering a yes/no question based on a short passage. |
| CB | CommitmentBank | Tests understanding of entailment and contradiction in a three-sentence format. |
| COPA | Choice of Plausible Alternatives | Measures causal reasoning by asking for the cause/effect of a given sentence. |
| MultiRC | Multi-Sentence Reading Comprehension | Involves answering questions about a paragraph where each question may have multiple correct answers. |
| ReCoRD | Reading Comprehension with Commonsense Reasoning | Requires selecting the correct named entity from a passage to fill in the blank of a question. |
| RTE | Recognizing Textual Entailment | Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence. |
| WiC | Words in Context | Tests understanding of word sense disambiguation in different contexts. |
| WSC | Winograd Schema Challenge | Focuses on resolving coreference resolution within a sentence, often requiring commonsense reasoning. |
| AX-b | Broad Coverage Diagnostic | A diagnostic set to evaluate model performance on a broad range of linguistic phenomena. |
| AX-g | Winogender Schema Diagnostics | Tests for the presence of gender bias in automated coreference resolution systems. |
<user>
convert the table in html
==========================================================================
==========================================================================
Completion:
Here is the markdown table converted to HTML:
<table>
<tr>
<th>Short Name</th>
<th>Full Name</th>
<th>Description</th>
</tr>
<tr>
<td>BoolQ</td>
<td>Boolean Questions</td>
<td>Involves answering a yes/no question based on a short passage.</td>
</tr>
<tr>
<td>CB</td>
<td>CommitmentBank</td>
<td>Tests understanding of entailment and contradiction in a three-sentence format.</td>
</tr>
<tr>
<td>COPA</td>
<td>Choice of Plausible Alternatives</td>
<td>Measures causal reasoning by asking for the cause/effect of a given sentence.</td>
</tr>
<tr>
<td>MultiRC</td>
<td>Multi-Sentence Reading Comprehension</td>
<td>Involves answering questions about a paragraph where each question may have multiple correct answers.</td>
</tr>
<tr>
<td>ReCoRD</td>
<td>Reading Comprehension with Commonsense Reasoning</td>
<td>Requires selecting the correct named entity from a passage to fill in the blank of a question.</td>
</tr>
<tr>
<td>RTE</td>
<td>Recognizing Textual Entailment</td>
<td>Involves identifying whether a sentence entails, contradicts, or is neutral towards another sentence.</td>
</tr>
<tr>
<td>WiC</td>
<td>Words in Context</td>
<td>Tests understanding of word sense disambiguation in different contexts.</td>
</tr>
<tr>
<td>WSC</td>
<td>Winograd Schema Challenge</td>
<td>Focusing on resolving coreference resolution within a sentence, often requiring commonsense reasoning.</td>
</tr>
<tr>
<td>AX-b</td>
<td>Broad Coverage Diagnostic</td>
<td>A diagnostic set to evaluate model performance on a broad range of linguistic phenomena.</td>
</tr>
<tr>
<td>AX-g</td>
<td>Winogender Schema Diagnostics</td>
<td>Tests for the presence of gender bias in automated coreference resolution systems.</td>
</tr>
</table>
Let me know if you'd like any further modifications.
```