feat: refactor LLM model selection and attack surface analysis #233

Open · wants to merge 21 commits into base: release/2.2.0

Changes from 1 commit (of 21 commits)
284d83b
refactor: replace GPT with LLM for vulnerability reporting
psyray Nov 10, 2024
3ad4ea6
feat: enhance LLM model selection and attack surface analysis
psyray Nov 11, 2024
82fccad
refactor(logging): improve vulnerability logging details
psyray Nov 11, 2024
b170a31
feat(ui): enhance LLM toolkit UI and refactor model management
psyray Nov 11, 2024
d6a1b4b
feat(llm): convert LLM markdown response to HTML and sanitize it
psyray Nov 11, 2024
678e317
feat: enhance attack surface analysis with model selection and deleti…
psyray Nov 11, 2024
916ed10
feat: enhance markdown rendering and update UI settings
psyray Nov 11, 2024
7e87530
refactor: update LLM vulnerability report generation and storage
psyray Nov 12, 2024
e7e56c1
fix: remove unused imports
psyray Nov 12, 2024
5390b37
fix: enable section response generation
psyray Nov 12, 2024
20823c0
refactor: update fixtures and permissions, remove unused data
psyray Nov 13, 2024
0d3f6ba
feat: enhance reference handling
psyray Nov 13, 2024
f5f37f4
fix: task reference conversion
psyray Nov 13, 2024
ac29d7f
fix: update model selection logic
psyray Nov 14, 2024
c5d0496
feat: enhance model management and download functionality in LLM Toolkit
psyray Nov 14, 2024
a8bd492
feat: integrate WebSocket support for model operations
psyray Nov 17, 2024
638ea94
test: enhance OllamaManager and LLM tests with additional mock setups
psyray Nov 17, 2024
79b52e5
refactor: remove cancel download feature
psyray Nov 17, 2024
d9d7af4
feat: add model name to progress bar popup
psyray Nov 17, 2024
5d489f7
fix: update model selection API endpoint
psyray Nov 17, 2024
3809741
feat: enhance model URL handling
psyray Nov 18, 2024
feat(ui): enhance LLM toolkit UI and refactor model management
- Improved the UI of the LLM toolkit by updating the layout and adding badges for model status and capabilities.
- Refactored the model management logic to use a centralized API call for fetching model data, improving error handling and reducing code duplication.
- Updated the model requirements configuration to enhance readability and consistency in the description of model capabilities.
- Adjusted the modal size for displaying model options to provide a better user experience.
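The model-requirements configuration mentioned above lends itself to capability-based filtering. Here is a minimal sketch; the `MODEL_REQUIREMENTS` entries and the `models_for` helper are hypothetical, loosely mirroring the `config.py` entries in this PR rather than reNgine's actual code:

```python
# Hypothetical helper: pick models by the human-readable 'best_for'
# capability labels introduced in this commit.
MODEL_REQUIREMENTS = {
    'gpt-4': {
        'max_tokens': 8192,
        'supports_functions': True,
        'best_for': ['Deep security analysis', 'Complex reasoning'],
    },
    'llama3': {
        'max_tokens': 8192,
        'supports_functions': False,
        'best_for': ['Advanced reasoning', 'Technical analysis'],
    },
}

def models_for(capability: str) -> list[str]:
    """Return model names whose 'best_for' list mentions the capability."""
    return [
        name for name, cfg in MODEL_REQUIREMENTS.items()
        if any(capability.lower() in c.lower() for c in cfg['best_for'])
    ]
```

A UI like the one in this commit could use such a helper to group or badge models by capability.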
psyray committed Nov 11, 2024
commit b170a31bac3997e453abef2d55768c348e88c4d9
34 changes: 17 additions & 17 deletions web/reNgine/llm/config.py
Original file line number Diff line number Diff line change
@@ -175,28 +175,28 @@
'min_tokens': 64,
'max_tokens': 2048,
'supports_functions': True,
'best_for': ['basic_analysis', 'general_purpose'],
'best_for': ['Basic analysis', 'General purpose tasks'],
'provider': 'openai'
},
'gpt-3.5-turbo': {
'min_tokens': 64,
'max_tokens': 4096,
'supports_functions': True,
'best_for': ['quick_analysis', 'basic_suggestions', 'cost_effective'],
'best_for': ['Quick analysis', 'Basic suggestions', 'Cost effective solutions'],
'provider': 'openai'
},
'gpt-4': {
'min_tokens': 128,
'max_tokens': 8192,
'supports_functions': True,
'best_for': ['deep_analysis', 'complex_reasoning', 'advanced_security'],
'best_for': ['Deep security analysis', 'Complex reasoning', 'Advanced security tasks'],
'provider': 'openai'
},
'gpt-4-turbo': {
'min_tokens': 128,
'max_tokens': 128000,
'supports_functions': True,
'best_for': ['complex_analysis', 'technical_details', 'latest_capabilities'],
'best_for': ['Complex analysis', 'Technical details', 'Latest AI capabilities'],
'provider': 'openai'
},

@@ -205,35 +205,35 @@
'min_tokens': 32,
'max_tokens': 4096,
'supports_functions': False,
'best_for': ['local_processing', 'privacy_focused', 'balanced_performance'],
'best_for': ['Local processing', 'Privacy focused tasks', 'Balanced performance'],
'provider': 'ollama'
},
'llama2-uncensored': {
'min_tokens': 32,
'max_tokens': 4096,
'supports_functions': False,
'best_for': ['unfiltered_analysis', 'security_research', 'red_teaming'],
'best_for': ['Unfiltered analysis', 'Security research', 'Red team operations'],
'provider': 'ollama'
},
'llama3': {
'min_tokens': 64,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['advanced_reasoning', 'improved_context', 'technical_analysis'],
'best_for': ['Advanced reasoning', 'Improved context', 'Technical analysis'],
'provider': 'ollama'
},
'llama3.1': {
'min_tokens': 64,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['enhanced_comprehension', 'security_assessment', 'detailed_analysis'],
'best_for': ['Enhanced comprehension', 'Security assessment', 'Detailed analysis'],
'provider': 'ollama'
},
'llama3.2': {
'min_tokens': 64,
'max_tokens': 16384,
'supports_functions': False,
'best_for': ['long_context', 'complex_security_analysis', 'advanced_reasoning'],
'best_for': ['Long context', 'Complex security analysis', 'Advanced reasoning'],
'provider': 'ollama'
},

@@ -242,56 +242,56 @@
'min_tokens': 32,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['efficient_processing', 'technical_analysis', 'good_performance_ratio'],
'best_for': ['Efficient processing', 'Technical analysis', 'Performance optimization'],
'provider': 'ollama'
},
'mistral-medium': {
'min_tokens': 32,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['balanced_analysis', 'improved_accuracy', 'technical_tasks'],
'best_for': ['Balanced analysis', 'Improved accuracy', 'Technical tasks'],
'provider': 'ollama'
},
'mistral-large': {
'min_tokens': 64,
'max_tokens': 16384,
'supports_functions': False,
'best_for': ['deep_technical_analysis', 'complex_reasoning', 'high_accuracy'],
'best_for': ['Deep technical analysis', 'Complex reasoning', 'High accuracy'],
'provider': 'ollama'
},
'codellama': {
'min_tokens': 32,
'max_tokens': 4096,
'supports_functions': False,
'best_for': ['code_analysis', 'vulnerability_assessment', 'technical_details'],
'best_for': ['Code analysis', 'Vulnerability assessment', 'Technical documentation'],
'provider': 'ollama'
},
'qwen2.5': {
'min_tokens': 64,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['multilingual_analysis', 'efficient_processing', 'technical_understanding'],
'best_for': ['Multilingual analysis', 'Efficient processing', 'Technical understanding'],
'provider': 'ollama'
},
'gemma': {
'min_tokens': 32,
'max_tokens': 4096,
'supports_functions': False,
'best_for': ['lightweight_analysis', 'quick_assessment', 'general_tasks'],
'best_for': ['Lightweight analysis', 'Quick assessment', 'General tasks'],
'provider': 'ollama'
},
'solar': {
'min_tokens': 64,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['creative_analysis', 'unique_perspectives', 'alternative_approaches'],
'best_for': ['Creative analysis', 'Unique perspectives', 'Alternative approaches'],
'provider': 'ollama'
},
'yi': {
'min_tokens': 64,
'max_tokens': 8192,
'supports_functions': False,
'best_for': ['comprehensive_analysis', 'detailed_explanations', 'technical_depth'],
'best_for': ['Comprehensive analysis', 'Detailed explanations', 'Technical depth'],
'provider': 'ollama'
}
}
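Each entry above carries `min_tokens`/`max_tokens` bounds, and a consumer would typically clamp a requested token budget to them. A small sketch — the helper name is an assumption for illustration, not part of this PR:

```python
def clamp_tokens(requested: int, model_cfg: dict) -> int:
    """Clamp a requested token budget to a model's configured bounds."""
    return max(model_cfg['min_tokens'],
               min(requested, model_cfg['max_tokens']))

# Example bounds matching the 'llama3' entry above.
llama3_cfg = {'min_tokens': 64, 'max_tokens': 8192}
```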
78 changes: 50 additions & 28 deletions web/scanEngine/templates/scanEngine/settings/llm_toolkit.html
psyray marked this conversation as resolved.
@@ -44,43 +44,52 @@ <h5>{{installed_models|length}} available Models</h5>
<b>Warning:</b> GPT model is currently selected and requires API key to be set. Please set the API key in the <a href="{% url 'api_vault' %}"> API Vault.</a>
</div>
{% endif %}
<div class="row mt-2">
<div class="row">
{% for model in installed_models %}
<div class="col-lg-4">
<div class="card project-box">
<div class="card-body">
<div class="dropdown float-end">
<a href="#" class="dropdown-toggle card-drop arrow-none" data-bs-toggle="dropdown" aria-expanded="false">
<i class="mdi mdi-dots-horizontal m-0 text-muted h3"></i>
</a>
<div class="dropdown-menu dropdown-menu-end">
{% if model.is_local %}
<a class="dropdown-item" href="#" onClick=deleteModel('{{model.name}}')>Delete</a>
{% endif %}
{% if not model.selected %}
<a class="dropdown-item" href="#" onClick=selectModel('{{model.name}}')>Use Model</a>
{% endif %}
<div class="col-lg-4 mt-4">
<div class="card project-box h-100">
<div class="card-body d-flex flex-column">
<div class="d-flex justify-content-between align-items-center">
<h4 class="mt-0 mb-0">
<span class="{% if model.selected %}text-success{% endif %}">
{{model.name}}
</span>
</h4>
<div class="dropdown">
<a href="#" class="dropdown-toggle card-drop arrow-none" data-bs-toggle="dropdown" aria-expanded="false">
<i class="mdi mdi-dots-horizontal m-0 text-muted h3"></i>
</a>
<div class="dropdown-menu dropdown-menu-end">
{% if not model.selected %}
<a class="dropdown-item" href="#" onClick=selectModel('{{model.name}}')>
<i class="mdi mdi-check-circle text-success me-1"></i>Use Model
</a>
{% endif %}
{% if model.is_local %}
<a class="dropdown-item" href="#" onClick=deleteModel('{{model.name}}')>
<i class="mdi mdi-delete text-danger me-1"></i>Delete
</a>
{% endif %}
</div>
</div>
</div>
<h4 class="mt-0">
<span class="{% if model.selected %}text-success{% endif %}">{{model.name}} {% if model.selected %}<span class="badge bg-soft-primary text-primary ms-4">Selected Model</span>{% endif %}</span>
</h4>
<p class="mt-1">
{% if not model.is_local %}
<span class="badge bg-soft-warning text-warning mt-auto">Remote Model - API Key Required</span>
{% else %}
<span class="badge bg-soft-success text-success mt-auto">Locally installed model</span>
{% endif %}
{% if model.selected %}
<span class="badge bg-soft-primary text-primary ms-4 float-end">Selected Model</span>
{% endif %}
</p>
<p class="mb-1">
<p class="mb-0 mt-1">
<span class="pe-2 text-nowrap mb-2 d-inline-block">
<i class="mdi mdi-calendar-range text-primary"></i>
Modified <b>{% if model.modified_at %}{{model.modified_at|naturaltime}} {% else %} NA{% endif %}</b>
</span>
<br>
<span class="pe-2 text-nowrap mb-2 d-inline-block">
<i class="mdi mdi-database text-info"></i>
{% if model.is_local %}
Locally installed model
{% else %}
Open AI Model
{% endif %}
</span>
<br>
{% if model.details %}
<span class="pe-2 text-nowrap mb-2 d-inline-block">
<i class="mdi mdi-numeric text-info"></i>
<b>{{model.details.parameter_size}}</b> Parameters
@@ -89,6 +98,19 @@ <h4 class="mt-0">
<i class="mdi mdi-family-tree text-success"></i>
<b>{{model.details.family}}</b> Family
</span>
{% endif %}
{% if model.capabilities %}
<br>
<span class="mb-2 d-inline-block w-100">
<i class="mdi mdi-star text-warning"></i>
Best for:
<ul class="list-unstyled mt-1 ms-3">
{% for capability in model.capabilities.best_for %}
<li><i class="mdi mdi-check-circle text-success me-1"></i>{{capability}}</li>
{% endfor %}
</ul>
</span>
{% endif %}
</p>
</div>
</div>
43 changes: 16 additions & 27 deletions web/scanEngine/views.py
@@ -2,25 +2,23 @@
import json
import os
import re
import requests
import shutil

from datetime import datetime
from django import http
from django.contrib import messages
from django.shortcuts import get_object_or_404, render
from django.urls import reverse
from rolepermissions.decorators import has_permission_decorator

from reNgine.common_func import get_open_ai_key
from reNgine.llm.config import OLLAMA_INSTANCE, DEFAULT_GPT_MODELS
from reNgine.tasks import run_command, send_discord_message, send_slack_message, send_lark_message, send_telegram_message, run_gf_list
from scanEngine.forms import AddEngineForm, UpdateEngineForm, AddWordlistForm, ExternalToolForm, InterestingLookupForm, NotificationForm, ProxyForm, HackeroneForm, ReportForm
from scanEngine.models import EngineType, Wordlist, InstalledExternalTool, InterestingLookupModel, Notification, Hackerone, Proxy, VulnerabilityReportSetting
from dashboard.models import OpenAiAPIKey, NetlasAPIKey, OllamaSettings
from dashboard.models import OpenAiAPIKey, NetlasAPIKey
from api.views import LLMModelsManager
from reNgine.definitions import PERM_MODIFY_SCAN_CONFIGURATIONS, PERM_MODIFY_SCAN_REPORT, PERM_MODIFY_WORDLISTS, PERM_MODIFY_INTERESTING_LOOKUP, PERM_MODIFY_SYSTEM_CONFIGURATIONS, FOUR_OH_FOUR_URL
from reNgine.settings import RENGINE_WORDLISTS, RENGINE_HOME, RENGINE_TOOL_GITHUB_PATH
from pathlib import Path
import requests

def index(request):
engine_type = EngineType.objects.order_by('engine_name').all()
@@ -415,28 +413,19 @@ def api_vault_delete(request):
return http.JsonResponse(response)

def llm_toolkit_section(request):
all_models = DEFAULT_GPT_MODELS.copy()
response = requests.get(f'{OLLAMA_INSTANCE}/api/tags')
if response.status_code == 200:
ollama_models = response.json().get('models', [])
date_format = "%Y-%m-%dT%H:%M:%S"
all_models.extend([{**model,
'modified_at': datetime.strptime(model['modified_at'].split('.')[0], date_format),
'is_local': True,
} for model in ollama_models])

selected_model = OllamaSettings.objects.first()
selected_model_name = selected_model.selected_model if selected_model else 'gpt-3.5-turbo'

for model in all_models:
if model['name'] == selected_model_name:
model['selected'] = True

context = {
'installed_models': all_models,
'openai_key_error': not get_open_ai_key() and 'gpt' in selected_model_name
}
return render(request, 'scanEngine/settings/llm_toolkit.html', context)
try:
        # Direct call to the API view
api_response = LLMModelsManager().get(request)
data = api_response.data

context = {
'installed_models': data['models'],
'openai_key_error': data['openai_key_error']
}
return render(request, 'scanEngine/settings/llm_toolkit.html', context)
except Exception as e:
messages.error(request, f'Error fetching LLM models: {str(e)}')
return render(request, 'scanEngine/settings/llm_toolkit.html', {'installed_models': []})

@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def api_vault(request):
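The refactored `llm_toolkit_section` above delegates to `LLMModelsManager().get(request)` instead of duplicating the Ollama/OpenAI fetch logic in the view. The in-process "call the API view directly" pattern can be sketched generically — every class and field name below is a stand-in, not reNgine's real API:

```python
# Generic sketch of calling an API view in-process, as the refactored
# Django view does. ModelListView stands in for LLMModelsManager.

class ApiResponse:
    def __init__(self, data):
        self.data = data

class ModelListView:
    def get(self, request):
        # The real view would merge OpenAI defaults with Ollama's model list.
        return ApiResponse({
            'models': [{'name': 'llama3', 'is_local': True}],
            'openai_key_error': False,
        })

def toolkit_context(request):
    """Build the template context from the shared API view."""
    data = ModelListView().get(request).data
    return {
        'installed_models': data['models'],
        'openai_key_error': data['openai_key_error'],
    }
```

The benefit the commit message claims — one source of truth for model data, with error handling centralized in the view that consumes it — follows directly from this shape.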
29 changes: 22 additions & 7 deletions web/static/custom/custom.js
@@ -3338,6 +3338,9 @@ async function show_attack_surface_modal(endpoint_url, id) {
throw new Error(data.error || 'Failed to fetch models');
}

// Change modal size to xl
$('#modal_dialog .modal-dialog').removeClass('modal-lg').addClass('modal-xl');

const allModels = data.models;
const selectedModel = data.selected_model;

@@ -3348,10 +3351,10 @@
const isLocal = model.is_local || false;

modelOptions += `
<div class="col-md-4">
<div class="card project-box" style="min-height: 180px; cursor: pointer"
<div class="col-md-4 mt-2">
<div class="card project-box h-100" style="cursor: pointer"
onclick="document.getElementById('${modelName}').click()">
<div class="card-body p-2">
<div class="card-body p-2 pt-3 d-flex flex-column">
<div class="form-check">
<input class="form-check-input" type="radio" name="llm_model"
id="${modelName}" value="${modelName}"
@@ -3361,7 +3364,8 @@
${modelName === selectedModel ? '<span class="badge bg-soft-primary text-primary ms-2">Selected</span>' : ''}
</span>
</h5>
<p class="mb-1 small">
<p>${!isLocal ? '<span class="badge bg-soft-warning text-warning mt-auto">Remote Model - API Key Required</span>' : '<span class="badge bg-soft-success text-success mt-auto">Locally installed model</span>'}</p>
suggestion (code-quality): Invert ternary operator to remove negation (invert-ternary)

Suggested change
<p>${!isLocal ? '<span class="badge bg-soft-warning text-warning mt-auto">Remote Model - API Key Required</span>' : '<span class="badge bg-soft-success text-success mt-auto">Locally installed model</span>'}</p>
<p>${isLocal ? '<span class="badge bg-soft-success text-success mt-auto">Locally installed model</span>' : '<span class="badge bg-soft-warning text-warning mt-auto">Remote Model - API Key Required</span>'}</p>


Explanation: Negated conditions are more difficult to read than positive ones, so it is best to avoid them where we can. By inverting the ternary condition and swapping the expressions we can simplify the code.
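The same invert-the-ternary cleanup, shown in Python for illustration (the function and labels are made up, not from this PR):

```python
def badge_label(is_local: bool) -> str:
    # Harder to scan, negated condition:
    #   return 'remote' if not is_local else 'local'
    # Clearer: positive condition with the branches swapped.
    return 'local' if is_local else 'remote'
```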

<p class="mb-1 small flex-grow-1">
<span class="pe-2 text-nowrap d-inline-block">
<i class="mdi mdi-database text-info"></i>
${isLocal ? 'Local model' : 'OpenAI'}
@@ -3371,12 +3375,23 @@
<i class="mdi mdi-family-tree text-success"></i>
${model.details.family}
</span>
<br>
<span class="text-nowrap d-inline-block">
<i class="mdi mdi-numeric text-info"></i>
<b>${model.details.parameter_size}</b> Parameters
</span>
` : ''}
<br>
<span class="text-muted">
${capabilities.best_for ? capabilities.best_for.join(', ') : 'General analysis'}
<br>
<span class="text-muted w-100 d-inline-block">
<i class="mdi mdi-star text-warning"></i>
Best for:
<ul class="list-unstyled mt-1 ms-3">
${capabilities.best_for ? capabilities.best_for.map(cap =>
`<li><i class="mdi mdi-check-circle text-success me-1"></i>${cap}</li>`
).join('') : '<li><i class="mdi mdi-check-circle text-success me-1"></i>General analysis</li>'}
</ul>
</span>
${!isLocal ? '<br><span class="badge bg-soft-warning text-warning mt-1">API Key Required</span>' : ''}
</p>
</div>
</div>