C++ #177
Caution: Review failed. The pull request is closed.

Walkthrough
This PR adds C++ sources and headers (Spinlock, BuddyAllocator, a global new operator), replaces mm/VMem.c with mm/VMem.cpp, which integrates a BuddyAllocator and exposes many C-compatible VMem APIs, removes the old kernel/atomic/Spinlock.h, adds extern "C" guards to the C headers (Io.h, Panic.h, PMem.h, VMem.h, MemOps.h, Magazine.h, etc.), removes restrict qualifiers from the mem ops, lowers RUST_MAX_SIZE to 64KB, and updates the CMake source lists and include_directories.
Pre-merge checks: ❌ Failed checks (1 warning, 1 inconclusive); ✅ Passed checks (1 passed).

Warning: Review ran into problems. Git: failed to clone repository.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
mm/VMem.cpp (1)
58-70: Protect tlb_batch operations with vmem_lock
flush_tlb_batch() mutates tlb_batch/tlb_batch_count without holding vmem_lock, yet callers such as VMemAlloc (Line 250) and VMemFree (Line 278) invoke it after releasing the guard. A second CPU that is still inside VMemMap/VMemUnmap with vmem_lock held can append to the batch while the first CPU concurrently zeroes tlb_batch_count, dropping entries and skipping the required invlpg. That leaves stale translations visible and corrupts memory safety. Please make the flush run under the same lock, either by keeping vmem_lock held through the flush or by adding a locked helper so every tlb_batch mutation is serialized.

Also applies to: 250-256, 278-282
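A minimal sketch of the locked-helper variant; the batch state below stands in for the real globals in mm/VMem.cpp, and the calling convention is an assumption:

```cpp
#include <stdint.h>

#define TLB_BATCH_MAX 64

static uint64_t tlb_batch[TLB_BATCH_MAX];  // virtual addresses pending invlpg
static uint64_t tlb_batch_count;

// Caller must already hold vmem_lock, so no CPU can append to the
// batch while it is being drained and reset.
static void flush_tlb_batch_locked(void) {
    for (uint64_t i = 0; i < tlb_batch_count; ++i) {
        asm volatile("invlpg (%0)" :: "r"(tlb_batch[i]) : "memory");
    }
    tlb_batch_count = 0;
}
```

With this shape, VMemAlloc/VMemFree would call flush_tlb_batch_locked() before releasing vmem_lock instead of flushing after the guard is dropped.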
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (17)
- cmake/source.cmake (4 hunks)
- drivers/virtio/VirtioBlk.c (1 hunks)
- include/Io.h (2 hunks)
- include/stdbool.h (1 hunks)
- kernel/atomic/Spinlock.h (0 hunks)
- kernel/atomic/cpp/Spinlock.cpp (1 hunks)
- kernel/atomic/cpp/Spinlock.h (1 hunks)
- kernel/core/Panic.h (2 hunks)
- mm/KernelHeap.h (1 hunks)
- mm/MemOps.h (1 hunks)
- mm/PMem.h (2 hunks)
- mm/VMem.cpp (16 hunks)
- mm/VMem.h (3 hunks)
- mm/dynamic/c/Magazine.h (2 hunks)
- mm/dynamic/cpp/BuddyAllocator.cpp (1 hunks)
- mm/dynamic/cpp/BuddyAllocator.h (1 hunks)
- mm/dynamic/cpp/new.cpp (1 hunks)
💤 Files with no reviewable changes (1)
- kernel/atomic/Spinlock.h
🧰 Additional context used
🧠 Learnings (18)
📓 Common learnings
Learnt from: assembler-0
Repo: assembler-0/autocc PR: 33
File: include/utils.hpp:138-190
Timestamp: 2025-08-04T13:24:48.984Z
Learning: The user assembler-0 tends to create wrapper functions during refactoring and sometimes accidentally leaves duplicate implementations. They have a casual, self-aware attitude about coding mistakes and are receptive to feedback about code duplication issues.
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 101
File: kernel/etc/Shell.c:829-837
Timestamp: 2025-08-31T07:06:37.003Z
Learning: assembler-0 is working on optimizing ramfs performance in VoidFrame kernel and wants to implement proper filesystem characteristics like faster lookups, better memory management, and advanced features like copy-on-write and caching.
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 132
File: arch/x86_64/syscall/Syscall.c:12-19
Timestamp: 2025-09-19T11:55:43.328Z
Learning: In the VoidFrame kernel project, assembler-0 is implementing basic POSIX-compliant syscalls in ring 0 without user/kernel memory protection, prioritizing getting basic functionality working before adding full security features.
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 103
File: docs/ARCHITECHTURE.md:1-3
Timestamp: 2025-08-31T14:19:11.366Z
Learning: assembler-0 wants to implement deep fault tracing for VoidFrame kernel exceptions (Page Fault, General Protection Fault, Invalid Opcode) instead of just panicking. The current interrupt handler has complete register state available in the Registers struct (defined in arch/x86_64/cpu/Cpu.h) including RBP for stack walking, RIP for fault location, and error codes for analysis.
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 80
File: README.md:7-8
Timestamp: 2025-08-13T08:43:54.750Z
Learning: User assembler-0 is working on VoidFrame kernel with a tight deadline (2-3 days before summer ends) and specifically requested runtime safety checks similar to __stack_chk_fail but for deadlocks and other kernel issues. The kernel already has extensive safety mechanisms including stack canaries, guard pages, heap integrity checks, and process security validation.
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 72
File: arch/x86_64/syscall/SyscallEntry.asm:24-30
Timestamp: 2025-08-12T02:13:46.266Z
Learning: User assembler-0 has existing ELF infrastructure in VoidFrame kernel with Elf.h header definitions and CreateProcess() function that properly sets up process context including stack allocation, ProcessExitStub return address, and register initialization. The issue was missing ELF loader implementation, not compilation problems.
📚 Learning: 2025-08-31T02:11:29.291Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 100
File: kernel/memory/VMem.h:44-47
Timestamp: 2025-08-31T02:11:29.291Z
Learning: VoidFrame kernel's MAX_PAGES is hardcoded to exactly 4GB (1,048,576 pages), causing false "Memory detected exceeds MAX_PAGES" warnings when memory holes push the highest physical address slightly above 4GB. The solution is to sum only MULTIBOOT2_MEMORY_AVAILABLE regions instead of using max_physical_address for total_pages calculation.
Applied to files:
mm/PMem.h, mm/KernelHeap.h, mm/VMem.cpp
📚 Learning: 2025-08-17T05:53:48.040Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 92
File: meson.build:31-34
Timestamp: 2025-08-17T05:53:48.040Z
Learning: The VoidFrame kernel's memory detection was reporting incorrect memory amounts (like 9200MB) because it used the highest physical address (including memory holes and reserved regions) as total memory size, rather than summing only the MULTIBOOT2_MEMORY_AVAILABLE regions. The fix involves counting only available memory regions and using the highest available address only for bitmap sizing.
Applied to files:
mm/PMem.h, mm/VMem.h, mm/VMem.cpp
📚 Learning: 2025-08-17T08:16:43.539Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 93
File: kernel/memory/VMem.c:72-79
Timestamp: 2025-08-17T08:16:43.539Z
Learning: In VoidFrame kernel, VMemGetPhysAddr and related page table walking functions were causing panics because they assumed identity mapping existed for physical addresses < IDENTITY_MAP_SIZE (4GB), but the bootloader doesn't provide this 1:1 mapping. The fix is to remove all identity mapping assumptions and always use PHYS_TO_VIRT() for page table access in functions like VMemGetPageTablePhys, VMemMap, VMemMapHuge, VMemGetPhysAddr, VMemUnmap, VMemMapMMIO, and VMemUnmapMMIO.
Applied to files:
mm/VMem.h, drivers/virtio/VirtioBlk.c, mm/VMem.cpp
📚 Learning: 2025-08-16T05:28:34.925Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:28:34.925Z
Learning: In VoidFrame kernel, the virtual address space management uses a simple linear allocator (kernel_space.next_vaddr) without any reclamation mechanism. VMemFree and VMemUnmapMMIO clear page tables but don't free virtual address space for reuse, causing eventual virtual address space exhaustion. The solution requires implementing a free list with coalescing in the VirtAddrSpace structure and updating allocation/deallocation functions to reuse freed ranges.
Applied to files:
mm/VMem.h, mm/dynamic/cpp/BuddyAllocator.cpp, mm/VMem.cpp
📚 Learning: 2025-08-13T07:19:55.274Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:19:55.274Z
Learning: VMemUnmapMMIO in VoidFrame kernel should use the same page table access pattern as the working VMemUnmap function: `uint64_t* pt_table = (uint64_t*)(pt_phys + KERNEL_VIRTUAL_OFFSET);` rather than PHYS_TO_VIRT macro or identity mapping checks.
Applied to files:
mm/VMem.h, mm/VMem.cpp
📚 Learning: 2025-07-16T01:04:23.790Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 47
File: kernel/core/Kernel.c:434-434
Timestamp: 2025-07-16T01:04:23.790Z
Learning: VMemAlloc() in VoidFrame kernel fails with error code -2 (VMEM_ERROR_INVALID_ADDR) due to virtual address space layout conflict where the heap start address (_bss_end + KERNEL_VIRTUAL_OFFSET + 1MB margin) exceeds VIRT_ADDR_SPACE_END (0xFFFFFFFF80000000ULL), causing VMemMap validation to fail.
Applied to files:
mm/VMem.h, mm/dynamic/cpp/BuddyAllocator.cpp, mm/VMem.cpp
📚 Learning: 2025-08-09T10:31:44.439Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 58
File: drivers/VesaBIOSExtension.c:77-85
Timestamp: 2025-08-09T10:31:44.439Z
Learning: In VoidFrame kernel, process stacks are allocated using VMemAllocWithGuards() which returns virtual addresses, but TerminateProcess incorrectly uses FreePage() (which expects physical addresses) to free them, causing "Page index out of bounds" errors. The correct function to use is VMemFreeWithGuards(proc->stack, STACK_SIZE).
Applied to files:
mm/VMem.h, mm/VMem.cpp
📚 Learning: 2025-08-16T05:27:10.760Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 90
File: kernel/memory/MemOps.c:103-124
Timestamp: 2025-08-16T05:27:10.760Z
Learning: In VoidFrame kernel VMemAlloc implementation, different virtual addresses on consecutive allocations after freeing is expected behavior because VMemAlloc uses a simple bump allocator (kernel_space.next_vaddr) that only moves forward and doesn't track or reuse freed virtual address ranges.
Applied to files:
mm/VMem.h, mm/dynamic/cpp/BuddyAllocator.cpp, mm/VMem.cpp
📚 Learning: 2025-08-13T07:17:35.941Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 79
File: drivers/PCI/PCI.c:154-154
Timestamp: 2025-08-13T07:17:35.941Z
Learning: In VoidFrame kernel, VMemGetPageTablePhys uses a 3-level hierarchy: Level 0 = PML4->PDP, Level 1 = PDP->PD, Level 2 = PD->PT. To access Page Tables, always use level=2, not level=1.
Applied to files:
mm/VMem.h, mm/VMem.cpp
📚 Learning: 2025-08-10T06:22:28.816Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 62
File: kernel/core/Panic.c:69-79
Timestamp: 2025-08-10T06:22:28.816Z
Learning: In VoidFrame kernel, panic functions must call "asm volatile("cli")" as the very first instruction to disable interrupts immediately. Without this, if interrupts are enabled (after "sti"), an interrupt can fire during panic handling causing the panic to fail, hang, or behave unpredictably. This must be done in ALL panic entry points: Panic(), PanicWithCode(), PanicWithContext(), PanicFromInterrupt(), and at the start of KernelPanicHandler().
Applied to files:
kernel/core/Panic.h
📚 Learning: 2025-08-10T06:22:28.816Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 62
File: kernel/core/Panic.c:69-79
Timestamp: 2025-08-10T06:22:28.816Z
Learning: In VoidFrame kernel, panic functions must disable interrupts with "cli" as the very first operation, before any other code executes. If interrupts are enabled (after "sti"), an interrupt can fire between the panic call and the cli instruction in KernelPanicHandler, causing the panic to fail or behave unpredictably.
Applied to files:
kernel/core/Panic.h
📚 Learning: 2025-09-14T11:01:02.698Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 130
File: kernel/core/InitRD.c:25-35
Timestamp: 2025-09-14T11:01:02.698Z
Learning: In VoidFrame kernel's InitRD.c, multiboot modules should be mapped using VMemMap() with PAGE_PRESENT | PAGE_WRITABLE flags instead of VMemMapMMIO(), as the latter applies cache-disabling flags (PAGE_NOCACHE | PAGE_WRITETHROUGH) that can corrupt data when reading regular RAM content.
Applied to files:
mm/VMem.cpp
📚 Learning: 2025-08-09T04:47:56.883Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 53
File: kernel/core/Kernel.c:256-259
Timestamp: 2025-08-09T04:47:56.883Z
Learning: The VoidFrame kernel memory crash occurs when VIRT_ADDR_SPACE_START is set to 0xFFFF800000000000ULL instead of 0xFFFFFFFF80000000ULL, causing the heap to allocate 128TB away from where the kernel is mapped, resulting in page faults when accessing heap memory.
Applied to files:
mm/VMem.cpp
📚 Learning: 2025-09-02T01:55:54.511Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 106
File: kernel/sched/MLFQ.c:691-694
Timestamp: 2025-09-02T01:55:54.511Z
Learning: In VoidFrame's Cerberus system, CerberusInstallStackCanary fails because it incorrectly calculates the canary address. VMemAllocStack returns stack_top (highest address) but the canary calculation uses stack_base + stack_size - 8, which results in an address beyond the allocated stack region. The correct canary location should be calculated as actual_stack_base + sizeof(uint64_t) where actual_stack_base = stack_top - stack_size, placing the canary just above the unmapped guard page.
Applied to files:
mm/VMem.cpp
📚 Learning: 2025-08-14T02:11:58.963Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 82
File: kernel/process/Process.h:0-0
Timestamp: 2025-08-14T02:11:58.963Z
Learning: In VoidFrame kernel's FastMemcpy implementation (kernel/memory/MemOps.c), memory corruption occurs under high load due to interrupts occurring between SSE2 movdqu load and store instructions, corrupting XMM register state. The fix requires disabling interrupts around SSE operations and adding memory barriers (sfence) to ensure stores complete atomically.
Applied to files:
mm/VMem.cpp, mm/MemOps.h
📚 Learning: 2025-08-31T02:11:29.291Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 100
File: kernel/memory/VMem.h:44-47
Timestamp: 2025-08-31T02:11:29.291Z
Learning: VoidFrame kernel's ValidateMemoryLayout() function in Kernel.c produces false positive "Stack overlaps with kernel code" warnings because the static kernel_stack array is correctly allocated within the kernel's physical memory region. This overlap is expected and normal behavior, not an error.
Applied to files:
mm/VMem.cpp
📚 Learning: 2025-08-14T02:11:27.395Z
Learnt from: assembler-0
Repo: assembler-0/VoidFrame PR: 82
File: kernel/process/Process.h:0-0
Timestamp: 2025-08-14T02:11:27.395Z
Learning: In VoidFrame kernel, FastMemcpy was using SSE2 movdqu instructions without preserving XMM registers, causing crashes when called from interrupt context (like the scheduler's FastSchedule function). The fix is to use only general-purpose registers with 64-bit unrolled copies to maintain performance while being interrupt-safe.
Applied to files:
mm/MemOps.h
🔇 Additional comments (15)
include/stdbool.h (1)
3-8: LGTM! Correct C++ compatibility guard. The #ifndef __cplusplus guard correctly ensures that the bool typedef and the true/false macros are only defined when compiling as C. In C++ builds, the native bool type and the true/false keywords are used instead. This is the standard approach for maintaining C/C++ compatibility.
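For reference, the guard shape being reviewed looks roughly like this (a sketch; the actual header may differ in detail):

```cpp
/* include/stdbool.h - only define bool/true/false when compiling as C */
#ifndef __cplusplus
typedef _Bool bool;   /* C99 _Bool; C++ already has a native bool type */
#define true  1
#define false 0
#endif
```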
mm/dynamic/c/Magazine.h (3)

125-127: LGTM! Extern "C" guards correctly implemented. The C++ compatibility guards are properly placed around only the function declarations, leaving data structures and type definitions outside the block, which is the correct approach.
Also applies to: 139-142
6-6: Verify SpinlockRust.h has C++ compatibility guards. Since this header now supports C++ compilation, ensure that SpinlockRust.h also has proper extern "C" guards around its function declarations. Without them, C++ name mangling will produce symbols that don't match the C definitions, causing linker errors when this header is included in C++ code.
131-137: Verify implementations exist for newly declared functions. Ensure that implementations are provided for the seven newly declared functions, especially given this is a C-to-C++ transition. Missing implementations would cause linker errors.
Based on learnings.
kernel/core/Panic.h (2)
25-27: LGTM - C++ linkage guards properly applied. The extern "C" guards correctly ensure C linkage for the panic API when included from C++ code, maintaining consistency with other headers in this PR.
Also applies to: 55-57
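The guard shape in question, sketched against a generic declaration (illustrative, not the exact Panic.h contents):

```cpp
#ifdef __cplusplus
extern "C" {
#endif

void Panic(const char* msg);   // keeps C linkage when compiled as C++

#ifdef __cplusplus
}  // extern "C"
#endif
```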
34-36: Verify PanicFromInterrupt disables interrupts immediately. Based on learnings, all panic entry points in the VoidFrame kernel must execute cli as the very first instruction to prevent interrupt-driven corruption during panic handling. Verify the implementation contains asm volatile("cli") at the start.
mm/PMem.h (2)

20-22: LGTM - C++ compatibility guards correctly applied. The extern "C" guards properly wrap the public API for C linkage compatibility.
Also applies to: 38-40
28-31: LGTM - New page allocation APIs. The new function declarations provide a cleaner interface for page allocation. The signatures are clear and consistent with the existing API style.
mm/VMem.h (1)
112-114: LGTM - C++ linkage guards properly applied. The extern "C" guards correctly ensure C linkage for the virtual memory API when included from C++ code, maintaining consistency with other headers in this PR.
Also applies to: 160-162
mm/MemOps.h (2)
6-8: LGTM - C++ compatibility guards correctly applied. The extern "C" guards properly wrap the API for C linkage compatibility.
Also applies to: 20-22
10-18: LGTM - Removed restrict qualifiers for C++ compatibility. The removal of the restrict qualifiers is necessary since C++ does not support this C99 keyword (though most compilers provide __restrict as an extension). This change enables these functions to be declared and called from C++ code.
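If the aliasing hints turn out to matter for codegen, an alternative to outright removal is a portability macro (a sketch; VF_RESTRICT is a hypothetical name and the prototype is illustrative, not the real MemOps.h signature):

```cpp
// Keep restrict in C, map it to the compiler extension under C++.
#ifdef __cplusplus
#define VF_RESTRICT __restrict
#else
#define VF_RESTRICT restrict
#endif

void* FastMemcpy(void* VF_RESTRICT dst, const void* VF_RESTRICT src,
                 unsigned long n);
```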
include/Io.h (1)

6-8: LGTM - C++ linkage guards properly applied. The extern "C" guards correctly wrap the I/O API for C linkage compatibility when included from C++ code, consistent with other headers in this PR.
Also applies to: 79-81
cmake/source.cmake (2)
46-46: LGTM - C++ source files added to build. The addition of C++ sources (Spinlock.cpp, VMem.cpp, BuddyAllocator.cpp, new.cpp) correctly integrates the C++ components into the build system.
Also applies to: 61-61, 66-67
186-188: LGTM - Include directories configured for C++ components. The new include directories (kernel, kernel/atomic/cpp, mm/dynamic/cpp) properly expose the C++ headers, enabling standard <Header.h> includes instead of relative paths.

Also applies to: 202-202
mm/KernelHeap.h (1)
19-19: Verify the reduced threshold doesn't impact allocation patterns. The RUST_MAX_SIZE reduction from 128KB to 64KB means allocations in the 64KB-128KB range will now use the slower LargeBlockAlloc (VMemAlloc) path instead of the Rust allocator. Ensure this doesn't cause performance degradation or allocation failures for common kernel allocation sizes.
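The routing being described, as a sketch (RustAlloc is a hypothetical stand-in for the Rust allocator entry point, which this excerpt does not name):

```cpp
// Hypothetical declarations for the two allocation paths.
void* RustAlloc(unsigned long size);        // fast path, small blocks
void* LargeBlockAlloc(unsigned long size);  // VMemAlloc-backed, slower

#define RUST_MAX_SIZE (64 * 1024)  // lowered from 128 KB in this PR

void* KernelMemoryAlloc(unsigned long size) {
    if (size <= RUST_MAX_SIZE)
        return RustAlloc(size);       // <= 64 KB: Rust allocator
    return LargeBlockAlloc(size);     // 64 KB-128 KB now lands here too
}
```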
Search the codebase to identify allocations in the affected range.
mm/dynamic/cpp/BuddyAllocator.cpp (excerpt under review):

```cpp
// Pre-allocated pool for buddy nodes
#define MAX_BUDDY_NODES 2048

typedef struct VMemFreeBlock {
    uint64_t base;
    uint64_t size;
    struct VMemFreeBlock* next;
    struct VMemFreeBlock* prev;
    struct VMemFreeBlock* hnext;  // For hash table chaining
} VMemFreeBlock;

struct BuddyAllocator {
    VMemFreeBlock* free_lists[BUDDY_NUM_ORDERS];
    VMemFreeBlock* hash_table[HASH_TABLE_SIZE];
    VMemFreeBlock node_pool[MAX_BUDDY_NODES];
```
Node pool exhaustion silently leaks the entire free block
MAX_BUDDY_NODES is capped at 2048, but a 1 GiB arena (allowed by BUDDY_MAX_ORDER) can require ~262 k leaf blocks (1 GiB / 4 KiB pages = 262,144). Once the pool runs out, AddFreeBlock returns early and the freed block disappears forever, so the allocator leaks memory and eventually reports OOM even though pages were freed. Please size the pool for the full buddy tree (or allocate it dynamically) and make exhaustion impossible, or at least screamingly obvious.
```diff
-#define MAX_BUDDY_NODES 2048
+#define MAX_BUDDY_NODES ((1u << BUDDY_NUM_ORDERS) - 1)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
The suggested replacement in full (the original block is shown above):

```cpp
// Pre-allocated pool for buddy nodes
#define MAX_BUDDY_NODES ((1u << BUDDY_NUM_ORDERS) - 1)

typedef struct VMemFreeBlock {
    uint64_t base;
    uint64_t size;
    struct VMemFreeBlock* next;
    struct VMemFreeBlock* prev;
    struct VMemFreeBlock* hnext;  // For hash table chaining
} VMemFreeBlock;

struct BuddyAllocator {
    VMemFreeBlock* free_lists[BUDDY_NUM_ORDERS];
    VMemFreeBlock* hash_table[HASH_TABLE_SIZE];
    VMemFreeBlock node_pool[MAX_BUDDY_NODES];
```
🤖 Prompt for AI Agents
In mm/dynamic/cpp/BuddyAllocator.cpp around lines 14–28 the fixed node_pool of
MAX_BUDDY_NODES=2048 can be exhausted for large arenas (e.g. 1 GiB), causing
AddFreeBlock to return early and silently drop freed blocks; replace the static
capped pool with a capacity that covers the full buddy tree or allocate the node
pool dynamically (heap container or allocator) sized from BUDDY_MAX_ORDER so
exhaustion cannot occur, and change AddFreeBlock to check allocation failures
and fail loudly (log/error/abort or assert) instead of returning silently;
ensure the construction/teardown and any access are updated to use the new
dynamic pool (or a computed MAX_BUDDY_NODES = full_tree_node_count) and keep
thread-safety consistent.
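A sketch of the fail-loudly direction (the order count, free-list shape, and Panic call are assumptions; the real AddFreeBlock plumbing may differ):

```cpp
#include <stdint.h>

extern "C" void Panic(const char* msg);  // existing panic entry point (assumed)

struct VMemFreeBlock {
    uint64_t base, size;
    VMemFreeBlock *next, *prev, *hnext;
};

#define BUDDY_NUM_ORDERS 19  // illustrative order count
// A full buddy tree with N levels has at most 2^N - 1 nodes.
#define MAX_BUDDY_NODES ((1u << BUDDY_NUM_ORDERS) - 1)

static VMemFreeBlock node_pool[MAX_BUDDY_NODES];
static VMemFreeBlock* free_nodes;  // freelist threaded through hnext

static VMemFreeBlock* AllocNode() {
    if (!free_nodes) {
        // Fail loudly: a silent early return here is exactly the
        // leak the review flags.
        Panic("BuddyAllocator: node pool exhausted");
    }
    VMemFreeBlock* n = free_nodes;
    free_nodes = n->hnext;
    return n;
}
```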
mm/dynamic/cpp/new.cpp (excerpt under review):

```cpp
#include <KernelHeap.h>

void* operator new(unsigned long size) {
    return KernelMemoryAlloc(size);
}

void operator delete(void* ptr) {
    KernelFree(ptr);
```
Ensure global operator new never returns nullptr
Directly returning KernelMemoryAlloc(size) allows ::operator new to hand back nullptr on OOM, but the C++ contract requires it to either succeed or raise an allocation failure signal (typically std::bad_alloc). Propagating nullptr leads to immediate UB once the caller dereferences the result. Please map allocation failure to a guaranteed trap/exception instead of returning nullptr.
```diff
 #include <KernelHeap.h>
+#include <new>

 void* operator new(unsigned long size) {
-    return KernelMemoryAlloc(size);
+    if (void* ptr = KernelMemoryAlloc(size)) {
+        return ptr;
+    }
+#if defined(__cpp_exceptions)
+    throw std::bad_alloc();
+#else
+    __builtin_trap();
+#endif
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
The suggested replacement in full:

```cpp
#include <KernelHeap.h>
#include <new>

void* operator new(unsigned long size) {
    if (void* ptr = KernelMemoryAlloc(size)) {
        return ptr;
    }
#if defined(__cpp_exceptions)
    throw std::bad_alloc();
#else
    __builtin_trap();
#endif
}

void operator delete(void* ptr) {
    KernelFree(ptr);
}
```
🤖 Prompt for AI Agents
In mm/dynamic/cpp/new.cpp lines 1-8, the global operator new can return nullptr
on OOM which violates the C++ contract; change the implementation to detect a
null return from KernelMemoryAlloc and translate it into a throwing allocation
failure (throw std::bad_alloc()), and include the required header (<new>) so
std::bad_alloc is available; keep operator delete behavior the same.
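A related follow-up worth considering: the array and nothrow forms should share the same OOM policy (a sketch; these overloads are not shown in the diff, so their presence and signatures are assumptions):

```cpp
#include <new>

extern "C" void* KernelMemoryAlloc(unsigned long size);  // assumed kernel API
extern "C" void  KernelFree(void* ptr);

// Array forms forward to the scalar forms so the failure policy
// lives in exactly one place.
void* operator new[](unsigned long size) { return operator new(size); }
void  operator delete[](void* ptr) noexcept { operator delete(ptr); }

// The nothrow overload is the one form that may legitimately
// return nullptr on allocation failure.
void* operator new(unsigned long size, const std::nothrow_t&) noexcept {
    return KernelMemoryAlloc(size);
}
```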
Summary by CodeRabbit
New Features
Improvements
Other