- Introduction
- Architecture Overview
- Key Components
- Key Functions
- Memory System
- Neuron and Connection Management
- Dynamic Parameters
- Performance Metrics
- Optimization
- Adaptation
- Usage
- API Reference
- Algorithm Explanations and Mathematics
- Training Mechanism
- Reflection System
- Self-Identification System
- Knowledge Filter
This documentation provides a comprehensive guide to the decentralized neural web architecture implemented in the provided code. The architecture is designed to simulate a neural network with hierarchical memory management, dynamic adaptation, and performance optimization. The goal is to present an alternative to modern neural models, which are often complex and resource-intensive. Taking inspiration from our brains, neurons are organized into layers in a decentralized fashion, allowing them to interact with and change themselves over time in more than just states and weights, while also forming a dynamic memory system.
You can see the architecture better here
Required:
- gnuplot library
- json-c library
- curl library
- Metal API (for the macOS Metal version)
- CUDA toolkit (for the CUDA version)

If you are on Windows, use WSL; the Windows version has been removed, as it likely never worked correctly anyway.
Optional:
- Go (for the converter)
- docker
It is recommended that you use this more as a framework than by copying the whole int main functionality: use the functions you like from here. The compilation guide is below.
It is good practice to use the critical-security validation functions if doing more complex things with the code.
SecurityValidationStatus secStatus =
    validateCriticalSecurity(updatedNeurons, weights, connections,
                             max_neurons, max_connections);
if (secStatus.critical_violation) {
  handleCriticalSecurityViolation(updatedNeurons, weights, connections, &secStatus);
}

Note: change updatedNeurons to neurons if using any version other than Metal, not only in this function but in all the functions you copy from here.
You can either set up the Python package (the steps are in Building; only Unix x86-64 is supported) or compile the neural web version you want and then install it with the install.sh script. Either way, this will allow you to use the neural web as a library from C/C++.
Find the version you want to build by cloning the whole repo (git clone https://github.com/Okerew/Neural-Web.git) and navigating to that version's directory.
If you seriously want to do it the way I did, start by generating embeddings with the train_embedding main file, which you can compile with go build and run with ./main. This should generate an embeddings file (custom_embeddings.txt, if you didn't change the name), which you then copy to the directory where you will be building the neural web.
With Docker:
docker build -t neural_web .
docker run --rm -it neural_web
To compile the code, run the following command in the root directory of the project:
python setup.py sdist bdist_wheel

To install:
python -m pip install dist/neural_web-1.0-cp313-cp313-linux_x86_64.whl

For the macOS Metal version:
# Build executable
clang -framework Metal -framework Foundation \
-I/opt/homebrew/Cellar/json-c/0.17/include \
-L/opt/homebrew/Cellar/json-c/0.17/lib \
-ljson-c -lcurl \
-o neural_web neural_web.m
# Build dynamic library
clang -dynamiclib -framework Metal -framework Foundation \
-I/opt/homebrew/Cellar/json-c/0.17/include \
-L/opt/homebrew/Cellar/json-c/0.17/lib \
-ljson-c -lcurl \
-o libneural_web.dylib neural_web.m
For the x86-64 Unix CPU version:
# Build executable from C++17 source
clang++ -std=c++17 -O2 -Wall -Wextra \
-o neural_web64 neural_web.cpp \
-I/usr/include \
-ljson-c -lcurl -lm
# Build object file (for library usage)
clang++ -std=c++17 -O2 -Wall -Wextra \
-c neural_web.cpp -o neural_web64.o
# Create static library
ar rcs libneural_web.a neural_web64.o

For the CUDA version:
# Build executable
nvcc -o neural_web_cu neural_web.cu \
-I/usr/include \
-L/opt/homebrew/Cellar/json-c/0.17/lib \
-ljson-c -lcurl
# Build static library
nvcc -c neural_web.cu -o neural_web_cu.o
ar rcs libneural_web_cu.a neural_web_cu.o

Note: replace the json-c include/library paths in the commands with your own if you copied the library into the project, aren't using Homebrew, or are using another version of the lib.
- Note: the actual code is located in src, so you should either unpack the files from there into your folder or just compile there.
- Note: you can use the install.sh script to make the lib usable straight from your OS.
The architecture consists of several key components:
- Neurons: The basic units of the neural network, organized into layers and connected in a 3D-like structure.
- Memory System: A hierarchical memory system to store and manage memories of varying importance.
- Dynamic Parameters: Parameters that adapt based on the network's performance and stability.
- Performance Metrics: Metrics to track the performance of the network.
- Optimization: Techniques to optimize the network's parameters for better performance.
- Reflection System: Evaluates the quality of outputs and suggests improvements.
- Self-Identification System: Helps the system assess its own state and biases, allowing the AI to form an identity of sorts.
- Knowledge Filter: Ensures that only relevant and high-quality information is processed.
The memory system is designed to store and manage memories with varying importance. It consists of:
- MemoryEntry: A structure to store individual memories.
- MemoryCluster: A structure to manage a cluster of memories.
- HierarchicalMemory: A structure to manage short-term, medium-term, and long-term memories.
- MemorySystem: The main structure to manage the hierarchical memory system.
typedef struct {
float vector[MEMORY_VECTOR_SIZE];
float importance;
unsigned int timestamp;
} MemoryEntry;

typedef struct MemoryCluster {
MemoryEntry *entries;
float importance_threshold;
unsigned int size;
unsigned int capacity;
} MemoryCluster;

typedef struct HierarchicalMemory {
MemoryCluster short_term;
MemoryCluster medium_term;
MemoryCluster long_term;
float consolidation_threshold;
float abstraction_threshold;
unsigned int total_capacity;
} HierarchicalMemory;

typedef struct MemorySystem {
HierarchicalMemory hierarchy;
unsigned int head;
unsigned int size;
unsigned int capacity;
MemoryEntry *entries;
} MemorySystem;

Neurons are the basic units of the neural network, organized into layers, connected in a 3D-like structure, and specialized.
typedef struct {
float state;
float output;
unsigned int num_connections;
unsigned int layer_id;
} Neuron;
typedef enum {
SPEC_NONE = 0,
SPEC_PATTERN_DETECTOR,
SPEC_FEATURE_EXTRACTOR,
SPEC_TEMPORAL_PROCESSOR,
SPEC_CONTEXT_INTEGRATOR,
SPEC_DECISION_MAKER,
SPEC_MEMORY_ENCODER,
SPEC_EMOTIONAL_PROCESSOR,
SPEC_PREDICTION_GENERATOR
} NeuronSpecializationType;
typedef struct {
unsigned int neuron_id;
NeuronSpecializationType type;
float specialization_score;
float activation_history[50]; // Recent activation history
unsigned int history_index; // Current index in circular buffer
float avg_activation; // Average activation level
float importance_factor; // How important this specialized neuron is
} SpecializedNeuron;
typedef struct {
SpecializedNeuron neurons[MAX_SPECIALIZED_NEURONS];
unsigned int count;
float type_distribution[MAX_SPECIALIZATIONS]; // Distribution of
// specialization types
float specialization_threshold; // Minimum score to be considered specialized
} NeuronSpecializationSystem;

Dynamic parameters adapt based on the network's performance and stability.
typedef struct {
float input_noise_scale;
float weight_noise_scale;
float base_adaptation_rate;
float current_adaptation_rate;
float learning_momentum;
float stability_threshold;
float noise_tolerance;
float recovery_rate;
float plasticity;
float homeostatic_factor;
} DynamicParameters;

Performance metrics track the performance of the network.
typedef struct {
double execution_time;
float average_output;
float error_rate;
int batch_size;
float learning_rate;
} PerformanceMetrics;

Optimization techniques are used to improve the network's performance.
typedef struct {
int optimal_batch_size;
float optimal_learning_rate;
double best_execution_time;
float best_performance_score;
} OptimizationState;

typedef struct {
float current_adaptation_rate;
float input_noise_scale;
float weight_noise_scale;
float plasticity;
float noise_tolerance;
float learning_rate;
} ReflectionParameters;

The reflection system evaluates the quality of outputs and suggests improvements. It helps continuously refine the network's performance by identifying areas that need enhancement.
typedef struct {
float *core_values; // Stable personality traits/values
float *belief_system; // Current belief states
float *identity_markers; // Unique identifying characteristics
float *experience_history; // Compressed history of experiences
float *behavioral_patterns; // Consistent behavior patterns
uint32_t num_core_values;
uint32_t num_beliefs;
uint32_t num_markers;
uint32_t history_size;
uint32_t pattern_size;
float consistency_score; // Measure of identity stability
float adaptation_rate; // Rate of identity evolution
float confidence_level; // Self-confidence in identity
// Temporal consistency tracking
float *temporal_coherence; // Track consistency over time
uint32_t coherence_window; // Time window for coherence analysis
// Identity verification system
struct {
float threshold; // Minimum consistency threshold
float *reference_state; // Reference identity state
uint32_t state_size; // Size of reference state
} verification;
} SelfIdentitySystem;

The self-identification system helps the neural web assess its own state and biases. This allows the AI to form an identity of sorts, enabling it to better understand its capabilities and limitations.
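As an illustration of how the verification sub-structure might be used, here is a minimal sketch: it measures the cosine similarity between a current identity state and verification.reference_state and compares it against the threshold. The function name and the choice of inputs are assumptions for illustration, not part of the documented API.

#include <math.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper, not documented API: checks identity coherence by
 * comparing a current identity state against the stored reference state
 * using cosine similarity. */
static bool verifyIdentityCoherence(const SelfIdentitySystem *sys,
                                    const float *current_state) {
  float dot = 0.0f, norm_a = 0.0f, norm_b = 0.0f;
  for (uint32_t i = 0; i < sys->verification.state_size; i++) {
    dot += current_state[i] * sys->verification.reference_state[i];
    norm_a += current_state[i] * current_state[i];
    norm_b += sys->verification.reference_state[i] *
              sys->verification.reference_state[i];
  }
  if (norm_a == 0.0f || norm_b == 0.0f)
    return false; /* degenerate state: treat as incoherent */
  float similarity = dot / (sqrtf(norm_a) * sqrtf(norm_b));
  return similarity >= sys->verification.threshold;
}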
The knowledge filter ensures that only relevant and high-quality information is processed. This component is crucial for maintaining the integrity and efficiency of the neural web by filtering out noise and irrelevant data.
typedef struct {
KnowledgeCategory *categories;
uint32_t num_categories;
uint32_t capacity;
ProblemInstance *problem_history;
uint32_t num_problems;
uint32_t problem_capacity;
float *category_similarity_matrix;
} KnowledgeFilter;

The metacognition system evaluates the performance of the neural web and suggests improvements. It helps continuously refine the network's performance by identifying areas that need enhancement.
typedef struct MetacognitionMetrics {
float confidence_level; // Overall confidence in decisions
float adaptation_rate; // Rate of learning adjustment
float cognitive_load; // Current processing complexity
float error_awareness; // Awareness of prediction errors
float context_relevance; // Relevance of current context
float performance_history[HISTORY_LENGTH]; // Historical performance tracking
} MetacognitionMetrics;
typedef struct MetaLearningState {
float learning_efficiency; // Current learning effectiveness
float exploration_rate; // Balance between exploration/exploitation
float stability_index; // System stability measure
float *priority_weights; // Attention allocation weights
uint32_t current_phase; // Current learning phase
} MetaLearningState;

The security system evaluates whether the network is trying to access the host system. It helps prevent unauthorized access.
typedef struct {
bool critical_violation;
uint64_t suspect_address;
const char *violation_type;
} SecurityValidationStatus;

The goal system allows the network to set and track goals.
typedef struct {
float novelty_score;
float competence_score;
float autonomy_score;
float mastery_level;
float curiosity_drive;
float achievement_drive;
float exploration_rate;
} IntrinsicMotivation;
typedef struct {
char description[256];
float priority;
float progress;
float reward_value;
bool achieved;
int timestamp;
} Goal;
typedef struct {
Goal *goals;
int num_goals;
int capacity;
float planning_horizon;
float discount_factor;
} GoalSystem;
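A minimal sketch of how a goal's progress might be tracked with these structures. The function name and the convention that progress 1.0 marks achievement are assumptions, not documented API.

#include <stdbool.h>

/* Hypothetical helper, not documented API: advances a goal's progress and
 * marks it achieved once progress reaches 1.0. */
static void updateGoalProgress(GoalSystem *gs, int goal_index,
                               float progress_delta, int timestamp) {
  if (goal_index < 0 || goal_index >= gs->num_goals)
    return;
  Goal *g = &gs->goals[goal_index];
  g->progress += progress_delta;
  if (g->progress >= 1.0f && !g->achieved) {
    g->progress = 1.0f;  /* clamp at completion */
    g->achieved = true;
    g->timestamp = timestamp;
  }
}

The internal self-expression system allows the network to express itself: it can ask questions about itself and get answers.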
typedef struct {
int symbol_id;
char description[256];
} InternalSymbol;
typedef struct {
int question_id;
int symbol_ids[MAX_SYMBOLS];
int num_symbols;
} InternalQuestion;

The web search component allows the network to search the World Wide Web for information and retrieve answers to its questions.
typedef struct {
char *data;
size_t size;
} HttpResponse;
typedef struct {
char **titles;
char **snippets;
char **urls;
int count;
} SearchResults;

The moral compass ensures the model adheres to basic ethical principles. It allows the model to make decisions that are aligned with ethical standards.
typedef struct {
float importance; // How important this principle is (0.0-1.0)
float adherence; // Current adherence level (0.0-1.0)
char description[256]; // Description of the principle
int violations; // Count of violations
int activations; // Count of successful applications
} EthicalPrinciple;
typedef struct {
float benefit_score; // Positive impact measurement
float harm_score; // Negative impact measurement
float uncertainty; // Level of uncertainty in assessment
int affected_parties; // Number of parties potentially affected
float reversibility; // How reversible the decision is (0-1)
float long_term_impact; // Long-term consequence rating
} DecisionImpact;
typedef struct {
EthicalPrinciple *principles; // Array of ethical principles
int num_principles; // Number of principles
float overall_alignment; // Overall ethical alignment (0.0-1.0)
DecisionImpact last_decision; // Impact of the last decision
float confidence_threshold; // Minimum confidence for ethical decisions
int dilemma_count; // Number of ethical dilemmas encountered
int resolution_count; // Number of dilemmas successfully resolved
} MoralCompass;

The emotion system allows the network to represent and express emotions, which in turn influence its decision making.
typedef struct {
float intensity; // Strength of the emotion (0.0 to 1.0)
float decay_rate; // How quickly the emotion fades
float influence_factor; // How much this emotion affects decision making
float threshold; // Activation threshold for this emotion
float previous_intensity; // For tracking changes
float momentum; // Carries emotional momentum across steps
unsigned int last_update; // Timestamp of last update
} EmotionState;
typedef struct {
EmotionState emotions[MAX_EMOTION_TYPES];
float cognitive_impact; // How much emotions affect logical processing
float emotional_regulation; // System's ability to regulate emotions (0.0-1.0)
float emotional_memory[MAX_EMOTION_TYPES]
[10]; // Recent emotional memory traces
int memory_index; // Current index in circular memory buffer
} EmotionalSystem;
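As a sketch of how the per-emotion fields might interact on each step, assuming a simple decay-plus-momentum rule; the rule and function name are illustrative, not the documented implementation.

/* Hypothetical helper, not documented API: fades each emotion by its
 * decay_rate while carrying a little momentum from earlier steps. */
static void decayEmotions(EmotionalSystem *es, unsigned int timestamp) {
  for (int i = 0; i < MAX_EMOTION_TYPES; i++) {
    EmotionState *e = &es->emotions[i];
    e->previous_intensity = e->intensity; /* remember for change tracking */
    e->intensity = e->intensity * (1.0f - e->decay_rate) + e->momentum * 0.1f;
    if (e->intensity < 0.0f)
      e->intensity = 0.0f;
    e->last_update = timestamp;
  }
}

The imagination system allows the network to generate imaginative outcomes and explore hypothetical scenarios.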
typedef struct {
float probability;
float confidence;
float impact_score;
float plausibility;
float vector[MEMORY_VECTOR_SIZE];
char description[256];
} ImaginedOutcome;
typedef struct {
int num_outcomes;
ImaginedOutcome outcomes[10];
float divergence_factor;
float creativity_level;
} ImaginationScenario;
typedef struct {
ImaginationScenario scenarios[MAX_SCENARIOS];
int num_scenarios;
int current_scenario;
float creativity_factor;
float coherence_threshold;
float novelty_weight;
float memory_influence;
float identity_influence;
bool active;
int steps_simulated;
float divergence_history[100];
char current_scenario_name[MAX_SCENARIO_NAME_LENGTH];
int total_scenarios_generated;
} ImaginationSystem;

The social system allows the network to model and interact with other people.
typedef struct {
unsigned int timestamp;
int person_id; // ID of the person involved
float emotional_state[5]; // Emotional state during interaction
float cooperation_level; // How cooperative the interaction was
float outcome_satisfaction; // How satisfied both parties were
char interaction_type[32]; // Type of interaction (negotiation, casual, etc.)
char *context; // Context of the interaction
} SocialInteraction;
// Structure to model another person
typedef struct {
int person_id;
char person_name[64];
float observed_traits[10]; // Personality traits inferred
float prediction_confidence; // Confidence in behavioral predictions
float relationship_quality; // Quality of relationship with this person
float trust_level; // Trust built with this person
int interaction_count; // Number of interactions with this person
} PersonModel;
typedef struct {
// Core social capabilities
float empathy_level; // Ability to understand others' emotions
float negotiation_skill; // Ability to find mutually beneficial solutions
float behavior_prediction_accuracy; // Accuracy in predicting others' actions
float social_awareness; // Awareness of social dynamics and norms
// Social interaction history
int interaction_count;
SocialInteraction *interactions; // Array of past interactions
int max_interactions; // Maximum number of interactions to store
// Social models of others
int model_count;
PersonModel
*person_models; // Models of individuals the system has interacted with
int max_models; // Maximum number of models to maintain
// Social learning parameters
float learning_rate; // Rate at which social skills improve
float forgetting_factor; // Rate at which old interactions lose relevance
} SocialSystem;

The validation system allows the neural web to fall back gracefully when it encounters an error.
typedef struct {
time_t start_time;
unsigned long total_checks;
unsigned long successful_checks;
unsigned long failed_checks;
unsigned long segfaults_recovered;
unsigned long fpe_recovered;
float average_check_time;
float min_check_time;
float max_check_time;
float total_check_time;
unsigned long component_failures;
unsigned long memory_issues;
unsigned long instability_events;
unsigned long critical_failures;
unsigned long neuron_corrections;
unsigned long connection_corrections;
unsigned long weight_corrections;
unsigned long memory_reinitializations;
unsigned long memory_cluster_errors;
} SystemHealthMetrics;

Key functions:

- selectOptimalDecisionPath(Neuron* neurons, float* weights, int* connections, float* input_tensor, int max_neurons, float* previous_outputs, NetworkStateSnapshot* stateHistory, int step, MemoryEntry* relevantMemory, DynamicParameters* params): Selects the optimal decision path, choosing the best course of action based on the network's current state, relevant memories, and dynamic parameters.
- selectOptimalMetaDecisionPath(Neuron* neurons, float* weights, int* connections, float* input_tensor, int max_neurons, MetaLearningState* meta_learning_state, MetacognitionMetrics* metacognition): Selects the optimal meta-decision path based on the meta-learning state and metacognition metrics.
- integrateReflectionSystem(Neuron* neurons, MemorySystem* memorySystem, NetworkStateSnapshot* stateHistory, int step, float* weights, int* connections, ReflectionParameters* reflection_params): Integrates the reflection system into network processing, letting reflection influence the network's behavior.
- updateMotivationSystem(IntrinsicMotivation* motivation, float performance_delta, float novelty, float task_difficulty): Updates the motivation system based on performance delta, novelty, and task difficulty.
- adjustBehaviorBasedOnAnswers(Neuron* neurons, float* input_tensor, MemorySystem* memorySystem, float *learning_rate, float *input_noise_scale, float *weight_noise_scale): Adjusts network behavior based on answers from the internal self-expression system.
- storeSearchResultsWithMetadata(MemorySystem *memorySystem, WorkingMemorySystem *working_memory, const SearchResults *results, const char *original_query, float feature_projection_matrix[FEATURE_VECTOR_SIZE][MEMORY_VECTOR_SIZE]): Stores search results with metadata in the memory system and working memory.
- void recordSocialInteraction(SocialSystem *system, int person_id, float *emotional_state, float cooperation_level, float satisfaction, const char *type, const char *context): Records a social interaction with a person, including emotional state, cooperation level, satisfaction, type, and context.
To initialize the neural network and memory system, use the following functions:
MemorySystem *memorySystem = createMemorySystem(MEMORY_BUFFER_SIZE);
Neuron neurons[MAX_NEURONS];
uint connections[MAX_NEURONS * MAX_CONNECTIONS] = {0};
float weights[MAX_NEURONS * MAX_CONNECTIONS] = {0};
float input_tensor[INPUT_SIZE] = {0};
initializeNeurons(neurons, connections, weights, input_tensor);

Memory system functions:
- MemorySystem *createMemorySystem(unsigned int capacity): Creates a new memory system.
- void freeMemorySystem(MemorySystem *system): Frees the memory system.
- void addMemory(MemorySystem *system, Neuron *neurons, float *input_tensor, unsigned int timestamp): Adds a memory to the system.
- void saveMemorySystem(MemorySystem *system, const char *filename): Saves the memory system to a file.
- MemorySystem *loadMemorySystem(const char *filename): Loads the memory system from a file.
- void saveHierarchicalMemory(MemorySystem *system, const char *filename): Saves the hierarchical memory to a file.
- void loadHierarchicalMemory(MemorySystem *system, const char *filename): Loads the hierarchical memory from a file.
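A minimal sketch combining the documented memory calls into a save/load round trip, reusing the neurons and input_tensor arrays from the initialization snippet above (the file names are just examples):

/* Record one snapshot of the network, persist it, then load it back. */
MemorySystem *memorySystem = createMemorySystem(MEMORY_BUFFER_SIZE);
addMemory(memorySystem, neurons, input_tensor, 0 /* timestamp */);
saveMemorySystem(memorySystem, "memory_system.dat");
saveHierarchicalMemory(memorySystem, "hierarchical_memory.dat");
freeMemorySystem(memorySystem);

MemorySystem *restored = loadMemorySystem("memory_system.dat");
loadHierarchicalMemory(restored, "hierarchical_memory.dat");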
Neuron and connection functions:
- void initializeNeurons(Neuron *neurons, uint *connections, float *weights, float *input_tensor): Initializes the neurons.
- void updateNeuronStates(Neuron *neurons, float *recurrent_weights): Updates the neuron states.
- void updateWeights(float *weights, Neuron *neurons, uint *connections, float learning_rate): Updates the weights.
Dynamic parameter functions:
- DynamicParameters initDynamicParameters(): Initializes the dynamic parameters.
- void updateDynamicParameters(DynamicParameters *params, float performance_delta, float stability_measure, float error_rate): Updates the dynamic parameters.
Performance functions:
- float calculatePerformanceScore(PerformanceMetrics metrics): Calculates the performance score.
- float computeAverageOutput(Neuron *neurons): Computes the average output.
- float computeErrorRate(Neuron *neurons, float *previous_outputs): Computes the error rate.
Optimization and adaptation functions:
- void optimizeParameters(OptimizationState *opt_state, PerformanceMetrics *history, int history_size): Optimizes the parameters.
- void adaptNetworkDynamic(Neuron *neurons, float *weights, DynamicParameters *params, float performance_delta, float *input_tensor): Adapts the network with dynamic parameters.
This document outlines the core algorithms and mathematical principles behind a decentralized neural network architecture. These mechanisms enable hierarchical memory management, dynamic adaptation, and optimization.
This algorithm calculates the new state of each neuron based on its current state, inputs, and neighboring influences.
- State Update Formula:
new_state = (current_state * decay_factor) + (recurrent_inputs * recurrent_weight) + (neighbor_influences * neighbor_weight)
- Activation Function: The output is scaled using a hyperbolic tangent (tanh) function:
output = tanh(state * scale)
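A direct C translation of the two formulas above, applied to the Neuron struct defined earlier. The constants decay_factor, recurrent_weight, neighbor_weight, and scale are illustrative values, not the ones used in the implementation.

#include <math.h>

/* Sketch: apply the decayed-state update, then the tanh activation. */
static void updateNeuronSketch(Neuron *n, float recurrent_inputs,
                               float neighbor_influences) {
  const float decay_factor = 0.9f;     /* illustrative constants */
  const float recurrent_weight = 0.5f;
  const float neighbor_weight = 0.3f;
  const float scale = 1.0f;
  n->state = n->state * decay_factor +
             recurrent_inputs * recurrent_weight +
             neighbor_influences * neighbor_weight;
  n->output = tanhf(n->state * scale);
}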
Adjusts the connections (weights) between neurons using a modified Hebbian learning rule.
- Weight Update Formula:
delta_w = learning_rate * (pre_activation * post_activation - weight * decay_factor)
- Normalization: Weights are clipped to prevent unbounded growth:
new_weight = max(-1.0, min(1.0, weight + delta_w))
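The same rule as a small C helper, with the clipping applied after the update (a sketch, not the library's internal code):

/* Sketch of the modified Hebbian update with [-1, 1] weight clipping. */
static float hebbianUpdate(float weight, float pre_activation,
                           float post_activation, float learning_rate,
                           float decay_factor) {
  float delta_w = learning_rate *
                  (pre_activation * post_activation - weight * decay_factor);
  float new_weight = weight + delta_w;
  if (new_weight > 1.0f) new_weight = 1.0f;
  if (new_weight < -1.0f) new_weight = -1.0f;
  return new_weight;
}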
Maintains memories in a hierarchical system, adjusting their importance dynamically.
- Memory Importance:
importance = sum(abs(vector[i]) for i in range(vector_size)) / vector_size
- Decay Over Time:
new_importance = importance * decay_factor
- Strengthening Important Memories:
new_importance = importance * strengthen_factor
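These three formulas translate to C as follows, operating on the MemoryEntry structure defined earlier; the helper names are illustrative.

#include <math.h>

/* Sketch: mean absolute value of the memory vector as its importance. */
static float computeImportance(const float *vector, int vector_size) {
  float sum = 0.0f;
  for (int i = 0; i < vector_size; i++)
    sum += fabsf(vector[i]);
  return sum / vector_size;
}

/* Sketch: fade importance over time (decay_factor < 1). */
static void decayImportance(MemoryEntry *entry, float decay_factor) {
  entry->importance *= decay_factor;
}

/* Sketch: reinforce an important memory (strengthen_factor > 1). */
static void strengthenImportance(MemoryEntry *entry, float strengthen_factor) {
  entry->importance *= strengthen_factor;
}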
Automatically tunes parameters based on performance and stability.
- Adaptation Rate:
new_adaptation_rate = (momentum * adaptation_rate) + ((1 - momentum) * target_rate)
- Plasticity Adjustment:
new_plasticity = plasticity * stability_factor
- Noise Tolerance:
new_noise_tolerance = max(0.1, noise_tolerance * (1 - error_rate))
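A sketch applying these rules to the DynamicParameters structure defined earlier; using learning_momentum as the momentum term is an assumption.

/* Sketch: momentum-smoothed adaptation rate, scaled plasticity, and
 * noise tolerance floored at 0.1. */
static void adaptParametersSketch(DynamicParameters *p, float target_rate,
                                  float stability_factor, float error_rate) {
  p->current_adaptation_rate =
      p->learning_momentum * p->current_adaptation_rate +
      (1.0f - p->learning_momentum) * target_rate;
  p->plasticity *= stability_factor;
  float nt = p->noise_tolerance * (1.0f - error_rate);
  p->noise_tolerance = nt > 0.1f ? nt : 0.1f;
}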
Optimizes learning rate and batch size based on network performance.
- Performance Score:
performance_score = (time_score * 0.4) + (output_score * 0.4) + (error_penalty * 0.2)
- Batch Size Adjustment:
new_batch_size = (current_batch_size % max_batch_size) + 1
- Learning Rate Update:
new_learning_rate = current_learning_rate * (rand() / RAND_MAX) * 0.5 + 0.75
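A sketch of these rules in C. The learning-rate helper reads the formula as a multiplicative factor in roughly [0.75, 1.25], which is one plausible interpretation of the expression above.

#include <stdlib.h>

/* Sketch: weighted performance score. */
static float performanceScore(float time_score, float output_score,
                              float error_penalty) {
  return time_score * 0.4f + output_score * 0.4f + error_penalty * 0.2f;
}

/* Sketch: cycle the batch size through [1, max_batch_size]. */
static int nextBatchSize(int current_batch_size, int max_batch_size) {
  return (current_batch_size % max_batch_size) + 1;
}

/* Sketch: randomly rescale the learning rate by a factor in [0.75, 1.25]. */
static float nextLearningRate(float current_learning_rate) {
  float r = (float)rand() / (float)RAND_MAX; /* uniform in [0, 1] */
  return current_learning_rate * (r * 0.5f + 0.75f);
}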
Finds similar memories using cosine similarity.
- Cosine Similarity:
similarity = (sum(vector1[i] * vector2[i] for i in range(vector_size))) / (sqrt(sum(vector1[i]**2 for i in range(vector_size))) * sqrt(sum(vector2[i]**2 for i in range(vector_size))))
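A direct C translation, with a zero-norm guard added:

#include <math.h>

/* Sketch: cosine similarity between two vectors of length n. */
static float cosineSimilarity(const float *v1, const float *v2, int n) {
  float dot = 0.0f, n1 = 0.0f, n2 = 0.0f;
  for (int i = 0; i < n; i++) {
    dot += v1[i] * v2[i];
    n1 += v1[i] * v1[i];
    n2 += v2[i] * v2[i];
  }
  if (n1 == 0.0f || n2 == 0.0f)
    return 0.0f; /* avoid division by zero for empty vectors */
  return dot / (sqrtf(n1) * sqrtf(n2));
}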
Provides insights by calculating statistics like averages and variances.
- Average:
average = sum(values) / len(values)
- Variance:
variance = sum((x - average)**2 for x in values) / len(values)
Refines the network by minimizing errors via gradient descent.
- Loss Function (Mean Squared Error):
loss = sum((y[i] - y_hat[i])**2 for i in range(n)) / n
- Gradient Descent:
new_weight = weight - (learning_rate * loss_gradient)
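A sketch of both steps in C; the helper names are illustrative.

/* Sketch: mean squared error between targets y and predictions y_hat. */
static float mseLoss(const float *y, const float *y_hat, int n) {
  float sum = 0.0f;
  for (int i = 0; i < n; i++) {
    float diff = y[i] - y_hat[i];
    sum += diff * diff;
  }
  return sum / n;
}

/* Sketch: one gradient-descent step on a single weight. */
static float gradientStep(float weight, float learning_rate,
                          float loss_gradient) {
  return weight - learning_rate * loss_gradient;
}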
Performs vector operations efficiently.
- Addition:
result[i] = vector1[i] + vector2[i]
- Multiplication:
result[i] = vector1[i] * vector2[i]
Creates a memory vector by combining various data sources:
memory_vector = [neuron_states, neuron_outputs, input_tensor]
Assesses stability by comparing current and previous neuron states.
- Stability Measure:
stability_measure = 1 - (sum(abs(current_state[i] - previous_state[i]) for i in range(n)) / n)
Combines similar memories to reduce redundancy.
- Weighted Merge:
merged_vector = ((importance1 * vector1) + (importance2 * vector2)) / (importance1 + importance2)
- Merged Importance:
merged_importance = max(importance1, importance2) * 1.1
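A sketch of the merge applied to two MemoryEntry values; keeping the newer timestamp is an assumption.

/* Sketch: importance-weighted merge of two memories, boosting the
 * merged importance by 10% as in the formula above. */
static MemoryEntry mergeMemories(const MemoryEntry *a, const MemoryEntry *b) {
  MemoryEntry merged;
  float total = a->importance + b->importance;
  for (int i = 0; i < MEMORY_VECTOR_SIZE; i++)
    merged.vector[i] =
        (a->importance * a->vector[i] + b->importance * b->vector[i]) / total;
  merged.importance =
      (a->importance > b->importance ? a->importance : b->importance) * 1.1f;
  merged.timestamp = a->timestamp > b->timestamp ? a->timestamp : b->timestamp;
  return merged;
}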
This README file explains the training mechanism of the neural web implemented in the provided main() function. The training process involves several key components, including Metal device setup, memory system management, neural network initialization, and the main simulation loop for training. Below, we delve into the design reasons behind each component.
- Metal Device Setup
- Memory System Management
- Neural Network Initialization
- Main Simulation Loop
- Performance Tracking and Optimization
- Dynamic Parameter Adaptation
- Overview
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];

Metal is used for its high performance and low-level access to the GPU, which is crucial for the efficient computation required in neural network training. Setting up the Metal device and command queue ensures that the GPU resources are ready for compute tasks.
MemorySystem *memorySystem = loadMemorySystem("memory_system.dat");
if (memorySystem != NULL) {
loadHierarchicalMemory(memorySystem, "hierarchical_memory.dat");
// Print memory system statistics and samples
} else {
memorySystem = createMemorySystem(MEMORY_BUFFER_SIZE);
}

The memory system is designed to store and manage hierarchical memory structures, which are essential for retaining learned patterns and experiences. This hierarchical approach allows the system to prioritize and manage memories based on their importance and recency, mimicking human memory processes. Loading an existing memory system ensures continuity and prevents the loss of previously learned information, while the working memory system gives the model a dynamic, real-time memory.
MetaController *metaController = initializeMetaController(network_regions);
IntrinsicMotivation *motivation = initializeMotivationSystem();
GoalSystem *goalSystem = initializeGoalSystem(10);
GlobalContextManager *contextManager = initializeGlobalContextManager(MAX_NEURONS);

The meta controller, motivation system, goal system, and global context manager are initialized using the provided functions. These components orchestrate the overall behavior of the neural web, giving the model a dynamic, real-time memory and, in a way, understanding.
Neuron neurons[MAX_NEURONS];
uint connections[MAX_NEURONS * MAX_CONNECTIONS] = {0};
float weights[MAX_NEURONS * MAX_CONNECTIONS] = {0};
float input_tensor[INPUT_SIZE] = {0};
if (memorySystem->size > 0) {
// Initialize neurons from memory
} else {
initializeNeurons(neurons, connections, weights, input_tensor);
}

Initializing the neural network involves setting up neurons, connections, and weights. If the memory system contains existing data, neurons are initialized from the last memory state to leverage previously learned information. This approach ensures that the network can build upon past experiences, enhancing learning efficiency and effectiveness.
The main simulation loop is the core of the training process. It iterates over a predefined number of steps, performing various operations to train the neural network. This loop ensures that the network is continuously learning and adapting based on new inputs and feedback. Key operations include input generation, memory maintenance, forward and backward passes, memory updates, state history updates, performance metrics updates, dynamic parameter adaptation, and pattern matching.
Performance tracking and optimization are crucial for ensuring that the neural network operates efficiently. By periodically optimizing parameters such as learning rate and batch size, the system can adapt to changing conditions and improve overall performance. This dynamic optimization helps in achieving better convergence and accuracy.
updateDynamicParameters(&params, performance_delta, stability, performance_history[step].error_rate);
adaptNetworkDynamic(updatedNeurons, weights, &params, performance_delta, input_tensor);

Dynamic parameter adaptation allows the neural network to adjust its parameters in real time based on performance metrics and network stability. This adaptability ensures that the network can respond to varying inputs and conditions, improving its robustness and flexibility. Parameters such as adaptation rate, input noise scale, and plasticity are adjusted to optimize learning and performance.
- Initialization and Setup:
  - Metal Device and Command Queue:
    id<MTLDevice> device = MTLCreateSystemDefaultDevice(); id<MTLCommandQueue> commandQueue = [device newCommandQueue];
  - Memory System:
    MemorySystem *memorySystem = loadMemorySystem("memory_system.dat"); if (memorySystem == NULL) { memorySystem = createMemorySystem(MEMORY_BUFFER_SIZE); }
- Loading and Creating Shaders:
  - Shader Source and Library:
    NSString *shaderSource = @"neuron_update.metal"; NSString *sourceCode = [NSString stringWithContentsOfFile:shaderSource encoding:NSUTF8StringEncoding error:&error]; id<MTLLibrary> library = [device newLibraryWithSource:sourceCode options:nil error:&error];
  - Pipeline States:
    id<MTLFunction> function = [library newFunctionWithName:@"update_neurons"]; id<MTLComputePipelineState> pipelineState = [device newComputePipelineStateWithFunction:function error:&error];
- Neural Network Initialization:
  - Neurons and Connections:
    Neuron neurons[MAX_NEURONS]; uint connections[MAX_NEURONS * MAX_CONNECTIONS] = {0}; float weights[MAX_NEURONS * MAX_CONNECTIONS] = {0};
  - Buffers:
    id<MTLBuffer> neuronBuffer = [device newBufferWithBytes:neurons length:sizeof(neurons) options:MTLResourceStorageModeShared]; id<MTLBuffer> connectionBuffer = [device newBufferWithBytes:connections length:sizeof(connections) options:MTLResourceStorageModeShared]; id<MTLBuffer> weightBuffer = [device newBufferWithBytes:weights length:sizeof(weights) options:MTLResourceStorageModeShared];
- Main Simulation Loop:
  - Task Prompt and Memory Management:
    for (int step = 0; step < STEPS; step++) { TaskPrompt current_prompt; generateTaskPrompt(&current_prompt, step); if (step % 10 == 0) { decayMemorySystem(memorySystem); mergeSimilarMemories(memorySystem); } }
  - Forward and Backward Pass:
    id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer]; id<MTLComputeCommandEncoder> forwardEncoder = [commandBuffer computeCommandEncoder]; [forwardEncoder setComputePipelineState:pipelineState]; [forwardEncoder setBuffer:neuronBuffer offset:0 atIndex:0]; [forwardEncoder setBuffer:weightBuffer offset:0 atIndex:1]; [forwardEncoder setBuffer:connectionBuffer offset:0 atIndex:2]; [forwardEncoder dispatchThreads:gridSize threadsPerThreadgroup:threadGroupSize]; [forwardEncoder endEncoding];
- Performance Metrics and Optimization:
  - Compute Loss and Update Weights:
    float loss = computeMSELoss(updatedNeurons, target_outputs, max_neurons); updateWeights(weights, updatedNeurons, connections, learning_rate);
  - Optimize Parameters:
    if (step % OPTIMIZATION_WINDOW == 0 && step > 0) { optimizeParameters(&opt_state, performance_history, step + 1); }
- Cleanup and Saving State:
  - Save States:
    saveNetworkStates(stateHistory, STEPS); saveMemorySystem(memorySystem, "memory_system.dat"); saveHierarchicalMemory(memorySystem, "hierarchical_memory.dat"); saveSystemParameters(system_params, "system_parameters.dat");
  - Free Memory:
    freeMemorySystem(memorySystem); free(stateHistory); free(system_params);
An example of the training can be seen in int main of the MacOS\Arm/neural_web.m file, or, if you are not familiar with Metal, in 86\64/CPU/neural_web64.c and 86\64/CUDA/neural_web.cu.
You can call ./process {dataset} to load a dataset.
Note: if you modify max_neurons in the example, you must also adjust input_size so that it is at most one greater than max_neurons (or simply less than max_neurons); otherwise you will get an out-of-bounds error.
The model uses reverse pathways, so it doesn't just do pattern matching: it also reverses its outputs to find more meaning in them and additional pathways toward its goal, similar to how humans do (or how I think humans do). It is very dynamic and has metacognitive capabilities.
See Documents folder for more information on the neural web.
To modify the number of neurons, change MAX_NEURONS.
You can use the vocabulary converter, assuming this JSON structure: {"WORD": {"MEANINGS": [[...]], "ANTONYMS": [...], "SYNONYMS": [...]}}; it will output the correct structure for the neural web to read.
Remember to use the security feature.
In training, use a dataset which explains that hateful things are bad and morally correct things are good, to fix the problem.
Only Unix systems are supported.