Commit 84b0c67: "README's for A1 & A2"
1 parent 0c03e89

2 files changed: +110, -0 lines

HopfieldNetworkTraining/README.md

Lines changed: 89 additions & 0 deletions
## Group A - Exercise 2
***

### Context
A [Hopfield Network](https://en.wikipedia.org/wiki/Hopfield_network) operates recurrently, as follows:

- It consists of $N$ recurrently connected nodes (neurons), each with 2 possible states: $+1$ and $-1$.
- We can store a number of $N$-dimensional vectors, with possible element values $+1$ and $-1$.
- When the network is fed an $N$-dimensional vector, its state eventually settles on whichever stored vector is closest to the given vector.
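
The convergence described above amounts to repeatedly applying the update rule $s' = \mathrm{sign}(W \cdot s)$ until the state stops changing. A minimal sketch of one such step, assuming SWI-Prolog (the predicate names `recall_step/3` and `neuron_update/3` are illustrative and not part of the exercise):

```prolog
% One synchronous recall step: S1 is sign(W * S), elementwise.
% Assumes SWI-Prolog (maplist/foldl from library(apply), yall lambdas).
recall_step(W, S, S1) :-
    maplist(neuron_update(S), W, S1).

% A neuron's next state is the sign of the weighted sum of all states.
% Breaking the tie at zero toward +1 is a common convention, assumed here.
neuron_update(S, Row, V) :-
    foldl([Wj, Sj, A0, A]>>(A is A0 + Wj*Sj), Row, S, 0, Sum),
    ( Sum >= 0 -> V = 1 ; V = -1 ).
```

Feeding a stored vector through `recall_step/3` should leave it unchanged, since stored vectors are fixed points of the dynamics.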

A Hopfield Network is trained by storing a number of $N$-dimensional vectors.
This storage is implemented by calculating the $N \times N$ weight matrix of the network.

Each weight of the network is attached to a connection between 2 nodes. The weight matrix $W$ is calculated as follows:

$$ W = \sum_{i=1}^M (Y_i^T \cdot Y_i) - M \cdot I $$

Where:
- $M$ is the number of stored $N$-dimensional vectors
- $Y_i$ is one of the $1 \times N$ row vectors to be stored
- $Y_i^T$ is the $N \times 1$ column vector (the transpose of $Y_i$)
- $I$ is the $N \times N$ [identity matrix](https://en.wikipedia.org/wiki/Identity_matrix)

For example, to store $M = 3$ vectors $(+1, -1, -1, +1)$, $(-1, -1, +1, -1)$, $(+1, +1, +1, +1)$ in a network with $N = 4$ nodes:

$$ W = \begin{pmatrix}
+1 \\
-1 \\
-1 \\
+1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
+1 & -1 & -1 & +1
\end{pmatrix}
+
\begin{pmatrix}
-1 \\
-1 \\
+1 \\
-1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
-1 & -1 & +1 & -1
\end{pmatrix}
+
\begin{pmatrix}
+1 \\
+1 \\
+1 \\
+1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
+1 & +1 & +1 & +1
\end{pmatrix}
- 3 \cdot I $$

Finally,

$$ W = \begin{pmatrix}
0 & 1 & -1 & 3 \\
1 & 0 & 1 & 1 \\
-1 & 1 & 0 & -1 \\
3 & 1 & -1 & 0 \\
\end{pmatrix} $$

### Task
Implement a `hopfield/2` predicate:
- The first argument is a given list of vectors to be stored in a Hopfield Network. Every vector is itself a list of the corresponding vector elements.
- The second argument returns the matrix of network weights, as a list of lists (one list per row).
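
One possible shape for `hopfield/2`, sketched in SWI-Prolog (an illustration, not the actual code in `hopfield.pl`; the helper names are made up). It exploits the fact that subtracting $M \cdot I$ simply zeroes the diagonal, since each diagonal entry of the summed outer products equals $M$:

```prolog
% hopfield(+Vectors, -W): W is the Hopfield weight matrix for Vectors.
hopfield(Vs, W) :-
    Vs = [V|_],
    length(V, N),
    numlist(1, N, Is),
    maplist(weight_row(Vs, Is), Is, W).

% One row of W, one weight per column index.
weight_row(Vs, Is, I, Row) :-
    maplist(weight(Vs, I), Is, Row).

% Diagonal weights are 0: the M*I term cancels the sum of squares.
weight(_, I, I, 0) :- !.
% Off-diagonal: w_ij is the sum of Vi * Vj over all stored vectors.
weight(Vs, I, J, Wij) :-
    foldl(acc(I, J), Vs, 0, Wij).

acc(I, J, V, A0, A) :-
    nth1(I, V, Vi),
    nth1(J, V, Vj),
    A is A0 + Vi*Vj.
```

Querying this sketch with the three example vectors above should reproduce the matrix shown in the Execution Example.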

### Execution Example
Input:

```prolog
?- hopfield([
    [+1,-1,-1,+1],
    [-1,-1,+1,-1],
    [+1,+1,+1,+1]
], W).
```

Output:

```prolog
W = [[0,1,-1,3], [1,0,1,1], [-1,1,0,-1], [3,1,-1,0]]
```

### Implementation
The `hopfield` predicate is implemented in `hopfield.pl`, along with several helper predicates.

MatrixDiagonals/README.md

Lines changed: 21 additions & 0 deletions
## Group A - Exercise 1
***

### Task
Implement a `diags(Matrix, DiagsDown, DiagsUp)` predicate:
- `Matrix` is a given 2-D matrix (a list of lists)
- `DiagsDown` returns a list of the descending diagonals of `Matrix`
- `DiagsUp` returns a list of the ascending diagonals of `Matrix`
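
One way to implement `diags/3`, sketched in SWI-Prolog (illustrative only, not the actual `diags.pl`): elements on the same descending diagonal share a constant column-minus-row difference, and elements on the same ascending diagonal share a constant row-plus-column sum, so each element can be keyed accordingly and grouped.

```prolog
:- use_module(library(pairs)).  % group_pairs_by_key/2, pairs_values/2 (SWI)

% diags(+Matrix, -DiagsDown, -DiagsUp)
diags(Matrix, DiagsDown, DiagsUp) :-
    % Key each element by C - R: constant along a descending diagonal.
    findall(K-E, (nth1(R, Matrix, Row), nth1(C, Row, E), K is C - R), D0),
    grouped_values(D0, DiagsDown),
    % Key each element by R + C: constant along an ascending diagonal.
    findall(K-E, (nth1(R, Matrix, Row), nth1(C, Row, E), K is R + C), U0),
    grouped_values(U0, DiagsUp).

% Sort pairs by key (keysort/2 is stable, so the row-major order
% produced by findall is preserved within each diagonal), then group
% equal keys and keep only the grouped values.
grouped_values(Pairs, Values) :-
    keysort(Pairs, Sorted),
    group_pairs_by_key(Sorted, Groups),
    pairs_values(Groups, Values).
```

On the example matrix below, this keying yields the `DiagsDown` and `DiagsUp` lists shown in the Execution Example.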
### Execution Example
Input:

```prolog
?- diags([[a,b,c,d],[e,f,g,h],[i,j,k,l]], DiagsDown, DiagsUp).
```

Output:

```prolog
DiagsDown = [[i],[e,j],[a,f,k],[b,g,l],[c,h],[d]]
DiagsUp = [[a],[b,e],[c,f,i],[d,g,j],[h,k],[l]]
```
20+
### Implementation
21+
The `diags` predicate is implemented in `diags.pl`, along with several helper predicates.

0 commit comments