* Remove unnecessary goal test in search.py (aimacode#953)
Remove unnecessary initial goal test in best_first_graph_search. The loop will catch that case immediately.
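For context, a minimal sketch of the function's shape, assuming the `Node` and `PriorityQueue` interfaces from search.py and utils.py (details are illustrative, not the repo's exact code). The goal test at the top of the loop already handles the initial node, so a separate test before the loop is redundant:

```python
from search import Node          # aima-python search module (assumed)
from utils import PriorityQueue  # aima-python utils module (assumed)

def best_first_graph_search(problem, f):
    node = Node(problem.initial)
    # No goal test here: the first loop iteration pops this node
    # and tests it immediately.
    frontier = PriorityQueue('min', f)
    frontier.append(node)
    explored = set()
    while frontier:
        node = frontier.pop()
        if problem.goal_test(node.state):
            return node
        explored.add(node.state)
        for child in node.expand(problem):
            if child.state not in explored and child not in frontier:
                frontier.append(child)
            elif child in frontier and f(child) < frontier[child]:
                # Found a cheaper path to a queued state: replace it.
                del frontier[child]
                frontier.append(child)
    return None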
* Minor Changes in Text (aimacode#955)
* Minor text change (aimacode#957)
To make it more accurate.
* Minor change in text (aimacode#956)
To make it more descriptive and accurate.
* Added relu Activation (aimacode#960)
* added relu activation
* added default parameters
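A minimal sketch of what a ReLU activation with default parameters might look like; the function names and the NumPy dependency are assumptions, not necessarily what the PR added:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(0, x)."""
    return np.maximum(0, x)

def relu_derivative(value):
    """Gradient of relu, as used during backpropagation."""
    return np.where(np.asarray(value) > 0, 1.0, 0.0)
```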
* Changes in texts (aimacode#959)
Added a few new sentences, modified the sentence structure at a few places, and corrected some grammatical errors.
* Change PriorityQueue expansion (aimacode#962)
`self.heap.append` simply appends to the end of `self.heap`, since `self.heap` is just a Python list. `self.append` calls the append method of the class instance, effectively putting the item in its proper place.
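A minimal sketch of the distinction, assuming a `utils.PriorityQueue`-style class backed by `heapq` (the stored `(priority, item)` pairs and method set are illustrative):

```python
import heapq

class PriorityQueue:
    def __init__(self, f=lambda x: x):
        self.heap = []   # plain Python list, maintained as a binary heap
        self.f = f

    def append(self, item):
        # heappush preserves the heap invariant, so the item lands in
        # its proper place rather than just at the end of the list.
        heapq.heappush(self.heap, (self.f(item), item))

    def extend(self, items):
        # The fix described above: call self.append (heap insert),
        # not self.heap.append (plain list append at the end).
        for item in items:
            self.append(item)
```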
* added GSoC 2018 contributors
A thank you to contributors from the GSoC 2018 program!
* Revamped the notebook (aimacode#963)
* Revamped the notebook
* A few changes reversed
Changed a few things from my original PR after a review from ad71.
* Text Changes + Colored Table (aimacode#964)
Made a colored table to display dog movement instead. Corrected grammatical errors, improved sentence structure, and fixed the typos found.
* Fixed typos (aimacode#970)
Removed typos and other minor text errors
* Fixed Typos (aimacode#971)
Corrected typos and made other minor text changes
* Update intro.ipynb (aimacode#969)
* Added activation functions (aimacode#968)
* Updated label_queen_conflicts function (aimacode#967)
Shortened it; finding conflicts separately and storing them in different variables has no use later in the notebook, so I believe this looks better.
README.md (1 addition & 1 deletion):

@@ -168,7 +168,7 @@ Here is a table of the implemented data structures, the figure, name of the impl
 
 # Acknowledgements
 
-Many thanks for contributions over the years. I got bug reports, corrected code, and other support from Darius Bacon, Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others. Now that the project is on GitHub, you can see the [contributors](https://github.com/aimacode/aima-python/graphs/contributors) who are doing a great job of actively improving the project. Many thanks to all contributors, especially @darius, @SnShine, @reachtarunhere, @MrDupin, and @Chipe1.
+Many thanks for contributions over the years. I got bug reports, corrected code, and other support from Darius Bacon, Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others. Now that the project is on GitHub, you can see the [contributors](https://github.com/aimacode/aima-python/graphs/contributors) who are doing a great job of actively improving the project. Many thanks to all contributors, especially @darius, @SnShine, @reachtarunhere, @MrDupin, @Chipe1, @ad71 and @MariannaSpyrakou.
intro.ipynb (1 addition & 1 deletion):

@@ -62,7 +62,7 @@
 "source": [
 "From there, the notebook alternates explanations with examples of use. You can run the examples as they are, and you can modify the code cells (or add new cells) and run your own examples. If you have some really good examples to add, you can make a github pull request.\n",
 "\n",
-"If you want to see the source code of a function, you can open a browser or editor and see it in another window, or from within the notebook you can use the IPython magic function `%psource` (for \"print source\") or the function `psource` from `notebook.py`. Also, if the algorithm has pseudocode, you can read it by calling the `pseudocode` function with input the name of the algorithm."
+"If you want to see the source code of a function, you can open a browser or editor and see it in another window, or from within the notebook you can use the IPython magic function `%psource` (for \"print source\") or the function `psource` from `notebook.py`. Also, if the algorithm has pseudocode available, you can read it by calling the `pseudocode` function with the name of the algorithm passed as a parameter."
knowledge_current_best.ipynb (23 additions & 14 deletions):
@@ -38,15 +38,15 @@
 "source": [
 "## OVERVIEW\n",
 "\n",
-"Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain. Unlike though the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n",
+"Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n",
 "\n",
 "### First-Order Logic\n",
 "\n",
-"Usually knowledge in this field is represented as **first-order logic**, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples.\n",
+"Usually knowledge in this field is represented as **first-order logic**; a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by inferring knowledge from some given examples.\n",
 "\n",
 "### Representation\n",
 "\n",
-"In this module, we use dictionaries to represent examples, with keys the attribute names and values the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\n",
+"In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\n",
 "\n",
 "For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:\n",
 "\n",
@@ -73,15 +73,15 @@
 "\n",
 "### Overview\n",
 "\n",
-"In **Current-Best Learning**, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes. The example is consistent with the hypothesis, the example is a **false positive** (real value is false but got predicted as true) and **false negative** (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly:\n",
+"In **Current-Best Learning**, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, the example is a **false positive** (real value is false but got predicted as true) and the example is a **false negative** (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly:\n",
 "\n",
-"* Consistent: We do not change the hypothesis and we move on to the next example.\n",
+"* Consistent: We do not change the hypothesis and move on to the next example.\n",
 "\n",
 "* False Positive: We **specialize** the hypothesis, which means we add a conjunction.\n",
 "\n",
 "* False Negative: We **generalize** the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.\n",
 "\n",
-"When specializing and generalizing, we should take care to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. We will go through all the specialization/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point."
+"When specializing or generalizing, we should make sure to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. We will go through all the specializations/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point."
 ]
 },
 {
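The refinement loop described in that hunk can be sketched as below; `is_consistent`, `false_positive`, `false_negative`, `specializations` and `generalizations` are assumed helpers shaped like the ones the notebook defines, and the structure mirrors `current_best_learning` in knowledge.py as a sketch, not a verbatim copy:

```python
def current_best_learning(examples, h, examples_so_far=None):
    """Refine hypothesis h example by example, backtracking on failure."""
    examples_so_far = examples_so_far or []
    if not examples:
        return h
    e = examples[0]
    if is_consistent(e, h):
        # Consistent: keep h unchanged and move on to the next example.
        return current_best_learning(examples[1:], h, examples_so_far + [e])
    elif false_positive(e, h):
        # Predicted True, actually False: try each specialization; a 'FAIL'
        # return triggers backtracking to the next candidate.
        for h2 in specializations(examples_so_far + [e], h):
            h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])
            if h3 != 'FAIL':
                return h3
    elif false_negative(e, h):
        # Predicted False, actually True: try each generalization instead.
        for h2 in generalizations(examples_so_far + [e], h):
            h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])
            if h3 != 'FAIL':
                return h3
    return 'FAIL'
```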
@@ -138,7 +138,7 @@
 "source": [
 "### Implementation\n",
 "\n",
-"As mentioned previously, examples are dictionaries (with keys the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
+"As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
 "\n",
 "We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
 "\n",
@@ -148,7 +148,9 @@
 {
 "cell_type": "code",
 "execution_count": 3,
-"metadata": {},
+"metadata": {
+"collapsed": true
+},
 "outputs": [
 {
 "data": {
@@ -370,7 +372,7 @@
 "\n",
 "We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).\n",
 "\n",
-"First we have the \"animals taking umbrellas\" example. Here we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are `Species`, `Rain` and `Coat`. The possible values are `[Cat, Dog]`, `[Yes, No]` and `[Yes, No]` respectively. Below we give seven examples (with `GOAL` we denote whether an animal will take an umbrella or not):"
+"Earlier, we had the \"animals taking umbrellas\" example. Now we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are `Species`, `Rain` and `Coat`. The possible values are `[Cat, Dog]`, `[Yes, No]` and `[Yes, No]` respectively. Below we give seven examples (with `GOAL` we denote whether an animal will take an umbrella or not):"
 ]
 },
 {
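A hypothetical call pattern for this experiment; the two examples and the starting hypothesis below are illustrative stand-ins, not the notebook's actual seven examples:

```python
animals_umbrellas = [
    {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},
    {'Species': 'Dog', 'Rain': 'No',  'Coat': 'No', 'GOAL': False},
]
initial_h = [{'Species': 'Cat'}]   # a deliberately rough starting hypothesis
h = current_best_learning(animals_umbrellas, initial_h, [])
```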
@@ -427,7 +429,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We got 5/7 correct. Not terribly bad, but we can do better. Let's run the algorithm and see how that performs."
+"We got 5/7 correct. Not terribly bad, but we can do better. Let's now run the algorithm and see how that performs in comparison to our current result."