
Commit d7ad2ee

Author: Mallikarjunarao Kosuri
Commit message: LaTeX in text
1 parent aca1d44 commit d7ad2ee

File tree

1 file changed (+88, -3 lines)


extra/latext_functions.txt

Lines changed: 88 additions & 3 deletions
@@ -1,6 +1,91 @@
-Linear Regression
-Hypothesis
-Cost Fucntion

# Supervised Learning

## Linear Regression

### Hypothesis
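Presumably the hypothesis intended here is the standard single-variable linear model; the formula below is an assumed sketch, in the same notation as the cost function that follows:

```
% assumed single-variable hypothesis
h_\theta(x)=\theta_0+\theta_1 x
```
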
### Cost Function

```
J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y}_i-y_i)^2=\frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x_i)-y_i)^2
```
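
Differentiating this cost gives the gradients used by the gradient descent updates later in the file; a sketch of the derivation step, assuming simultaneous updates of \theta_0 and \theta_1:

```
% assumed partial derivatives of J(\theta_0,\theta_1); the 1/2 cancels the factor of 2 from the square
\frac{\partial}{\partial\theta_0}J(\theta_0,\theta_1)=\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x_i)-y_i)\\
\frac{\partial}{\partial\theta_1}J(\theta_0,\theta_1)=\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x_i)-y_i)\,x_i
```
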

## Linear Regression with multiple variables

### Hypothesis
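Presumably the hypothesis here is the usual linear model over n features; a sketch, assuming the convention x_0 = 1:

```
% assumed multivariate hypothesis, with x_0 = 1 by convention
h_\theta(x)=\theta^Tx=\theta_0x_0+\theta_1x_1+\dots+\theta_nx_n
```
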
### Cost Function

```
J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y}^{(i)}-y^{(i)})^2=\frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2
```

### Gradient Descent

```
repeat \hspace*{1mm} until \hspace*{1mm} convergence: \{\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})\cdot x_j^{(i)} \hspace*{8mm} for \hspace*{1mm} j:=0..n
\\\hspace*{6mm}\}
```
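
Equivalently, the whole update can be written in vectorized form; the symbols X (the m x (n+1) design matrix with x_0 = 1 in each row) and y (the target vector) are assumptions, not defined in the original formulas:

```
% assumed vectorized form of the same update
\theta:=\theta-\frac{\alpha}{m}X^T(X\theta-y)
```
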

## Logistic Regression

### Hypothesis

```
h_\theta(x)=g(\theta^Tx)
```
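
Here g is presumably the sigmoid (logistic) function, which squashes \theta^Tx into (0, 1) so the hypothesis can be read as an estimated probability that y = 1:

```
% assumed definition of g (sigmoid / logistic function)
g(z)=\frac{1}{1+e^{-z}} \hspace*{8mm} h_\theta(x)=\frac{1}{1+e^{-\theta^Tx}}
```
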
### Cost Function

```
J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}\log(h_\theta(x^{(i)}))+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]
```
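
The summand is the per-example cross-entropy cost; written out case by case it reads (an equivalent restatement, assuming y \in \{0,1\}):

```
% per-example cost, assuming y \in \{0,1\}
\mathrm{Cost}(h_\theta(x),y)=\begin{cases}-\log(h_\theta(x)) & \text{if } y=1\\-\log(1-h_\theta(x)) & \text{if } y=0\end{cases}
```
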

### Gradient Descent

```
repeat \hspace*{1mm} until \hspace*{1mm} convergence: \{\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})\cdot x_j^{(i)} \hspace*{8mm} for \hspace*{1mm} j:=0..n
\\\hspace*{6mm}\}
```

## Logistic Regression with multiple variables

### Hypothesis

```
h_\theta(x)=g(\theta^Tx)
```

### Cost Function

```
J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}\log(h_\theta(x^{(i)}))+(1-y^{(i)})\log(1-h_\theta(x^{(i)}))]+\frac{\lambda}{2m}\sum_{j=1}^n\theta_j^2
```

### Gradient Descent

```
Repeat: \{
\\
\hspace*{20mm}\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_0^{(i)}
\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}+\frac{\lambda}{m}\theta_j]\hspace*{8mm}j\in\{1,2,\dots,n\}
\\
\hspace*{6mm}\}
```
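
For j \ge 1 the same update can be rearranged so the regularization term appears as a shrinkage ("weight decay") factor on \theta_j; this is an equivalent restatement of the rule above:

```
% equivalent form of the regularized update for j >= 1
\theta_j:=\theta_j\left(1-\alpha\frac{\lambda}{m}\right)-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}
```
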
## Neural Networks

### Hypothesis

```
h_\theta(x)=g(\theta^Tx)
```
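
For a network with more than one layer, the hypothesis is presumably evaluated layer by layer (forward propagation); a sketch in the \Theta notation of the cost function below, assuming a^{(1)} = x and L layers:

```
% assumed forward propagation for an L-layer network, with a^{(1)} = x
z^{(l+1)}=\Theta^{(l)}a^{(l)} \hspace*{8mm} a^{(l+1)}=g(z^{(l+1)}) \hspace*{8mm} h_\Theta(x)=a^{(L)}
```
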
### Cost Function

```
J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}[y_k^{(i)}\log((h_\Theta(x^{(i)}))_k)+(1-y_k^{(i)})\log(1-(h_\Theta(x^{(i)}))_k)]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}(\Theta_{j,i}^{(l)})^2
```

### Gradient Descent

```
Repeat: \{
\\
\hspace*{20mm}\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_0^{(i)}
\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}+\frac{\lambda}{m}\theta_j]\hspace*{8mm}j\in\{1,2,\dots,n\}
\\
\hspace*{6mm}\}
```
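
The block above repeats the logistic-regression update; for a multi-layer network each weight of \Theta is presumably updated with the partial derivatives of J(\Theta), which are normally obtained by backpropagation. The generic form below is an assumed sketch in the same notation:

```
% assumed generic update; the partial derivatives are typically computed by backpropagation
\Theta_{j,i}^{(l)}:=\Theta_{j,i}^{(l)}-\alpha\frac{\partial}{\partial\Theta_{j,i}^{(l)}}J(\Theta)
```
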
