diff --git a/dev/articles/examples/basic-nn-module.html b/dev/articles/examples/basic-nn-module.html
index 6a2fe20207..dfaa79b78f 100644
--- a/dev/articles/examples/basic-nn-module.html
+++ b/dev/articles/examples/basic-nn-module.html
@@ -148,9 +148,9 @@
## $w
## torch_tensor
-## 0.1865
-## -1.0072
-## -0.3684
+## -0.6771
+## -0.8312
+## -0.9106
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
##
## $b
@@ -161,9 +161,9 @@ Autograd
# or individually
model$w
## torch_tensor
-## 0.1865
-## -1.0072
-## -0.3684
+## -0.6771
+## -0.8312
+## -0.9106
## [ CPUFloatType{3,1} ][ requires_grad = TRUE ]
model$b
y_pred
## torch_tensor
-## 0.1877
-## 1.6619
-## -1.2566
-## -0.1307
-## -0.8274
-## -0.5725
-## -0.3600
-## -0.3510
-## 0.9889
-## -0.5854
+## 0.2889
+## 0.8299
+## 2.1768
+## 0.7418
+## 1.9289
+## -0.5952
+## 0.3256
+## 0.4967
+## 1.0273
+## 1.0356
## [ CPUFloatType{10,1} ][ grad_fn = <AddBackward0> ]
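The module in this hunk holds a weight `w` of shape {3,1} and a bias `b`, and `y_pred` carries an `AddBackward0` grad_fn, so the forward pass is a matrix-vector product plus a bias. As a language-neutral sketch of that computation on plain Python lists (the names and shapes below are illustrative, not taken from the article):

```python
def linear_forward(X, w, b):
    # y[i] = sum_j X[i][j] * w[j] + b, i.e. a matrix-vector product plus bias
    return [sum(x_ij * w_j for x_ij, w_j in zip(row, w)) + b for row in X]

# A 2x3 input, a length-3 weight vector, and a scalar bias, mirroring the
# {3,1} weight / {10,1} prediction shapes above at a smaller scale.
X = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]
w = [0.5, -1.0, 0.25]
b = 0.1
y_pred = linear_forward(X, w, b)  # close to [1.1, -0.65]
```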
diff --git a/dev/articles/indexing.html b/dev/articles/indexing.html
index f9cc2e0201..4fd66bfdb6 100644
--- a/dev/articles/indexing.html
+++ b/dev/articles/indexing.html
@@ -264,23 +264,23 @@ The following syntax will give you the first row:
x[1,]
#> torch_tensor
-#> -0.3052
-#> -0.2663
-#> -0.4936
+#> 0.9608
+#> -0.3748
+#> -0.2395
#> [ CPUFloatType{3} ]
And this would give you the first 2 columns:
x[,1:2]
#> torch_tensor
-#> -0.3052 -0.2663
-#> 0.4514 -0.4122
+#> 0.9608 -0.3748
+#> 0.5306 -0.3664
#> [ CPUFloatType{2,2} ]
You can also use boolean vectors, for example:
x[c(TRUE, FALSE, TRUE, FALSE), c(TRUE, FALSE, TRUE, FALSE)]
#> torch_tensor
-#> 0.4386 0.7013
-#> -0.0037 -1.7523
+#> -0.6921 0.2122
+#> -0.0511 -1.3651
#> [ CPUFloatType{2,2} ]
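The boolean indexing just shown keeps exactly the rows and columns whose mask entries are TRUE (here, the first and third of each), which is why a 4x4 tensor collapses to {2,2}. A minimal sketch of that selection rule on plain Python nested lists (`mask_index` is a hypothetical helper for illustration, not part of torch):

```python
def mask_index(matrix, row_mask, col_mask):
    # Keep rows where row_mask is True, then within each kept row
    # keep the columns where col_mask is True.
    return [[v for v, keep in zip(row, col_mask) if keep]
            for row, keep_row in zip(matrix, row_mask) if keep_row]

m = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
picked = mask_index(m, [True, False, True, False],
                       [True, False, True, False])
# rows 1 and 3, columns 1 and 3: [[1, 3], [9, 11]]
```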
The above examples also work if the index were long or boolean tensors, instead of R vectors. It’s also possible to index with
diff --git a/dev/articles/loading-data.html b/dev/articles/loading-data.html
index 70ed9c43cf..b6fe74df21 100644
--- a/dev/articles/loading-data.html
+++ b/dev/articles/loading-data.html
@@ -401,7 +401,7 @@
Another example is torch_ones, which creates a tensor filled with ones.
traced_fn(torch_randn(3))
#> torch_tensor
-#> 1.6110
#> 0.0000
-#> 1.0772
+#> 0.4954
+#> 0.0000
#> [ CPUFloatType{3} ]
It’s also possible to trace nn_modules() defined in R, for example:
traced_module(torch_randn(3, 10))
#> torch_tensor
-#> -0.3029
-#> 0.0101
-#> -0.2558
+#> 0.3633
+#> 0.2271
+#> 0.2926
#> [ CPUFloatType{3,1} ][ grad_fn = <AddmmBackward0> ]
traced_dropout$eval()
# even after setting to eval mode, dropout is applied
traced_dropout(torch_ones(3,3))
#> torch_tensor
-#> 2 2 0
#> 2 2 2
+#> 2 2 0
#> 0 0 2
#> [ CPUFloatType{3,3} ]
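The behaviour shown above (dropout still active after `$eval()`) follows from how tracing works: a trace records the operations that actually ran at trace time, so the training flag in effect during tracing is frozen into the traced graph. A toy sketch of that freezing in plain Python (`dropout` and `trace` here are simplified stand-ins, not the torch implementations):

```python
import random

def dropout(x, p=0.5, training=True):
    # Zero each element with probability p; rescale survivors by 1/(1-p).
    if not training:
        return list(x)
    return [0.0 if random.random() < p else v / (1 - p) for v in x]

def trace(fn, **frozen_kwargs):
    # Toy tracer: the keyword arguments in effect at trace time are baked
    # into the returned function, mirroring how a trace records only the
    # ops (and flags) that actually ran.
    def traced(x):
        return fn(x, **frozen_kwargs)
    return traced

traced_dropout = trace(dropout, p=0.5, training=True)
# There is no way to tell the traced function about "eval mode":
# training=True was frozen in, so zeros still appear in the output.
out = traced_dropout([1.0] * 8)
```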
We still manually compute the forward pass, and we still manually update the weights. In the last two chapters of this section, we’ll see how these parts of the logic can be made more modular and reusable, as
diff --git a/dev/pkgdown.yml b/dev/pkgdown.yml
index dee9784413..fe1637c531 100644
--- a/dev/pkgdown.yml
+++ b/dev/pkgdown.yml
@@ -20,7 +20,7 @@ articles:
tensor-creation: tensor-creation.html
torchscript: torchscript.html
using-autograd: using-autograd.html
-last_built: 2024-06-20T12:34Z
+last_built: 2024-06-20T13:28Z
urls:
reference: https://torch.mlverse.org/docs/reference
article: https://torch.mlverse.org/docs/articles
diff --git a/dev/reference/distr_bernoulli.html b/dev/reference/distr_bernoulli.html
index 736ef194e3..551750aec0 100644
--- a/dev/reference/distr_bernoulli.html
+++ b/dev/reference/distr_bernoulli.html
@@ -134,7 +134,7 @@