5 - 6 - Vectorization (14 min).srt
1
00:00:00,280 --> 00:00:04,479
In this video, I'd like to tell you about the idea of vectorization.
在这段视频中 我将介绍有关向量化的内容
(字幕整理:中国海洋大学 黄海广,haiguang2000@qq.com )
2
00:00:04,480 --> 00:00:06,471
So, whether you're using Octave
无论你是用Octave
3
00:00:06,471 --> 00:00:08,277
or a similar language like MATLAB
还是别的语言 比如MATLAB
4
00:00:08,277 --> 00:00:09,604
or whether you're using Python
或者你正在用Python
5
00:00:09,604 --> 00:00:12,520
and NumPy, or Java, C or C++.
NumPy 或 Java C C++
6
00:00:12,520 --> 00:00:14,850
All of these languages have either
所有这些语言都具有
7
00:00:14,850 --> 00:00:16,708
built into them or have
各种线性代数库
8
00:00:16,720 --> 00:00:19,439
readily and easily accessible, different
这些库文件都是内置的
9
00:00:19,439 --> 00:00:21,806
numerical linear algebra libraries.
容易阅读和获取
10
00:00:21,820 --> 00:00:23,335
They're usually very well written,
它们通常写得很好
11
00:00:23,335 --> 00:00:25,695
highly optimized, often developed by
已经经过高度优化
12
00:00:25,695 --> 00:00:29,181
people that, you know, have PhDs in numerical computing or
通常是数值计算方面的博士
13
00:00:29,181 --> 00:00:32,075
who really specialize in numerical computing.
或者专业人士开发的
14
00:00:32,075 --> 00:00:33,944
And when you're implementing machine
而当你实现机器学习算法时
15
00:00:33,960 --> 00:00:35,904
learning algorithms, if you're able
如果你能
16
00:00:35,930 --> 00:00:37,797
to take advantage of these
好好利用这些
17
00:00:37,810 --> 00:00:39,296
linear algebra libraries or these
线性代数库或者说
18
00:00:39,310 --> 00:00:41,600
numerical linear algebra libraries and
数值线性代数库
19
00:00:41,620 --> 00:00:43,387
make routine calls to them
并联合调用它们
20
00:00:43,387 --> 00:00:45,172
rather than sort of writing code
而不是自己去做那些
21
00:00:45,180 --> 00:00:48,029
yourself to do things that these libraries could be doing.
函数库可以做的事情
22
00:00:48,040 --> 00:00:49,612
If you do that then
如果是这样的话 那么
23
00:00:49,612 --> 00:00:51,872
you'll often find that, first, it's more efficient.
通常你会发现 首先 这样更有效
24
00:00:51,880 --> 00:00:53,179
So, things just run more quickly and
也就是说运行速度更快
25
00:00:53,179 --> 00:00:54,891
take better advantage of
并且更好地利用
26
00:00:54,891 --> 00:00:56,631
any parallel hardware your computer
你的计算机里可能有的一些并行硬件系统
27
00:00:56,631 --> 00:00:58,254
may have and so on.
等等
28
00:00:58,270 --> 00:01:00,533
And second, it also means
第二 这也意味着
29
00:01:00,540 --> 00:01:03,075
that you end up with less code that you need to write.
你可以用更少的代码来实现你需要的功能
30
00:01:03,075 --> 00:01:04,962
So have a simpler implementation
因此 实现的方式更简单
31
00:01:04,962 --> 00:01:08,532
that is, therefore, maybe also more likely to be bug free.
代码出现问题的可能性也就越小
32
00:01:08,550 --> 00:01:10,534
And as a concrete example.
举个具体的例子
33
00:01:10,570 --> 00:01:12,726
Rather than writing code
与其自己写代码
34
00:01:12,726 --> 00:01:15,061
yourself to multiply matrices, if
做矩阵乘法
35
00:01:15,061 --> 00:01:16,300
you let Octave do it by
如果你只在Octave中
36
00:01:16,300 --> 00:01:18,145
typing a times b,
输入 a乘以b
37
00:01:18,145 --> 00:01:19,833
that will use a very efficient
就是一个非常有效的
38
00:01:19,833 --> 00:01:22,318
routine to multiply the 2 matrices.
两个矩阵相乘的程序
39
00:01:22,340 --> 00:01:23,985
And there's a bunch of examples like
有很多例子可以说明
40
00:01:24,010 --> 00:01:27,220
these where you use appropriate vectorized implementations.
如果你用合适的向量化方法来实现
41
00:01:27,220 --> 00:01:30,062
You get much simpler code, and much more efficient code.
你就会有一个简单得多 也有效得多的代码
42
00:01:30,280 --> 00:01:33,071
Let's look at some examples.
让我们来看一些例子
43
00:01:33,071 --> 00:01:34,937
Here's a usual hypothesis of linear
这是一个常见的线性回归假设函数
44
00:01:34,937 --> 00:01:36,415
regression and if you
如果
45
00:01:36,415 --> 00:01:37,348
want to compute H of
你想要计算 h(x)
46
00:01:37,348 --> 00:01:40,032
X, notice that there is a sum on the right.
注意到右边是求和
47
00:01:40,032 --> 00:01:41,130
And so one thing you could
那么你可以
48
00:01:41,130 --> 00:01:42,775
do is compute the sum
自己计算
49
00:01:42,775 --> 00:01:46,611
from J equals 0 to J equals N yourself.
j =0 到 j = n 的和
50
00:01:46,620 --> 00:01:48,000
Another way to think of this
但换另一种方式来想想
51
00:01:48,000 --> 00:01:49,210
is to think of h
是把 h 看作
52
00:01:49,210 --> 00:01:52,029
of x as theta transpose x
θ 转置乘以 x
53
00:01:52,029 --> 00:01:53,262
and what you can do is
那么
54
00:01:53,262 --> 00:01:55,654
think of this as you know, computing this
你就可以写成
55
00:01:55,660 --> 00:01:57,823
inner product between 2 vectors
两个向量的内积
56
00:01:57,840 --> 00:02:00,135
where theta is, you know, your
其中 θ 就是
57
00:02:00,135 --> 00:02:01,784
vector say theta 0, theta 1,
θ0 θ1
58
00:02:01,800 --> 00:02:04,812
theta 2 if you have 2 features.
θ2 如果你有两个特征量
59
00:02:04,812 --> 00:02:06,410
If n equals 2 and if
如果 n 等于2 并且如果
60
00:02:06,450 --> 00:02:08,133
you think of x as this
你把 x 看作
61
00:02:08,133 --> 00:02:11,810
vector, x0, x1, x2
x0 x1 x2
62
00:02:11,884 --> 00:02:13,952
and these 2 views can
这两种思考角度
63
00:02:13,952 --> 00:02:17,539
give you 2 different implementations.
会给你两种不同的实现方式
64
00:02:17,560 --> 00:02:18,909
Here's what I mean.
比如说
65
00:02:18,909 --> 00:02:21,012
Here's an unvectorized implementation for
这是未向量化的代码实现方式
66
00:02:21,040 --> 00:02:22,454
how to compute h of
计算 h(x) 是未向量化的
67
00:02:22,454 --> 00:02:26,120
x and by unvectorized I mean, without vectorization.
我的意思是 没有被向量化
68
00:02:26,130 --> 00:02:29,479
We might first initialize, you know, prediction to be 0.0.
我们可能首先要初始化变量 prediction 的值为0.0
69
00:02:29,479 --> 00:02:32,383
This is going to eventually, the
而这个
70
00:02:32,383 --> 00:02:34,287
prediction is going to be
变量 prediction 的最终结果就是
71
00:02:34,300 --> 00:02:36,090
h of x and then
h(x) 然后
72
00:02:36,090 --> 00:02:37,258
I'm going to have a for loop for
我要用一个 for 循环
73
00:02:37,270 --> 00:02:38,354
j equals one through n+1
j 取值 1 到 n+1
74
00:02:38,354 --> 00:02:40,792
prediction gets incremented by
变量prediction 每次就通过
75
00:02:40,792 --> 00:02:41,822
theta j times xj.
自身加上 θ(j) 乘以 x(j) 更新值
76
00:02:41,822 --> 00:02:44,737
So, it's kind of this expression over here.
这个就是算法的代码实现
77
00:02:44,737 --> 00:02:47,223
By the way, I should mention in these
顺便我要提醒一下
78
00:02:47,223 --> 00:02:48,894
vectors right over here, I
这里的向量
79
00:02:48,900 --> 00:02:51,102
had these vectors being zero-indexed.
我用的下标是 0
80
00:02:51,110 --> 00:02:52,600
So, I had theta 0 theta 1,
所以我有 θ0 θ1
81
00:02:52,600 --> 00:02:54,390
theta 2, but because MATLAB
θ2 但因为 MATLAB
82
00:02:54,390 --> 00:02:56,713
is one-indexed, theta 0
的下标从1开始 在 MATLAB 中 θ0
83
00:02:56,713 --> 00:02:58,019
in MATLAB, we might
我们可能会
84
00:02:58,019 --> 00:03:00,204
end up representing as theta
用 θ1 来表示
85
00:03:00,204 --> 00:03:02,042
1 and this second element
这第二个元素
86
00:03:02,042 --> 00:03:04,392
ends up as theta
最后就会变成
87
00:03:04,392 --> 00:03:05,862
2 and this third element
θ2 而第三个元素
88
00:03:05,880 --> 00:03:08,002
may end up as theta
最终可能就用
89
00:03:08,002 --> 00:03:09,952
3 just because vectors in
θ3 表示 因为
90
00:03:09,960 --> 00:03:11,998
MATLAB are indexed starting
MATLAB 中的下标从1开始
91
00:03:11,998 --> 00:03:13,525
from 1 even though our real
即使我们实际的
92
00:03:13,525 --> 00:03:15,436
theta and x here starting,
θ 和 x 的下标从0开始
93
00:03:15,450 --> 00:03:17,002
indexing from 0, which
这就是为什么
94
00:03:17,002 --> 00:03:18,785
is why here I have a for loop
这里我的 for 循环
95
00:03:18,785 --> 00:03:20,498
j goes from 1 through n+1
j 取值从 1 直到 n+1
96
00:03:20,498 --> 00:03:22,225
rather than j go through
而不是
97
00:03:22,225 --> 00:03:26,243
0 up to n, right? But
从 0 到 n 清楚了吗?
98
00:03:26,300 --> 00:03:27,870
so, this is an
但这是一个
99
00:03:27,870 --> 00:03:29,571
unvectorized implementation in that we
未向量化的代码实现方式
100
00:03:29,571 --> 00:03:31,373
have a for loop that sums up
我们用一个 for 循环
101
00:03:31,373 --> 00:03:34,018
the n elements of the sum.
对 n 个元素进行加和
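(For reference, a minimal Octave sketch of the unvectorized loop just described; it assumes theta and x are already defined as (n+1)-dimensional column vectors and that n holds the number of features, with theta(1) standing for theta 0 because of Octave's 1-based indexing, as discussed above.)

    prediction = 0.0;
    for j = 1:n+1,
      % accumulate theta_j * x_j, shifted by one index because Octave counts from 1
      prediction = prediction + theta(j) * x(j);
    end;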
102
00:03:34,050 --> 00:03:35,646
In contrast, here's how you
作为比较 接下来是
103
00:03:35,646 --> 00:03:38,400
write a vectorized implementation which
向量化的代码实现
104
00:03:38,410 --> 00:03:39,959
is that you would think
你把
105
00:03:39,959 --> 00:03:42,618
of x and theta
x 和 θ
106
00:03:42,618 --> 00:03:43,955
as vectors, and you just set
看做向量 而你只需要
107
00:03:43,955 --> 00:03:46,039
prediction equals theta transpose
令变量 prediction 等于 θ转置
108
00:03:46,039 --> 00:03:48,347
times x. You're just computing like so.
乘以 x 你就可以这样计算
109
00:03:48,360 --> 00:03:51,011
Instead of writing all these
与其写所有这些
110
00:03:51,011 --> 00:03:52,966
lines of code with the for loop,
for 循环的代码
111
00:03:52,966 --> 00:03:54,242
you instead have one line
你只需要一行代码
112
00:03:54,242 --> 00:03:56,648
of code and what this
这行代码
113
00:03:56,648 --> 00:03:57,555
line of code on the right
右边所做的
114
00:03:57,555 --> 00:03:59,237
will do is use
就是
115
00:03:59,237 --> 00:04:01,829
Octave's highly optimized numerical
利用 Octave 的高度优化的数值
116
00:04:01,840 --> 00:04:03,859
linear algebra routines to compute
线性代数算法来计算
117
00:04:03,859 --> 00:04:06,245
this inner product between the
两个向量的内积
118
00:04:06,245 --> 00:04:08,186
two vectors, theta and X. And not
θ 以及 x
119
00:04:08,190 --> 00:04:10,182
only is the vectorized implementation
这样向量化的实现不仅仅是更简单
120
00:04:10,182 --> 00:04:14,664
simpler, it will also run more efficiently.
它运行起来也将更加高效
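(The vectorized version described here is a single line in Octave; a minimal sketch, again assuming theta and x are column vectors of the same length.)

    % inner product theta' * x, computed by Octave's optimized linear algebra routines
    prediction = theta' * x;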
121
00:04:15,820 --> 00:04:17,792
So, that was Octave, but
刚才讲的是 Octave
122
00:04:17,792 --> 00:04:19,912
the issue of vectorization applies to
而向量化的方法
123
00:04:19,920 --> 00:04:22,020
other programming languages as well.
在其他编程语言中同样可以实现
124
00:04:22,040 --> 00:04:24,947
Let's look at an example in C++.
让我们来看一个 C++ 的例子
125
00:04:24,947 --> 00:04:27,965
Here's what an unvectorized implementation might look like.
这就是未向量化的代码实现可能看起来的样子
126
00:04:27,965 --> 00:04:31,395
We again initialize prediction, you know, to
我们再次初始化变量 prediction 为 0.0
127
00:04:31,395 --> 00:04:32,518
0.0 and then we now have a for
然后我们现在有一个 for 循环
128
00:04:32,518 --> 00:04:34,508
loop for j equals 0 up to
从 j 等于 0 直到 n
129
00:04:34,508 --> 00:04:36,819
n. Prediction + equals
变量 prediction +=
130
00:04:36,830 --> 00:04:38,546
theta j times x j where
theta[j] 乘以 x[j]
131
00:04:38,560 --> 00:04:42,777
again, you have this sort of for loop that you write yourself.
再一次 你有这样的自己写的 for 循环
132
00:04:42,777 --> 00:04:44,843
In contrast, using a good
与此相反 使用一个比较好的
133
00:04:44,850 --> 00:04:46,498
numerical linear algebra library in
C++ 数值线性代数库
134
00:04:46,498 --> 00:04:48,965
C++, you could use
你就可以用这个方程
135
00:04:48,990 --> 00:04:54,440
write the function like or rather.
这样来写这个函数 或者说
136
00:04:54,560 --> 00:04:56,533
In contrast, using a good
与此相反 使用较好的
137
00:04:56,533 --> 00:04:58,152
numerical linear algebra library in
C++ 数值线性代数库
138
00:04:58,152 --> 00:05:00,686
C++, you can instead
你可以写出这样的代码
139
00:05:00,686 --> 00:05:02,470
write code that might look like this.
写出像这样的代码
140
00:05:02,470 --> 00:05:03,985
So, depending on the details
因此取决于你的
141
00:05:03,985 --> 00:05:05,595
of your numerical linear algebra
数值线性代数库的内容
142
00:05:05,595 --> 00:05:06,790
library, you might be
你可以有一个
143
00:05:06,830 --> 00:05:08,580
able to have an object that
这样一个对象
144
00:05:08,580 --> 00:05:09,918
is a C++ object which is
C++ 对象
145
00:05:09,918 --> 00:05:11,328
vector theta and a C++
theta 和一个 C++
146
00:05:11,350 --> 00:05:13,436
object which is a vector X,
对象 向量 x
147
00:05:13,436 --> 00:05:15,552
and you just take theta dot
你只需要用 theta.transpose()
148
00:05:15,552 --> 00:05:18,115
transpose times x where
乘以 x
149
00:05:18,120 --> 00:05:20,092
this times uses C++ operator
这里的乘法使用了 C++ 的运算符
150
00:05:20,092 --> 00:05:22,028
overloading so
重载 这样
151
00:05:22,028 --> 00:05:26,156
that you can just multiply these two vectors in C++.
你只需要在 C++ 中将两个向量相乘
152
00:05:26,156 --> 00:05:28,091
And depending on, you know, the details
根据
153
00:05:28,110 --> 00:05:29,515
of your numerical linear algebra
你所使用的数值线性代数库的细节
154
00:05:29,515 --> 00:05:30,855
library, you might end
你最终使用的代码表达方式
155
00:05:30,855 --> 00:05:31,894
up using a slightly different
可能会有些许不同
156
00:05:31,894 --> 00:05:33,636
syntax, but by relying
但是通过
157
00:05:33,636 --> 00:05:35,758
on a library to do this inner product.
一个库来做内积
158
00:05:35,760 --> 00:05:37,064
You can get a much simpler piece
你可以得到一段更简单
159
00:05:37,064 --> 00:05:40,623
of code and a much more efficient one.
更有效的代码
160
00:05:40,623 --> 00:05:43,582
Let's now look at a more sophisticated example.
现在 让我们来看一个更为复杂的例子
161
00:05:43,582 --> 00:05:45,015
Just to remind you here's our
提醒一下
162
00:05:45,015 --> 00:05:46,792
update rule for gradient descent
这是线性回归算法梯度下降的更新规则
163
00:05:46,792 --> 00:05:48,794
for linear regression and so,
所以
164
00:05:48,794 --> 00:05:50,488
we update theta j using this
我们用这条规则对 j 等于 0 1 2 等等的所有值
165
00:05:50,488 --> 00:05:53,672
rule for all values of J equals 0, 1, 2, and so on.
更新对应的 θ j
166
00:05:53,672 --> 00:05:56,259
And if I just write
我只是
167
00:05:56,260 --> 00:05:58,206
out these equations for
用 θ0 θ1 θ2 来写方程
168
00:05:58,206 --> 00:06:00,048
theta 0, theta 1, theta 2.
θ0 θ1 θ2
169
00:06:00,048 --> 00:06:02,173
Assuming we have two features.
那就是假设我们有两个特征量
170
00:06:02,173 --> 00:06:03,469
So N equals 2.
所以 n等于2
171
00:06:03,469 --> 00:06:04,607
Then these are the updates we
这些都是我们需要对
172
00:06:04,610 --> 00:06:07,388
perform to theta zero, theta one, theta two.
theta0 theta1 theta2的更新
173
00:06:07,410 --> 00:06:08,982
where you might remember my
你可能还记得
174
00:06:08,982 --> 00:06:10,825
saying in an earlier video
在以前的视频中说过
175
00:06:10,825 --> 00:06:14,783
that these should be simultaneous updates.
这些都应该是同步更新
176
00:06:14,783 --> 00:06:16,268
So let's see if
因此 让我们来看看
177
00:06:16,268 --> 00:06:17,725
we can come up with a
我们是否可以拿出一个
178
00:06:17,725 --> 00:06:20,723
vectorized implementation of this.
向量化的代码实现
179
00:06:20,740 --> 00:06:22,598
Here are my same 3 equations written
这里是和之前相同的三个方程
180
00:06:22,598 --> 00:06:24,182
on a slightly smaller font and you
只不过用更小一些的字体写出来
181
00:06:24,182 --> 00:06:25,517
can imagine that one way
你可以想象
182
00:06:25,520 --> 00:06:26,716
to implement these three lines
实现这三个方程的方式之一
183
00:06:26,720 --> 00:06:27,798
of code is to have a
就是用
184
00:06:27,798 --> 00:06:28,968
for loop that says, you
一个 for 循环
185
00:06:28,968 --> 00:06:31,682
know, for j equals 0,
就是让 j 等于0
186
00:06:31,682 --> 00:06:33,305
1 through 2, you update
1 2
187
00:06:33,305 --> 00:06:35,603
theta J or something like that.
来更新 θ j
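(A minimal Octave sketch of the for-loop version just mentioned, written so the updates stay simultaneous as required earlier. It assumes X is the m-by-(n+1) design matrix, y the m-vector of targets, alpha the learning rate, and theta the current parameter vector, 1-indexed so theta(1) is theta 0; the gradient term is the standard one for linear regression.)

    temp = theta;                      % copy so every update uses the old theta
    for j = 1:n+1,
      % partial derivative of the cost with respect to theta_j
      grad_j = (1/m) * sum((X * theta - y) .* X(:, j));
      temp(j) = theta(j) - alpha * grad_j;
    end;
    theta = temp;                      % simultaneous update of all theta_j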
188
00:06:35,603 --> 00:06:36,760
But instead, let's come up
但让我们
189
00:06:36,760 --> 00:06:40,975
with a vectorized implementation and see if we can have a simpler way.
用向量化的方式来实现 并看看我们是否能够有一个更简单的方法
190
00:06:40,975 --> 00:06:42,711
So, basically compress these three
也就是把这三行代码
191
00:06:42,757 --> 00:06:44,314
lines of code or a
或者一个 for 循环
192
00:06:44,314 --> 00:06:48,518
for loop that, you know, effectively does these 3 steps, 1 step at a time.
一步一步地完成这三个更新
193
00:06:48,518 --> 00:06:49,688
Let's see if we can take these 3
让我们看看能否把这三步
194
00:06:49,688 --> 00:06:51,402
steps and compress them into
并将它们压缩成
195
00:06:51,402 --> 00:06:53,972
1 line of vectorized code.
一行向量化的代码
196
00:06:53,976 --> 00:06:55,476
Here's the idea.
这个想法是
197
00:06:55,480 --> 00:06:56,462
What I'm going to do is I'm
我打算
198
00:06:56,462 --> 00:06:59,131
going to think of theta
把 θ 看做一个向量
199
00:06:59,131 --> 00:07:00,633
as a vector and I'm
然后我用
200
00:07:00,633 --> 00:07:04,214
going to update theta as theta
θ 减去