1 file changed: +0 −13 lines

tensorflow/g3doc/tutorials/deep_cnn

@@ -429,19 +429,6 @@ version of the training script parallelizes the model across multiple GPU cards.
 python cifar10_multi_gpu_train.py --num_gpus=2
 ```
 
-The training script should output:
-
-``` shell
-Filling queue with 20000 CIFAR images before starting to train. This will take a few minutes.
-2015-11-04 11:45:45.927302: step 0, loss = 4.68 (2.0 examples/sec; 64.221 sec/batch)
-2015-11-04 11:45:49.133065: step 10, loss = 4.66 (533.8 examples/sec; 0.240 sec/batch)
-2015-11-04 11:45:51.397710: step 20, loss = 4.64 (597.4 examples/sec; 0.214 sec/batch)
-2015-11-04 11:45:54.446850: step 30, loss = 4.62 (391.0 examples/sec; 0.327 sec/batch)
-2015-11-04 11:45:57.152676: step 40, loss = 4.61 (430.2 examples/sec; 0.298 sec/batch)
-2015-11-04 11:46:00.437717: step 50, loss = 4.59 (406.4 examples/sec; 0.315 sec/batch)
-...
-```
-
 Note that the number of GPU cards used defaults to 1. Additionally, if only 1
 GPU is available on your machine, all computations will be placed on it, even if
 you ask for more.
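The behaviour kept in the context lines above (a `--num_gpus` flag that defaults to 1, and graceful fall-back when fewer GPUs exist than requested) is typically wired up with a command-line flag plus soft device placement. The following is a minimal illustrative sketch, not the actual `cifar10_multi_gpu_train.py`: it assumes the TensorFlow 1.x-era `tf.app.flags` and `tf.ConfigProto` APIs, and the toy loss inside `train()` is a stand-in for the real model towers.

```python
# Illustrative sketch (not the tutorial's script): a --num_gpus flag that
# defaults to 1, combined with soft device placement so that asking for more
# GPUs than exist still runs on the devices that are available.
import tensorflow as tf  # assumes TensorFlow 1.x

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('num_gpus', 1, 'How many GPUs to use.')


def train():
  with tf.Graph().as_default():
    tower_losses = []
    for i in range(FLAGS.num_gpus):
      # Pin one model replica ("tower") to each requested GPU.
      with tf.device('/gpu:%d' % i):
        x = tf.random_normal([128, 10])          # stand-in for a real batch
        tower_losses.append(tf.reduce_mean(tf.square(x)))
    total_loss = tf.add_n(tower_losses)

    # allow_soft_placement=True lets TensorFlow place an op elsewhere when
    # its requested device (e.g. '/gpu:1' on a single-GPU machine) does not
    # exist, which is why requesting extra GPUs does not fail.
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(config=config) as sess:
      print(sess.run(total_loss))


def main(argv=None):
  train()


if __name__ == '__main__':
  tf.app.run()
```

Run it the same way as the tutorial script, e.g. `python sketch.py --num_gpus=2`; with the default of 1, everything is placed on a single GPU (or the CPU if none is visible).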