@@ -16,22 +16,22 @@ quick links here:
We currently maintain three Docker container images:
- * `b.gcr.io/tensorflow/tensorflow`, which is a minimal VM with TensorFlow and
+ * `gcr.io/tensorflow/tensorflow`, which is a minimal VM with TensorFlow and
   all dependencies.

- * `b.gcr.io/tensorflow/tensorflow-full`, which contains a full source
+ * `gcr.io/tensorflow/tensorflow-full`, which contains a full source
   distribution and all required libraries to build and run TensorFlow from
   source.

- * `b.gcr.io/tensorflow/tensorflow-full-gpu`, which is the same as the previous
+ * `gcr.io/tensorflow/tensorflow-full-gpu`, which is the same as the previous
   container, but built with GPU support.
## Running the container
Each of the containers is published to a Docker registry; for the non-GPU
containers, running is as simple as
-     $ docker run -it -p 8888:8888 b.gcr.io/tensorflow/tensorflow
+     $ docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow

For the container with GPU support, we require the user to make the appropriate
NVidia libraries available on their system, as well as providing mappings so
@@ -40,7 +40,7 @@ accomplished via
$ export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda.* | xargs -I{} echo '-v {}:{}')
$ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
-     $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-devel-gpu
+     $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES gcr.io/tensorflow/tensorflow-devel-gpu

Alternatively, you can use the `docker_run_gpu.sh` script in this directory.
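
The steps above can be sketched as a small wrapper in the spirit of that
script. This is a hypothetical reconstruction, not the actual
`docker_run_gpu.sh` from this directory: the function name is invented, and
the sketch only prints the assembled command rather than executing it.

```shell
#!/bin/sh
# Sketch of a docker_run_gpu.sh-style wrapper: collect the host's CUDA
# libraries as -v volume mappings and the NVidia device nodes as --device
# mappings, then assemble the docker run invocation shown above.
gpu_docker_cmd() {
    # Both globs may be empty on a non-GPU host; errors are suppressed so
    # the command still assembles (with no extra flags) in that case.
    CUDA_SO=$(ls /usr/lib/x86_64-linux-gnu/libcuda.* 2>/dev/null \
              | xargs -I{} echo "-v {}:{}")
    DEVICES=$(ls /dev/nvidia* 2>/dev/null | xargs -I{} echo "--device {}:{}")
    echo docker run -it -p 8888:8888 $CUDA_SO $DEVICES \
         gcr.io/tensorflow/tensorflow-devel-gpu
}

# Print the assembled command; the real wrapper would execute it instead.
gpu_docker_cmd
```

Printing instead of executing makes the mappings easy to inspect before
committing to a run; swapping the final `echo` for an `exec` yields the
executing form.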