TensorFlow Serving on ARM
=========================

TensorFlow Serving cross-compile project targeting linux on common arm cores from
a linux amd64 / x86_64 build host.

## Contents
* [Overview](#overview)
* [Docker Images](#docker-images)
* [Build From Source](#build-from-source)
* [Legacy Builds](#legacy-builds)
* [Disclosures](#disclosures)
* [Disclaimer](#disclaimer)

## Overview

This project provides an alternative build strategy for
[tensorflow/serving](https://github.com/tensorflow/serving)
with the intention of making it relatively easy to cross-build CPU-optimized model server
docker images targeting common linux arm platforms. Additionally, a set of docker
image build targets is maintained, built for some of the popular linux arm platforms, and hosted on
Docker Hub.

**Upstream Project:** [tensorflow/serving](https://github.com/tensorflow/serving)

## Docker Images

**Hosted on Docker Hub:** [emacski/tensorflow-serving](https://hub.docker.com/r/emacski/tensorflow-serving)

**Usage Documentation:** [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker)

**Note:** The project images are designed to be functionally equivalent to their upstream counterparts.

### Quick Start

On many consumer / developer 64-bit and 32-bit arm platforms you can simply:
```sh
docker pull emacski/tensorflow-serving:latest
# or
docker pull emacski/tensorflow-serving:2.2.0
```

Refer to [TensorFlow Serving with Docker](https://www.tensorflow.org/tfx/serving/docker)
for configuration and setting up a model for serving.
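
As a minimal usage sketch following the upstream Docker conventions (the gRPC port 8500,
REST port 8501, and `MODEL_NAME` variable come from upstream TensorFlow Serving; the model
path `/path/to/my_model` and name `my_model` are placeholders):
```sh
# serve a SavedModel from the host over gRPC (8500) and REST (8501)
docker run -d -p 8500:8500 -p 8501:8501 \
    -v /path/to/my_model:/models/my_model \
    -e MODEL_NAME=my_model \
    emacski/tensorflow-serving:latest
```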

### Images

#### `emacski/tensorflow-serving:[Tag]`

| **Tag** | **ARM Core Compatibility** |
|---------|----------------------------|
Example
```bash
# on beaglebone black
docker pull emacski/tensorflow-serving:2.2.0-linux_arm_armv7-a_neon_vfpv3
```

### Aliases

#### `emacski/tensorflow-serving:[Alias]`

| **Alias** | **Tag** | **Notes** |
|-----------|---------|-----------|
| <nobr>`latest-linux_arm64`</nobr> | <nobr>`[Latest-Version]-linux_arm64`</nobr> | |
| <nobr>`latest-linux_arm`</nobr> | <nobr>`[Latest-Version]-linux_arm`</nobr> | |

Examples
```bash
# on Raspberry Pi 3 B+
docker pull emacski/tensorflow-serving:2.2.0-linux_arm64
# or
docker pull emacski/tensorflow-serving:latest-linux_arm64
```
| <nobr>`emacski/tensorflow-serving:latest-linux_arm64`</nobr> | `linux` | `arm64` |
| <nobr>`emacski/tensorflow-serving:latest-linux_amd64`</nobr> | `linux` | `amd64` |

Examples
```bash
# on Raspberry Pi 3 B+
docker pull emacski/tensorflow-serving
# or
docker pull emacski/tensorflow-serving:latest
# the actual image used is emacski/tensorflow-serving:latest-linux_arm64
# itself actually being emacski/tensorflow-serving:[Latest-Version]-linux_arm64_armv8-a
```
Example
```sh
# on Raspberry Pi 3 B+
docker pull emacski/tensorflow-serving:2.2.0
# the actual image used is emacski/tensorflow-serving:2.2.0-linux_arm64
# itself actually being emacski/tensorflow-serving:2.2.0-linux_arm64_armv8-a
```

### Debug Images

As of version `2.0.0`, debug images are also built and published to Docker Hub.
These images are identical to the non-debug images with the addition of busybox
utils. The utils are located at `/busybox/bin`, which is also included in the
image's system `PATH`.

For any image above, add `debug` after the `[Version]` and before the platform
suffix (if one is required) in the image tag.

```sh
# multi-arch
docker pull emacski/tensorflow-serving:2.2.0-debug
# specific image
docker pull emacski/tensorflow-serving:2.2.0-debug-linux_arm64_armv8-a
# specific alias
docker pull emacski/tensorflow-serving:latest-debug-linux_arm64
```

Example Usage
```sh
# start a new container with an interactive ash (busybox) shell
docker run -ti --entrypoint /busybox/bin/sh emacski/tensorflow-serving:latest-debug-linux_arm64
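# open a busybox shell in an already running container ("my_running_container" is a placeholder name)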
docker exec -ti my_running_container /busybox/bin/sh
```

[Back to Top](#contents)

## Build From Source

### Build / Development Environment

**Build Host Platform:** `linux_amd64` (`x86_64`)

**Build Host Requirements:**
* git
* docker

For each version / release, a self-contained build environment `devel` image is
created and published. This image contains all necessary tools and dependencies
required for building project artifacts.

```bash
git clone git@github.com:emacski/tensorflow-serving-arm.git
cd tensorflow-serving-arm

# pull devel
docker pull emacski/tensorflow-serving:latest-devel
# or build devel
docker build -t emacski/tensorflow-serving:latest-devel -f tensorflow_model_server/tools/docker/Dockerfile .
```

All of the build examples assume that the commands are executed within the `devel`
container:
```bash
# interactive shell
docker run --rm -ti \
    -w /code -v $PWD:/code \
    -v /var/run/docker.sock:/var/run/docker.sock \
    emacski/tensorflow-serving:latest-devel /bin/bash
# or
# non-interactive
docker run --rm \
    -w /code -v $PWD:/code \
    -v /var/run/docker.sock:/var/run/docker.sock \
    emacski/tensorflow-serving:latest-devel [example_command]
```

### Config Groups

The following bazel config groups represent the build options used for each target
platform (found in `.bazelrc`). These config groups should be treated as mutually
exclusive with each other and only one should be specified in a build command as
a `--config` option.

| Name | Type | Info |
|------|------|------|
| `linux_amd64` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
| `linux_arm64` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
| `linux_arm` | Base | can be used for [custom builds](#build-image-for-custom-arm-target) |
| **`linux_amd64_avx_sse4.2`** | **Project** | inherits from `linux_amd64` |
| **`linux_arm64_armv8-a`** | **Project** | inherits from `linux_arm64` |
| **`linux_arm64_armv8.2-a`** | **Project** | inherits from `linux_arm64` |
| **`linux_arm_armv7-a_neon_vfpv3`** | **Project** | inherits from `linux_arm` |
| **`linux_arm_armv7-a_neon_vfpv4`** | **Project** | inherits from `linux_arm` |

### Build Project Image Target

#### `//tensorflow_model_server:project_image.tar`

Build a project-maintained model server docker image targeting one of the platforms
specified by a project config group as listed above. The resulting image can be
found as a tar file in bazel's output directory.

```bash
bazel build //tensorflow_model_server:project_image.tar --config=linux_arm64_armv8-a
# or
bazel build //tensorflow_model_server:project_image.tar --config=linux_arm_armv7-a_neon_vfpv4
# each build creates a docker loadable image tar in bazel's output dir
```

### Load Project Image Target

#### `//tensorflow_model_server:project_image`

Same as above, but additionally bazel attempts to load the resulting image onto
the host, making it immediately available to the host's docker.

**Note:** host docker must be available to the build container for final images
to be available on the host automatically.

```bash
bazel run //tensorflow_model_server:project_image --config=linux_arm64_armv8-a
# or
bazel run //tensorflow_model_server:project_image --config=linux_arm_armv7-a_neon_vfpv4
```

### Build Project Binary Target

#### `//tensorflow_model_server`

Build the model server binary targeting one of the platforms specified by a project
config group as listed above.

**Note:** It's not recommended to use these binaries as standalone executables
as they are built specifically to run in their respective containers, but they may
work on Debian 10-like systems.

```bash
bazel build //tensorflow_model_server --config=linux_arm64_armv8-a
# or
bazel build //tensorflow_model_server --config=linux_arm_armv7-a_neon_vfpv4
```

### Build Image for Custom ARM Target

#### `//tensorflow_model_server:custom_image.tar`

Can be used to fine-tune builds for specific platforms. Use a "Base" type
[config group](#config-groups) and custom compile options. For `linux_arm64` and
`linux_arm` options see: https://releases.llvm.org/10.0.0/tools/clang/docs/CrossCompilation.html

```bash
# building an image tuned for Cortex-A72
bazel build //tensorflow_model_server:custom_image.tar --config=linux_arm64 --copt=-mcpu=cortex-a72
# look for custom_image.tar in bazel's output directory
```

[Back to Top](#contents)

## Legacy Builds

* `v1.13.0`
* `v1.14.0`

**Note:** a tag exists for both `v1.14.0` and `1.14.0` as this was the current
upstream tensorflow/serving version when this project was refactored.

### Legacy Docker Images
The following tensorflow serving versions were built using the legacy project
structure and are still available on DockerHub.

* `emacski/tensorflow-serving:[Version]-arm32v7`
* `emacski/tensorflow-serving:[Version]-arm32v7_vfpv3`

Versions: `1.11.1`, `1.12.0`, `1.13.0`, `1.14.0`
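
As a sketch, pulling a legacy image combines one of the versions above with one of the
legacy tag patterns (this particular version / variant pairing is only illustrative):
```sh
docker pull emacski/tensorflow-serving:1.14.0-arm32v7_vfpv3
```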

[Back to Top](#contents)

## Disclosures

This project uses llvm / clang toolchains for c++ cross-compiling. By
default, the model server is statically linked to llvm's libc++. To dynamically
link against gnu libstdc++, include the build option `--config=gnulibcpp`.
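
For example, a sketch of adding the gnu libstdc++ option on top of one of the project
config groups listed earlier (the config group chosen here is only illustrative):
```bash
bazel build //tensorflow_model_server --config=linux_arm64_armv8-a --config=gnulibcpp
```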

The base docker images used in this project come from another project I
maintain called [Discolix](https://github.com/discolix/discolix) (distroless for arm).

[Back to Top](#contents)

## Disclaimer

* Not an ARM expert
* Not a Bazel expert (but I know a little bit more now)
* Not a TensorFlow expert
* Personal project, so testing is minimal

Should any of those scare you, I recommend NOT using anything here.
Additionally, any help to improve things is always appreciated.

[Back to Top](#contents)