# [MobiCom '23] AccuMO: Accuracy-Centric Multitask Offloading in Edge-Assisted Mobile Augmented Reality
AccuMO is an edge-assisted multi-task AR framework that dynamically schedules the offloading of multiple compute-intensive DNN tasks of an AR app from a mobile device, while optimizing the overall DNN inference accuracy across the tasks.
This repository contains the code and scripts to run and evaluate AccuMO, download links to the pretrained models, and an example dataset that evaluates AccuMO on two tasks: depth estimation and visual odometry (VO).
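For intuition, the scheduling problem can be sketched as follows. This is a toy illustration only, not AccuMO's actual `mpc` scheduler (which predicts accuracy over a planning horizon); the function name and the per-task decay values are hypothetical:

```python
# Toy sketch of accuracy-centric task selection (illustration only; AccuMO's
# real "mpc" scheduler uses learned accuracy models over a planning horizon).

def pick_task(staleness, decay_rate):
    """Pick the task whose accuracy is predicted to degrade the most.

    staleness:  frames since each task's result was last refreshed by offloading
    decay_rate: assumed per-frame accuracy loss while a task's result is stale
    """
    predicted_loss = {t: staleness[t] * decay_rate[t] for t in staleness}
    return max(predicted_loss, key=predicted_loss.get)

# Hypothetical numbers: depth is 3 frames stale, VO is 1 frame stale.
staleness = {"depth": 3, "vo": 1}
decay_rate = {"depth": 0.02, "vo": 0.05}
print(pick_task(staleness, decay_rate))  # depth: 0.06 loss > vo: 0.05 loss
```

The key property this captures is that the scheduler trades off *both* tasks' accuracy rather than always offloading the same task.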
Hardware requirements:

- An Android phone
  - Tested on a Pixel 5 with Android 12, but other phones should also work.
- A Mac/Linux laptop
  - With 10 GB of free disk space.
- A Linux server with an NVIDIA GPU
  - Tested on Ubuntu 18.04.6 LTS with an NVIDIA RTX 2080 Ti, but other setups should also work.
  - The server needs an IP address reachable from the phone.
  - The server can be the same physical machine as the laptop, if the laptop meets the hardware requirements.
Setup:

**Laptop:**
- Clone this repository to any directory.
- Download the `dataset` (290 MB) and `pretrained_models` (2.57 GB) folders and place them in the top-level directory, i.e., the top-level directory will look like:

  ```
  AccuMO/
  ├─ client/
  ├─ server/
  ├─ scripts/
  ├─ dataset/
  ├─ pretrained_models/
  ├─ README.md
  ```
- Convert the downloaded RGB frames to YUV format:

  ```bash
  # (from the top-level directory)
  ./scripts/run_convert_to_yuv.sh
  ```

  You are expected to see the following output; it will take several minutes for all frames to be processed:

  ```
  Processing frame: 0000000576.jpg
  Processing frame: 0000000578.jpg
  Processing frame: 0000000580.jpg
  ...
  ```
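For reference, the colorspace transform behind such a conversion is the standard BT.601 RGB-to-YUV matrix. A minimal sketch follows (assumptions: full-range BT.601 with no chroma subsampling; the actual script's output layout, e.g. NV21/I420 planes, may differ):

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) uint8 RGB image to full-range BT.601 YUV (float).

    Sketch only: illustrates the per-pixel transform, not the exact plane
    layout produced by scripts/run_convert_to_yuv.sh.
    """
    m = np.array([[0.299, 0.587, 0.114],    # Y  (luma)
                  [-0.169, -0.331, 0.5],    # U  (blue-difference chroma)
                  [0.5, -0.419, -0.081]])   # V  (red-difference chroma)
    yuv = rgb.astype(np.float64) @ m.T
    yuv[..., 1:] += 128.0                   # center chroma at 128
    return yuv

white = np.full((1, 1, 3), 255, dtype=np.uint8)
print(rgb_to_yuv(white)[0, 0])  # white -> Y=255, U and V near 128
```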
- Connect the phone to the laptop, and upload the YUV frames and model files to the phone via ADB:

  ```bash
  # (from the top-level directory)
  adb shell mkdir -p /sdcard/accumo/dataset
  adb push dataset/yuv/* /sdcard/accumo/dataset
  adb shell mkdir -p /sdcard/accumo/models
  adb push pretrained_models/client/fast-depth-64x224* /sdcard/accumo/models
  cp pretrained_models/client/*.tflite client/app/src/main/ml
  ```
- Install the dependencies for accuracy calculation:

  ```bash
  pip install evo --upgrade --no-binary evo
  pip install scikit-image pandas numpy Pillow
  ```
**Server:**

- Clone this repository to any directory.
- Download the `pretrained_models` folder and place it in the top-level directory.
- Create a conda environment and install the dependencies. Note that environment creation is likely to take a long time (tens of minutes to an hour):

  ```bash
  # (from the top-level directory)
  conda create -n accumo python=3.9 tensorflow-gpu=2.7.0 'pytorch=1.11.0=*cuda*' \
      torchvision cudatoolkit cudatoolkit-dev scikit-image pandas opencv av \
      tqdm matplotlib -c pytorch -c conda-forge
  conda activate accumo
  cd server/flownet2-pytorch && ./install.sh && cd -
  ```
- If the server is behind a firewall, configure it to allow inbound TCP connections on port 9999.
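As a concrete example, if the server happens to use `ufw` (an assumption; adapt the rule for `firewalld`, `iptables`, or your cloud provider's security groups), the configuration might look like:

```shell
# Allow AccuMO's offloading traffic (TCP port 9999) through ufw
sudo ufw allow 9999/tcp
sudo ufw status   # verify the rule is listed
```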
**Phone:**

- Connect the phone to the laptop.
- Connect the phone to any network (Wi-Fi or cellular) that can access the server.
- Follow the steps here to enable Developer options and USB debugging on the phone.
- On the laptop, open the `client/` folder with Android Studio.
- On the phone, grant permissions to the "AccuMO" app:
  - Long-press the "AccuMO" app, click `App info`, then click `Permissions`.
  - Go into `Camera permission` and select `Allow only while using the app`.
  - Go into `Files and media` and select `Allow management of all files`.
Run the experiment:

- On the Linux server, start the server process:

  ```bash
  # (from the top-level directory)
  python -m server.server
  ```

  The server will take around 30 seconds to start. Proceed after the server prints "Server ready".
- On the laptop, run the following command to start offloading the downloaded video, replacing `<SERVER_IP>` with the address of the server. Make sure the AccuMO application is not in the foreground on the phone (i.e., go back to the home screen) before running the command:

  ```bash
  adb shell am start -n com.example.accumo/.MainActivity \
      -e com.example.accumo.VIDEO 2022-04-13-Town06-0060-40-0 \
      -e com.example.accumo.SCHED mpc \
      --ez com.example.accumo.ENABLE_FASTDEPTH true \
      -e com.example.accumo.MODE online \
      -e com.example.accumo.IP <SERVER_IP>
  ```

  You should see the AccuMO app launch and show a white screen, and the server print information about each offloaded frame.

  Wait 40-50 seconds (the length of the example video) for the experiment to finish, which is indicated by the server no longer printing output. Then quit the AccuMO application to go back to the home screen. The resulting depth maps and VO trajectories will be written to files on the phone, under `/sdcard/accumo/results`.
- On the laptop, compute the accuracy:
  - Pull the results from the phone to the laptop into any directory (denoted `<RESULT_DIR>`):

    ```bash
    adb pull /sdcard/accumo/results <RESULT_DIR>
    ```
  - Compute the odometry accuracy:

    ```bash
    python scripts/odom/kitti_error.py \
        dataset/rgb/2022-04-13-Town06-0060-40-0/poses_gt_skipped.txt \
        <RESULT_DIR>/mpc/2022-04-13-Town06-0060-40-0/poses.txt
    ```

    You are expected to see the following output (the exact number may differ):

    ```
    KITTI error: 0.11263842591295233
    ```
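The KITTI odometry metric reported here measures translational drift normalized by distance traveled. A much-simplified sketch of the idea follows (assumptions: the real KITTI evaluation averages over multiple segment lengths and uses full SE(3) poses, not just positions; `avg_rel_trans_error` is a hypothetical helper, not the code in `kitti_error.py`):

```python
import numpy as np

def avg_rel_trans_error(gt, pred):
    """Per-frame translational drift, normalized by distance traveled.

    gt, pred: (N, 3) arrays of camera positions along a trajectory.
    A simplified stand-in for the segment-based KITTI odometry metric.
    """
    gt_step = np.diff(gt, axis=0)       # ground-truth per-frame motion
    pred_step = np.diff(pred, axis=0)   # estimated per-frame motion
    drift = np.linalg.norm(gt_step - pred_step, axis=1)  # per-step error
    distance = np.linalg.norm(gt_step, axis=1).sum()     # total path length
    return drift.sum() / distance

# Toy trajectory: 3 m straight line, with small errors in the estimate.
gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
pred = gt + np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0, 0], [0.2, 0, 0]])
print(avg_rel_trans_error(gt, pred))  # 0.2 m of drift over 3 m of travel
```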
  - Compute the depth accuracy:

    ```bash
    python scripts/depth/get_depth_acc.py \
        dataset/rgb/2022-04-13-Town06-0060-40-0 \
        <RESULT_DIR>/mpc/2022-04-13-Town06-0060-40-0/depth
    ```

    You are expected to see the following outputs (the exact numbers may differ):

    ```
    ...
    AbsRel for frame 002334.png: 0.139769047498703
    AbsRel for frame 002335.png: 0.16346190869808197
    AbsRel for frame 002336.png: 0.18522021174430847
    AbsRel for frame 002337.png: 0.20051079988479614
    AbsRel for frame 002338.png: 0.16257807612419128
    Mean AbsRel: 0.21011945605278015
    ```
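AbsRel is the standard absolute relative depth error, mean(|pred − gt| / gt) over valid pixels. A minimal sketch (assumptions: the exact validity mask and depth clipping in `get_depth_acc.py` may differ; `abs_rel` is a hypothetical helper):

```python
import numpy as np

def abs_rel(pred, gt, min_depth=1e-3):
    """Mean absolute relative error: mean(|pred - gt| / gt) over valid pixels."""
    mask = gt > min_depth                 # ignore invalid/zero-depth pixels
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

# Toy 2x2 depth maps in meters; the 0.0 ground-truth pixel is masked out.
gt = np.array([[10.0, 20.0], [5.0, 0.0]])
pred = np.array([[12.0, 18.0], [5.0, 1.0]])
print(abs_rel(pred, gt))  # mean of 0.2, 0.1, 0.0 -> approx. 0.1
```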