Please skip the installation steps if you are remotely accessing the machine we provided.
GoFS requires the libnvm kernel module to enable GPU direct access. Building the kernel module depends on a few packages, which you can install with:
sudo apt install libncurses-dev gawk flex bison openssl libssl-dev dkms libelf-dev libudev-dev libpci-dev libiberty-dev autoconf llvm
Compiling also requires the open-source NVIDIA driver. We use the nvidia-550.54.14 driver in our experiments.
# this line may change on different machines
cd /usr/src/nvidia-550.54.14
make && sudo make install
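To confirm that the open variant of the NVIDIA driver is installed rather than the proprietary one, you can inspect the module license: the open kernel modules report a Dual MIT/GPL license, while the proprietary modules report "NVIDIA".

```shell
# The open NVIDIA kernel modules declare MODULE_LICENSE("Dual MIT/GPL");
# the proprietary driver declares "NVIDIA".
modinfo nvidia | grep -i '^license'
```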
Now, enter the repository and compile the module and GoFS source:
mkdir build && cd build
cmake ..
make
cd module
make
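Once the module is built, it likely needs to be loaded before its driver can be bound to a device in the next step. A minimal sketch, assuming the built object is named libnvm_helper.ko (adjust to whatever `make` actually produced):

```shell
# Load the freshly built kernel module (the .ko filename is an assumption)
sudo insmod ./libnvm_helper.ko
# Confirm the module registered with the kernel
lsmod | grep libnvm
```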
Next, reserve SSDs for direct access by running the helper script:
cd scripts
# show all nvme devices on pcie
./pci_bind_helpper.sh
# you will see output like the following:
...
Slot: 0000:8e:00.0 Dev: Samsung Electronics Co Ltd -- Device a80c
Driver: nvme
NVMe device: /dev/nvme1n1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1 259:0 0 1.8T 0 disk
...
# select the SSD to detach and use in the evaluation:
./pci_bind_helpper.sh -u 0000:8e:00.0
# once detached, you can bind the libnvm module to the target SSD:
./pci_bind_helpper.sh -b 0000:8e:00.0=libnvm_helper
The helper script lists all NVMe devices attached to your current system. Before unbinding the NVMe driver, make sure the SSD you detach is not storing any important data and that it is formatted with an F2FS layout.
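A quick way to confirm both conditions before unbinding is to inspect the filesystem type and mount state with standard util-linux tools (the device name below is the example from the listing above):

```shell
dev=/dev/nvme1n1   # example device from the listing above
# Show filesystem type (FSTYPE should read f2fs) and any mountpoints
lsblk -f "$dev"
# findmnt exits non-zero when the device is not mounted anywhere
findmnt --source "$dev" || echo "$dev is not mounted; safe to unbind"
```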
Afterwards, you can check the bound SSD with:
ls /dev/
You should see the devices /dev/libnvm0, /dev/libnvm1, /dev/libnvm2...
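You can also confirm which kernel driver now owns the PCIe slot through sysfs (the slot address below is the example from above): the `driver` symlink under a device's sysfs entry points at its bound driver.

```shell
# Resolve the sysfs driver symlink for the PCIe slot;
# after binding, this should print libnvm_helper
basename "$(readlink /sys/bus/pci/devices/0000:8e:00.0/driver)"
```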
Run the following command to install the necessary workloads:
./setworkloads.sh
We are still waiting for VPN access to the machine we prepared to be assigned. TBD
After setting up the experiment environment, you can run specific benchmarks with the following command:
./run_artifact.sh
# or to run a specific workload
./run_artifact.sh -w ${workload}
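To run a subset of workloads back to back, a simple loop works. The workload names below are placeholders; substitute the names that `run_artifact.sh -w` actually accepts.

```shell
# Placeholder workload names; replace with the names your artifact defines
for workload in workload_a workload_b; do
    ./run_artifact.sh -w "$workload"
done
```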
## Experiment customization