Update tutorial #342
Conversation
Have you tested with py2 + py3?
@mdouze Yes, python2.7 and python3.6.
I hope you don't mind that I throw in a few comments on the C++ examples. Feel free to disregard them if they are not deemed important enough for this context. As code is copied all the time, I would say that providing examples with good practices is a plus.
tutorial/cpp/4-GPU.cpp
Outdated
        xq[d * i] += i / 1000.;
    }

    faiss::gpu::GpuResources *res = new faiss::gpu::StandardGpuResources;
Does it make sense to allocate res with new here? It already seems to outlive all indexes in the example, so a plain object with automatic storage duration would suffice:
faiss::gpu::StandardGpuResources res;
faiss::GpuIndexFlatL2 index(&res, d);
std::vector<faiss::gpu::GpuResources*> res;
std::vector<int> devs;
for(int i = 0; i < ngpus; i++) {
    res.push_back(new faiss::gpu::StandardGpuResources);
A similar reasoning could be applied here, but alas, it would require another vector to keep the pointers to the resource objects. Other than that, a vector of ngpus GPU resources can even be created in place, without a loop. I can't think of a better way to do this, unfortunately. This is what it would look like:
std::vector<faiss::gpu::StandardGpuResources> owned_res(ngpus);
std::vector<faiss::gpu::GpuResources*> res(ngpus);
std::vector<int> devs(ngpus);
for (int i = 0; i < ngpus; i++) {
    res[i] = &owned_res[i];
    devs[i] = i;
}
tutorial/cpp/4-GPU.cpp
Outdated
int k = 4;

{ // sanity check: search 5 first vectors of xb
    long *I = new long[k * 5];
When relying on C++11, one can wrap these allocations in a unique_ptr. Not only would the memory be automatically freed when falling out of scope, it would also prevent leaks in case of exceptions.
#include <memory>
// ...
auto I = std::unique_ptr<long[]>(new long[k * 5]);
auto D = std::unique_ptr<float[]>(new float[k * 5]);
index.search(5, xb, k, D.get(), I.get());
tutorial/cpp/5-Multiple-GPUs.cpp
Outdated
    devs.push_back(i);
}

faiss::Index *index =
Were this in a more complicated context, we wouldn't want the index to leak. I believe a unique_ptr around the pointer to the GPU index works fine here.
auto index = std::unique_ptr<faiss::Index>(
faiss::gpu::index_cpu_to_gpu_multiple(
res,
devs,
new faiss::IndexFlat(d)
));
@Enet4 Thanks for your comments. Agreed that providing examples with good practices is a plus. I modified the example on one GPU to avoid using new. Regarding the (lack of) use of unique_ptr, [...]
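For reference, a minimal sketch of what the single-GPU example could look like without new, along the lines of the suggestion above (header paths, the wrapper function, and its parameters are assumptions for illustration and may differ from the actual tutorial code):

// Sketch only, not the exact tutorial code; header paths may vary by Faiss version.
#include <faiss/gpu/StandardGpuResources.h>
#include <faiss/gpu/GpuIndexFlat.h>

void single_gpu_sketch(int d, long nb, const float* xb) {
    faiss::gpu::StandardGpuResources res;       // automatic storage, no new/delete needed
    faiss::gpu::GpuIndexFlatL2 index(&res, d);  // res outlives the index in this scope
    index.add(nb, xb);                          // add the database vectors
}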
tutorial/cpp/4-GPU.cpp
Outdated
faiss::gpu::GpuResources *res = new faiss::gpu::StandardGpuResources;

faiss::GpuIndexFlatL2 index(res, d); // call constructor
inconsistent: resources is allocated with new, GpuIndexFlat is allocated on the stack
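One consistent option (a sketch echoing the earlier suggestion in this thread, not necessarily the final tutorial code) is to give both objects automatic storage:

faiss::gpu::StandardGpuResources res;        // no new: freed when it goes out of scope
faiss::gpu::GpuIndexFlatL2 index(&res, d);   // same storage style as the resources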
tutorial/cpp/5-Multiple-GPUs.cpp
Outdated
faiss::gpu::index_cpu_to_gpu_multiple( // call constructor
    res,
    devs,
    new faiss::IndexFlatL2(d)
This will not be deallocated, so it should be declared separately.
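A sketch of what declaring it separately could look like, assuming index_cpu_to_gpu_multiple only clones the CPU index and does not take ownership of it (variable names are illustrative):

faiss::IndexFlatL2 cpu_index(d);   // owned here, freed automatically at end of scope

faiss::Index *index =
    faiss::gpu::index_cpu_to_gpu_multiple(
        res,
        devs,
        &cpu_index
    );
// ... use index ..., then:
delete index;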
tutorial/cpp/5-Multiple-GPUs.cpp
Outdated
int k = 4;

{ // sanity check: search 5 first vectors of xb
Remove the sanity check; we will not comment on it twice.
tutorial/cpp/Makefile
Outdated
gpu: tutorial4 tutorial5

tutorial1: 1-Flat.cpp ../../libfaiss.a
Could you call the executables 1-Flat, 2-IVFFlat, etc.?
std::vector<faiss::gpu::GpuResources*> res;
std::vector<int> devs;
for(int i = 0; i < ngpus; i++) {
    res.push_back(new faiss::gpu::StandardGpuResources);
Use std::unique_ptr at the least if you keep it by pointer, so you don't need the free at the end.
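A sketch of what that could look like, keeping the owning pointers in unique_ptrs while still exposing raw pointers where the Faiss API expects them (variable names are illustrative):

#include <memory>
#include <vector>

std::vector<std::unique_ptr<faiss::gpu::StandardGpuResources>> owned_res;  // owns the objects
std::vector<faiss::gpu::GpuResources*> res;                                // non-owning view for the API
std::vector<int> devs;
for (int i = 0; i < ngpus; i++) {
    owned_res.emplace_back(new faiss::gpu::StandardGpuResources);
    res.push_back(owned_res.back().get());
    devs.push_back(i);
}
// no explicit delete needed: owned_res frees everything when it goes out of scope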
* Remove transitional print import.
* Add example for multiple GPUs in python.
* Add example on GPU in C++.
* Add example on multiple GPUs in C++.
* Add IVFFlat example on GPU in python.
* Add Makefile for C++ tutorial examples.
* Add IVF index on GPU example in C++.