Thanks for putting this together!
The variable naming suggests that the word2vec model trained inside the node2vec model is meant to be a skip-gram model. However, gensim's Word2Vec defaults to CBOW (`sg=0`). That said, one can easily pass `sg=1` to choose skip-gram instead of CBOW, for example as in the sketch below.
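Something like this (a minimal sketch, not this repo's actual call; `walks` is a hypothetical variable holding the random walks as lists of node-ID strings, and parameter names follow gensim 4.x, where older versions use `size` instead of `vector_size`):

```python
from gensim.models import Word2Vec

# walks: list of random walks, each a list of node IDs as strings
model = Word2Vec(
    walks,
    vector_size=128,  # embedding dimension
    window=10,        # context window size
    min_count=0,      # keep every node, even rare ones
    sg=1,             # 1 = skip-gram (as in the original node2vec); default 0 = CBOW
    workers=4,
)
```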
This is a tiny technical detail, but it makes the default setting of this code differ from the original node2vec, which explicitly trains a skip-gram model. Perhaps users should be made aware of that. Or did I miss something?