-
Hi @jrsperry. I am thinking of making this part a bit easier, so maybe a feature request can come out of this question (how to use sentences/tokens coming from outside in Spark NLP). Just out of curiosity, are those sentences/tokens generated by spaCy? I'm wondering which formats we could accept and simply run through a function to produce actual SENTENCE/TOKEN annotations in a Spark NLP pipeline, so users can continue with their own tokens.
-
I have an NLP stack where I already run sentence splitting and tokenization (in addition to many other tasks), and I would like to run the SpanBertCorefModel for coreference.
Since I already have sentences and tokens, is there a way to provide them as inputs to the model? This would help me line up the coreference results with the rest of my NLP data, as I wouldn't need to worry about doing any complicated alignment checks.
I've already tried the following in Python but got an error; I'm new to spark-nlp and took my best guess at what would make sense.
I have been able to successfully run the example for this model as detailed in the documentation, using the Pipeline. Thanks in advance.
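Until Spark NLP accepts external tokens directly, one workaround for the alignment concern above is to match tokens by character offsets: both tokenizers see the same raw text, so `(begin, end)` offsets form a stable key. Below is a minimal, framework-free sketch; `align_tokens` is a hypothetical helper, not part of the Spark NLP API, and the token tuples stand in for the `begin`/`end`/`result` fields of Spark NLP annotations.

```python
# Sketch of offset-based alignment between an external tokenization and the
# tokens a Spark NLP pipeline produces. Tokens are (begin, end, text) tuples
# with inclusive character offsets into the same raw text.

def align_tokens(external, spark_nlp_tokens):
    """Map each external token to the index of the Spark NLP token with
    identical character offsets, or None when the tokenizers disagree."""
    by_offset = {(b, e): i for i, (b, e, _) in enumerate(spark_nlp_tokens)}
    return [by_offset.get((b, e)) for b, e, _ in external]

text = "Alice met Bob."
ext = [(0, 4, "Alice"), (6, 8, "met"), (10, 12, "Bob"), (13, 13, ".")]
# Hypothetical pipeline output where the tokenizer kept the period attached:
snlp = [(0, 4, "Alice"), (6, 8, "met"), (10, 13, "Bob.")]

align_tokens(ext, snlp)  # -> [0, 1, None, None]
```

`None` entries flag exactly the spans where the two tokenizations diverge, so they can be reviewed instead of silently mismatched.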