Video binary object extraction #306

Closed

ruebot opened this issue Jan 31, 2019 · 3 comments

ruebot commented Jan 31, 2019

Using the image extraction process as a basis, our next set of binary object extractions will be documents. This issue is meant to focus specifically on video objects.

There may be some tweaks to this depending on the outcome of #298.

ruebot commented Aug 12, 2019

OK, I have a basic framework set up in the branch.

Pull down the branch and do something along these lines (I ran into memory issues, so I used my full Spark config):

rm -rf ~/.m2/repository/* && mvn clean install && rm -rf ~/.ivy2/* && \
~/bin/spark-2.4.3-bin-hadoop2.7/bin/spark-shell --master local\[10\] \
  --driver-memory 35g \
  --conf spark.network.timeout=10000000 \
  --conf spark.executor.heartbeatInterval=600s \
  --conf spark.driver.maxResultSize=4g \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.shuffle.compress=true \
  --conf spark.rdd.compress=true \
  --packages io.archivesunleashed:aut:0.17.1-SNAPSHOT \
  -i ~/306-pdf-audio-video-extract.scala

306-pdf-audio-video-extract.scala

import io.archivesunleashed._
import io.archivesunleashed.df._

// Extract PDF binaries and write each one to disk, named by its extension.
val df_pdf = RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocites/1/*gz", sc).extractPDFDetailsDF()
val res_pdf = df_pdf.select($"bytes", $"extension").saveToDisk("bytes", "/home/nruest/Projects/au/sample-data/306-307-test/pdf", "extension")

// Same for audio binaries.
val df_audio = RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocites/1/*gz", sc).extractAudioDetailsDF()
val res_audio = df_audio.select($"bytes", $"extension").saveToDisk("bytes", "/home/nruest/Projects/au/sample-data/306-307-test/audio", "extension")

// And for video binaries.
val df_video = RecordLoader.loadArchives("/home/nruest/Projects/au/sample-data/geocites/1/*gz", sc).extractVideoDetailsDF()
val res_video = df_video.select($"bytes", $"extension").saveToDisk("bytes", "/home/nruest/Projects/au/sample-data/306-307-test/video", "extension")

sys.exit

I now have a whole lot of audio, PDF, and video files.

Considerations

  1. We need tests. We should have had some for PDF binary object extraction #302 🤷‍♂️
  2. Is this an OK implementation for getting the extension? It seems to have a really good success rate.
  3. How do we want to handle cases where we can't get an extension? Throw a conditional in the mix, and say UNKNOWN if it's null/empty? Do we want that stored in the DataFrame, or done on the fly in saveToDisk? (See the sketch after this list.)
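
On question 3, here's a minimal sketch of the on-the-fly option: rewrite a null/empty extension to UNKNOWN in the DataFrame before calling saveToDisk. The when/otherwise fallback and the UNKNOWN literal are my assumptions, not anything in the branch yet:

import org.apache.spark.sql.functions.{col, length, when}

// Hypothetical fallback: rewrite a null or empty extension to "UNKNOWN"
// before handing the DataFrame to saveToDisk.
val df_video_labeled = df_video.withColumn(
  "extension",
  when(col("extension").isNull || length(col("extension")) === 0, "UNKNOWN")
    .otherwise(col("extension"))
)
val res_video_labeled = df_video_labeled
  .select($"bytes", $"extension")
  .saveToDisk("bytes", "/home/nruest/Projects/au/sample-data/306-307-test/video", "extension")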

@jrwiebe @lintool @ianmilligan1 let me know what you think.

ianmilligan1 commented

Woohoo, this is looking great. Congrats @ruebot! I've tested locally on our CPP Sample Data and all the extractors are working on the data. Some fun PDFs and lots of weird political talk radio and interview clips.

FWIW, at least on this weird CPP collection subset from 2009, it's having trouble getting extensions for most videos (it found a few wmv files; the rest are all sans extension).

[Screenshot: Screen Shot 2019-08-12 at 5.41.57 PM]

I don't know the best route on your questions #2 and #3, so I'll leave those to the more qualified @jrwiebe and @lintool.

jrwiebe commented Aug 13, 2019

I'm not sure about this method of getting the extension. I think the reason @ianmilligan1 was getting so many videos without extensions is that you're deriving the extension from the URL alone. It's easy to think of examples where audio or video files are served from URLs that don't contain a file extension.

I think a better way to get the extension is from the MIME type. There isn't a 1:1 mapping between MIME types and extensions, so perhaps we combine this with URL analysis.

I've created a branch that implements this method, and I just finished running a test with it. It was working fine until the end, when it failed with a java.lang.OutOfMemoryError, so some thought about resource use is warranted here.
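
For illustration, the heart of a MIME-based lookup with Tika would be something along these lines. This is a sketch under my assumptions, not necessarily what the branch does; the getExtension signature and the commons-io URL fallback are mine:

import org.apache.commons.io.FilenameUtils
import org.apache.tika.mime.MimeTypes

// Ask Tika's MIME registry for the preferred extension (e.g. ".mp4" for
// "video/mp4"); fall back to whatever extension the URL itself carries.
def getExtension(mimeType: String, url: String): String = {
  val tikaExt =
    try { MimeTypes.getDefaultMimeTypes.forName(mimeType).getExtension }
    catch { case _: Exception => "" }
  if (tikaExt.nonEmpty) tikaExt.stripPrefix(".")
  else FilenameUtils.getExtension(url) // "" when the URL has no extension
}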


(Aside: If you look at the branch's commit history you'll see I modified the POM. The shading I did that somehow resolved #302 did not actually relocate (i.e., rename) commons-compress as intended. Evidently some other change allowed our tests to pass, but I was getting that NoSuchMethodError related to commons-compress again when I tested my getExtension method. Now we're relocating for real, as verified by unzipping the JAR and seeing shaded/org/apache/tika/tika-parsers/ paths.)
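
For anyone following along, a commons-compress relocation in the maven-shade-plugin configuration looks roughly like this; it's a sketch of the mechanism (the shaded. prefix matches the paths above), not the exact stanza from the branch:

<!-- maven-shade-plugin <configuration>: rewrite the commons-compress
     package (and bytecode references to it) into a shaded namespace so
     it can't clash with another copy on the classpath. -->
<relocations>
  <relocation>
    <pattern>org.apache.commons.compress</pattern>
    <shadedPattern>shaded.org.apache.commons.compress</shadedPattern>
  </relocation>
</relocations>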

ruebot added a commit that referenced this issue Aug 20, 2019
- Address #190
- Address #259
- Address #302
- Address #303
- Address #304
- Address #305
- Address #306
- Address #307
ianmilligan1 pushed a commit that referenced this issue Aug 21, 2019
* Add binary extraction DataFrames to PySpark.
- Address #190
- Address #259
- Address #302
- Address #303
- Address #304
- Address #305
- Address #306
- Address #307
- Resolves #350 
- Update README