A tool to populate missing EXIF metadata (like `DateTimeOriginal` or geolocation) in a Google Photos Takeout export, using Google's JSON metadata.
- Quick Start
- How does this tool work?
- Background
- Structure of Google Takeout Export
- How to download Google Takeout content
- Supported file types
- How are media files matched to JSON sidecar files?
- Disclaimer?
```sh
npm i && npm run start -- --inputFolder="~/takeout" --outputFolder="~/output" --errorOutputFolder="~/error" --mockProcess
```
The tool takes in four parameters:
- an `inputFolder` directory path containing the extracted Google Takeout. Required.
- an `outputFolder` directory path where processed files will be moved to.
- an `errorOutputFolder` directory path where images with bad EXIF data that fail to process will be moved to.
- a `mockProcess` boolean parameter which allows the script to run on your files without any modification. It can help you make sure everything goes well.
The `inputFolder` needs to be a single directory containing an extracted zip from Google Takeout. As described in the section above, it is important that the zip has been extracted into a directory (this tool doesn't extract zips for you) and that it is a single folder containing the whole Takeout (or, if it comes from multiple archives, that they have been properly merged together).
The tool will do the following:
- Find all "media files" with one of the supported extensions (see "Supported file types" below) from the (nested) `inputFolder` folder structure.
- For each "media file":
  a. Look for a corresponding sidecar JSON metadata file (see the section below for more on this) and, if found, read the `photoTakenTime` field
  b. Copy the media file to the output directory (if `outputFolder` is present)
  c. If the file supports EXIF (e.g. JPEG images), read the EXIF metadata and write the `DateTimeOriginal` and geolocation fields if they do not already have values
  d. Update the file modification date to the `photoTakenTime` found in the JSON metadata or in the EXIF metadata
  e. If an error occurs whilst processing the file, copy it to the directory specified in the `errorOutputFolder` argument (if present), so that it can be inspected manually or removed
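To make that flow concrete, here is a minimal sketch of the per-file loop in TypeScript. It is illustrative only: the helper names (`findJsonSidecar`, `readPhotoTakenTime`, `writeExifDatesIfMissing`) are hypothetical stand-ins, not the tool's actual API.

```ts
import { promises as fs } from "fs";
import * as path from "path";

// Hypothetical helpers standing in for the tool's real logic.
declare function findJsonSidecar(mediaPath: string): Promise<string | undefined>;
declare function readPhotoTakenTime(sidecarPath: string): Promise<number | undefined>;
declare function writeExifDatesIfMissing(filePath: string, unixSeconds: number): Promise<void>;

async function processMediaFile(
  mediaPath: string,
  outputFolder?: string,
  errorOutputFolder?: string
): Promise<void> {
  try {
    // a. Look for the sidecar JSON and read photoTakenTime (Unix seconds).
    const sidecarPath = await findJsonSidecar(mediaPath);
    const photoTakenTime = sidecarPath ? await readPhotoTakenTime(sidecarPath) : undefined;

    // b. Copy the media file to the output directory, if one was given.
    let targetPath = mediaPath;
    if (outputFolder) {
      targetPath = path.join(outputFolder, path.basename(mediaPath));
      await fs.copyFile(mediaPath, targetPath);
    }

    // c. For EXIF-capable files, fill in DateTimeOriginal / geolocation
    //    only when those fields are missing (details omitted here).
    if (photoTakenTime && /\.jpe?g$/i.test(targetPath)) {
      await writeExifDatesIfMissing(targetPath, photoTakenTime);
    }

    // d. Set the file modification time from photoTakenTime.
    if (photoTakenTime) {
      await fs.utimes(targetPath, photoTakenTime, photoTakenTime);
    }
  } catch {
    // e. On failure, copy the file aside so it can be inspected manually.
    if (errorOutputFolder) {
      await fs.copyFile(mediaPath, path.join(errorOutputFolder, path.basename(mediaPath)));
    }
  }
}
```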
I wrote this tool to help me overcome some issues that I had when trying to make use of photos exported from Google Photos using Google Takeout.
Whilst it is great that I was able to use Google Takeout to extract all of my stored images from Google Photos at once, I found that some images were landing in the wrong place due to missing `DateTimeOriginal` EXIF timestamps.
This tool aims to eliminate some of those issues by reading the `photoTakenTime` timestamp from the JSON metadata files that are included in the Google Takeout export and using it to:
- set a meaningful modification date on the file itself
- populate the `DateTimeOriginal` field in the EXIF metadata if this field is not already set
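As a rough illustration of that mapping (not the tool's exact code): `photoTakenTime` in the sidecar JSON is a Unix timestamp in seconds, while EXIF `DateTimeOriginal` expects a `YYYY:MM:DD HH:MM:SS` string, so the conversion looks something like this:

```ts
// Hedged sketch: convert a photoTakenTime value (Unix seconds, as found in
// the Takeout JSON) into the string format used by EXIF DateTimeOriginal.
function toExifDateTime(unixSeconds: number): string {
  const d = new Date(unixSeconds * 1000);
  const pad = (n: number) => String(n).padStart(2, "0");
  return (
    `${d.getFullYear()}:${pad(d.getMonth() + 1)}:${pad(d.getDate())} ` +
    `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`
  );
}

// toExifDateTime(1599030000) -> "2020:09:02 07:00:00" in a UTC environment.
```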
At the time of writing (October 2020), Google Takeout provides you with one or more zip files, structured in a way that is fairly unintuitive and tricky to use directly.
Extracting the zip, you might find something similar to this (It's OK if your folder structure looks different):
```
Extracted Takeout Zip
  Google Photos
    2006-01-01
      IMG0305.jpg.json
    2020-09-01
      IMG1001.jpg
      IMG1001.jpg.json
    2020-09-02
      IMG1002.jpg.json
      metadata.json
    2020-09-04
      IMG1003.jpg
      IMG1003.json
    SomeAlbumName
      IMG1004.jpg
      IMG1004.jpg.json
      IMG1005.jpg
      IMG1005.json
  archive_browser.html
```
There are some interesting challenges to note here:
- Each zip contains folders for certain dates and/or album names. These folders contain a mixture of image files and JSON metadata files. The JSON sidecar files include, amongst other things, a useful `photoTakenTime` property.
- The date-based folders don't always contain perfect pairs of images and JSON files; sometimes you get JSON files without a corresponding image. In the case that the export was split across multiple zips, I'm not sure whether there is any guarantee that the images & JSON files will always be co-located within the same export.
- The naming convention for the JSON files seems inconsistent and has some interesting edge cases. For an image named `IMG123.jpg`, sometimes you get `IMG123.jpg.json` but sometimes it's just `IMG123.json`.
- From what I can tell, the embedded metadata (e.g. EXIF / IPTC) in the image files is not updated if changes are made within Google Photos, for example if the dates are updated using the Google Photos UI. Instead, Google's metadata comes out in the accompanying JSON files.
- Whilst most of my images contained reasonable EXIF timestamps for the time they were taken (written by the phone's camera), a small number did not. My guess is that these images originated from other sources (e.g. they were shared with me or imported into the library by other means, and the source did not include a timestamp in the EXIF metadata)
- Some file formats such as GIFs or MP4 videos don't have this metadata and thus also get sorted into the wrong place when run through tools that organise images based on the metadata timestamps.
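For reference, the fields this tool cares about in the sidecar JSON typically look something like the sketch below. The exact shape can vary between exports, so treat this as an approximation based on observed files rather than an official schema.

```ts
// Approximate shape of the relevant parts of a Takeout JSON sidecar.
// Timestamps are Unix seconds encoded as strings; geoData may be absent
// or zeroed when no location was recorded.
interface TakeoutSidecar {
  title: string; // e.g. "IMG1001.jpg"
  photoTakenTime: {
    timestamp: string; // e.g. "1598950800"
    formatted: string; // human-readable form
  };
  geoData?: {
    latitude: number;
    longitude: number;
    altitude: number;
  };
}
```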
The first step to using this tool is to request & download a Google Takeout export. At the time of writing, the steps to do this are:
- Visit https://takeout.google.com/
- Deselect all products and then tick `Google Photos`
- Click `All photo albums included`.
- Keep all of the date-based albums selected. Deselect any "Hangout: *" albums unless you specifically want to include images from chats.
- Important: If you have custom albums (ones with non-date names), deselect these because the images will already be referenced by the date-based albums. If you don't do this you will end up with duplicates.
- Click OK and move to the next step
- Select "Export once"
- Under "File type & size" I recommend increasing the file size to 50GB. Important: If your collection is larger than this (or you need to export it as multiple smaller archives) then you will need to merge the resultant folders together manually before using this tool. If you do this, be sure to merge the contents of any directories with the same name, rather than overwriting them.
- Click "Create Export", wait for a link to be sent by email and then download the zip file
- Extract the zip file into a directory. The path of this directory will be what we pass into the tool as the `inputFolder`.
In order to avoid touching files that are not photos or videos, this tool will only process files whose extensions are whitelisted in the configuration options. Any other files will be ignored and not included in the output.
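As a rough illustration (the actual configuration format and default values may differ), such a whitelist could look like this:

```ts
// Hypothetical example of an extension whitelist; the tool's real
// configuration keys and defaults may differ.
const supportedMediaExtensions = [".jpg", ".jpeg", ".heic", ".png", ".gif", ".mp4", ".mov"];

function isSupportedMediaFile(fileName: string): boolean {
  const lower = fileName.toLowerCase();
  return supportedMediaExtensions.some((ext) => lower.endsWith(ext));
}
```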
The Google Takeout file/folder structure has some interesting inconsistencies/quirks which make it tricky to work with.
This tool makes a best-effort attempt to work with this structure. Read on to get a feel for how it works.
Imagine you have an image named `foo.jpg`. In the same directory, we also expect to find a JSON file containing metadata about the image. The pattern for how this is named seems inconsistent, but in general we seem to get either `foo.json` or `foo.jpg.json`. This tool will look for either of those two files.
It appears that images that were edited using Google Photos (e.g. rotated or edited on a mobile device) don't get their own JSON sidecar files.
For example, for the image file `foo-edited.jpg` we get neither `foo-edited.json` nor `foo-edited.jpg.json`. Instead we must rely on the JSON file from the original image.
To work around this, for any images with a suffix of `-edited`, this tool will ignore that suffix and look for the JSON file of the original image, e.g. `foo.json` or `foo.jpg.json`.
Files with numbered suffixes (e.g. "foo(1).jpg") follow a different pattern for the naming of JSON files
I found that some of my images were named with a numeric suffix, such as `foo(1).jpg`.
Counter-intuitively, the corresponding JSON file doesn't follow the same pattern. Instead of `foo(1).json` or `foo(1).jpg.json`, it is instead named `foo.jpg(1).json`.
To support that, this tool will also check for file names that have a numbered suffix in brackets and, where that is the case, look for a JSON file following the pattern above.
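Putting those rules together, the sidecar lookup can be thought of as generating a short list of candidate JSON file names and using the first one that exists on disk. The sketch below is illustrative and only encodes the naming rules described above; it is not the tool's exact implementation.

```ts
import * as path from "path";

// Illustrative sketch of the sidecar-matching rules described above.
// Given a media file name, produce the JSON file names worth checking,
// in order of preference.
function sidecarCandidates(mediaFileName: string): string[] {
  const ext = path.extname(mediaFileName);      // ".jpg"
  let base = path.basename(mediaFileName, ext); // "foo", "foo-edited", "foo(1)"

  // Edited copies ("foo-edited.jpg") reuse the original image's sidecar.
  base = base.replace(/-edited$/i, "");

  // Numbered duplicates: "foo(1).jpg" pairs with "foo.jpg(1).json".
  const numbered = base.match(/^(.*)\((\d+)\)$/);
  if (numbered) {
    const [, stem, n] = numbered;
    return [`${stem}${ext}(${n}).json`];
  }

  // Normal case: either "foo.jpg.json" or "foo.json".
  return [`${base}${ext}.json`, `${base}.json`];
}

// Examples:
// sidecarCandidates("foo.jpg")        -> ["foo.jpg.json", "foo.json"]
// sidecarCandidates("foo-edited.jpg") -> ["foo.jpg.json", "foo.json"]
// sidecarCandidates("foo(1).jpg")     -> ["foo.jpg(1).json"]
```

Checking candidates in this order means the more specific `foo.jpg.json` form wins whenever both files happen to be present.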
This tool was only written for the purpose of solving my own personal requirements.
I decided to make this public on GitHub because:
- it was useful for me, so maybe it'll be useful for others in the future
- future me might be thankful if I ever need to do this again
With that said, please bear in mind that your mileage may vary. I'm sure it's far from perfect, so if you choose to use it please proceed with caution and be careful to verify the results (i.e. with the `mockProcess` parameter)! I hope it's helpful.